Phased Array System Toolbox™
User's Guide
R2020b
Phased Arrays
ULA Array Elements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Array Element Responses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-3
Signal Delay Between Array Elements . . . . . . . . . . . . . . . . . . . . . . . 2-4
Steering Vector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-5
Array Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2-6
Reception of Plane Wave Across Array . . . . . . . . . . . . . . . . . . . . . . . 2-8
Pulses of Rectangular Waveform . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-3
Transmitter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Transmitter Object . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-25
Phase Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4-27
Beamforming
5
Beamforming Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Conventional Beamforming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-2
Optimal and Adaptive Beamforming . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5-3
Direction-of-Arrival Estimation
6
Beamscan Direction-of-Arrival Estimation . . . . . . . . . . . . . . . . . . . . . . . . . 6-2
Space-Time Adaptive Processing (STAP)
7
Angle-Doppler Response . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7-2
Benefits of Visualizing Angle-Doppler Response . . . . . . . . . . . . . . . . . . . . 7-2
Angle-Doppler Response of Stationary Array to Stationary Target . . . . . . . 7-2
Angle-Doppler Response to Stationary Target at Moving Array . . . . . . . . . 7-4
Detection
8
Neyman-Pearson Hypothesis Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Purpose of Hypothesis Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8-2
Support for Neyman-Pearson Hypothesis Testing . . . . . . . . . . . . . . . . . . . 8-2
Threshold for Real-Valued Signal in White Gaussian Noise . . . . . . . . . . . . 8-2
Threshold for Two Pulses of Real-Valued Signal in White Gaussian Noise . . . . . . . 8-4
Threshold for Complex-Valued Signals in Complex White Gaussian Noise . . . . . . . 8-4
Support for Range-Doppler Processing . . . . . . . . . . . . . . . . . . . . . . . . . . 8-29
Range-Speed Response Pattern of Target . . . . . . . . . . . . . . . . . . . . . . . . 8-30
Coordinate Systems and Motion Modeling
10
Rectangular Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Definitions of Coordinates . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-2
Notation for Vectors and Points . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Orthogonal Basis and Euclidean Norm . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Orientation of Coordinate Axes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-3
Rotations and Rotation Matrices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10-4
Using Polarization
11
Polarized Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
Introduction to Polarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-2
Linear and Circular Polarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-3
Elliptic Polarization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-6
Linear and Circular Polarization Bases . . . . . . . . . . . . . . . . . . . . . . . . . . 11-9
Sources of Polarized Fields . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-12
Scattering Cross-Section Matrix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11-18
Polarization Loss Due to Field and Receiver Mismatch . . . . . . . . . . . . . 11-22
Model Radar Transmitting Polarized Radiation . . . . . . . . . . . . . . . . . . . 11-24
Code Generation
14
Code Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-2
Code Generation Use and Benefits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-2
Limitations Specific to Phased Array System Toolbox . . . . . . . . . . . . . . . 14-3
General Limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14-5
Limitations for System Objects that Require Dynamic Memory Allocation . . . . . . . 14-9
Functions and System Objects Supported for C/C++ Code Generation . . . . . . . 14-14
Simulink Examples
15
Convert Azimuth and Elevation to Broadside Angle . . . . . . . . . . . . . . . . . 15-2
RF Propagation
16
Access TIREM Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16-2
Featured Examples
17
Antenna Array Analysis with Custom Radiation Pattern . . . . . . . . . . . . . 17-2
Simultaneous Range and Speed Estimation Using MFSK Waveform . 17-143
Automotive Adaptive Cruise Control Using FMCW and MFSK Technology . . . . . . . 17-379
Simulating a Polarimetric Radar Return for Weather Observation . . 17-584
Search and Track Scheduling for Multifunction Phased Array Radar 17-617
Phased Arrays
1
• The operating frequency range of the antenna using the FrequencyRange property.
• Whether the response of the antenna is backbaffled at azimuth angles outside the interval
[–90,90] degrees, using the BackBaffled property.
You can determine the voltage response of the isotropic antenna element at specified frequencies and
angles by executing the System object.
fc = 1e9;
antenna = phased.IsotropicAntennaElement(...
'FrequencyRange',[300e6 1e9],'BackBaffled',false);
pattern(antenna,fc,[-180:180],[-90:90],'CoordinateSystem','polar',...
'Type','power')
Isotropic Antenna Element
Using the antenna pattern method, plot the antenna response at zero degrees elevation for all
azimuth angles at 1 GHz.
pattern(antenna,1e9,[-180:180],0,'CoordinateSystem','rectangular',...
'Type','powerdb')
1 Antenna and Microphone Elements
Setting the BackBaffled property to true restricts the antenna response to azimuth angles in the
interval [-90,90] degrees. In this case, plot the antenna response in three dimensions.
antenna.BackBaffled = true;
pattern(antenna,fc,[-180:180],[-90:90],'CoordinateSystem','polar',...
'Type','power')
antenna = phased.IsotropicAntennaElement(...
'FrequencyRange',[8e9 12e9],'BackBaffled',true);
respfreqs = [6:4:14]*1e9;
respazangles = -100:50:100;
anresp = antenna(respfreqs,respazangles)
anresp = 5×3
0 0 0
0 1 0
0 1 0
0 1 0
0 0 0
The antenna response in anresp is a matrix having row dimension equal to the number of azimuth
angles in respazangles and column dimension equal to the number of frequencies in respfreqs.
The response voltages in the first and last columns of anresp are zero because those columns contain
the antenna response at 6 GHz and 14 GHz, respectively. These frequencies lie outside the antenna
operating frequency range. Similarly, the first and last rows of anresp contain all zeros because the
BackBaffled property is set to true and those rows contain the antenna response at azimuth
angles outside of [-90,90] degrees.
To obtain the antenna response at nonzero elevation angles, input the angles to the object as a 2-by-M matrix where each column is an angle in the form [azimuth;elevation].
release(antenna)
respelangles = -90:45:90;
respangles = [respazangles; respelangles];
anresp = antenna(respfreqs,respangles)
anresp = 5×3
0 1 0
0 1 0
0 1 0
0 1 0
0 1 0
Notice that anresp(1,2) and anresp(5,2) represent the antenna voltage response at the azimuth-elevation angle pairs (-100,-90) and (100,90) degrees. Although these azimuth angles lie in the baffled region, the responses are unity because the elevation angles equal ±90 degrees. At these elevation angles, the elevation cut degenerates to a point.
Cosine Antenna Element
The object returns the field response (also called field pattern)

f(az,el) = cos^m(az)·cos^n(el)

In this expression, az and el are the azimuth and elevation angles, and the exponents m and n are nonnegative real numbers specified by the CosinePower property.
The response is defined for azimuth and elevation angles between –90° and 90°, inclusive, and is
always positive. There is no response at the backside of a cosine antenna. The cosine response
pattern achieves a maximum value of 1 at 0° azimuth and 0° elevation. Larger exponent values
narrow the response pattern of the element and increase the directivity.
The power response (or power pattern) is the squared value of the field response.
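To see the narrowing effect quantitatively, you can compute the half-power half-angle of a cos^m field pattern, that is, the angle at which the power pattern cos^(2m) drops to one half. The following NumPy sketch (an illustration of the math above, not toolbox code) shows the half-angle shrinking as the exponent grows:

```python
import numpy as np

def half_power_half_angle(m):
    """Angle in degrees where the power pattern cos(theta)^(2m) falls to 0.5."""
    return float(np.degrees(np.arccos(0.5 ** (1.0 / (2.0 * m)))))

# Half-power half-angles for exponents 1, 2, and 4:
# about 45.0, 32.8, and 23.5 degrees -- larger exponents narrow the beam.
widths = [half_power_half_angle(m) for m in (1, 2, 4)]
```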
When you use the cosine antenna element, you specify the exponents of the cosine pattern using the
CosinePower property and the operating frequency range of the antenna using the
FrequencyRange property.
theta = -90:.01:90;
costh1 = cosd(theta);
costh2 = costh1.^2;
plot(theta,costh1)
hold on
plot(theta,costh2,'r')
hold off
sCos = phased.CosineAntennaElement(...
'FrequencyRange',[1 10]*1e9,'CosinePower',[2 2]);
pattern(sCos,5e9,[-180:180],[-90:90],'CoordinateSystem',...
'Polar','Type','powerdb')
Tip You can import a radiation pattern that uses u/v coordinates or φ/θ angles, instead of azimuth/
elevation angles. To use such a pattern with phased.CustomAntennaElement, first convert your
pattern to azimuth/elevation form. Use uv2azelpat or phitheta2azelpat to do the conversion.
For an example, see Antenna Array Analysis with Custom Radiation Pattern.
For your custom antenna element, the antenna response depends on the frequency response and
radiation pattern. Specifically, the frequency and spatial responses are interpolated separately using
nearest-neighbor interpolation and then multiplied together to produce the total response. To avoid
interpolation errors, the range of azimuth angles should include +/– 180 degrees and the range of
elevation angles should include +/– 90 degrees.
Custom Antenna Element
Calculate the antenna response at the azimuth-elevation pairs (-30,0) and (-45,0) at 500 MHz. The response magnitudes are:

0.7071
1.0000
The following code illustrates how nearest-neighbor interpolation is used to find the antenna voltage
response in the two directions. The total response is the product of the angular response and the
frequency response.
g = interp2(deg2rad(antenna.AzimuthAngles),...
deg2rad(antenna.ElevationAngles),...
db2mag(antenna.MagnitudePattern),...
deg2rad(ang(1,:))', deg2rad(ang(2,:))','nearest',0);
h = interp1(antenna.FrequencyVector,...
db2mag(antenna.FrequencyResponse),500e6,'nearest',0);
antresp = h.*g;
disp(mag2db(antresp))
-3.0103
0
Omnidirectional Microphone
In this section...
“Support for Omnidirectional Microphones” on page 1-12
“Backbaffled Omnidirectional Microphone” on page 1-12
• The operating frequency range of the microphone using the FrequencyRange property.
• Whether the response of the microphone is baffled at azimuth angles outside the interval [–90,90]
degrees using the BackBaffled property.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
freq = 1e3;
microphone = phased.OmnidirectionalMicrophoneElement(...
'BackBaffled',true,'FrequencyRange',[20 20e3]);
pattern(microphone,freq,[-180:180],[-90:90],'CoordinateSystem','polar','Type','power');
In many applications, you need to examine the microphone directionality, or polar pattern.
To obtain an azimuth cut, set the elevation argument of the pattern method to a single angle such
as zero.
pattern(microphone,freq,[-180:180],0,'CoordinateSystem','polar','Type','power');
To obtain an elevation cut, set the azimuth argument of the pattern method to a single angle such
as zero.
pattern(microphone,freq,0,[-90:90],'CoordinateSystem','polar','Type','power');
Obtain the microphone magnitude response at the specified azimuth angles and frequencies. By
default, when the ang argument is a single row, the elevation angles are 0 degrees. Note that the
response is unity at all azimuth angles and frequencies, as expected.
freqs = [100:250:1e3];
ang = [-90:30:90];
response = microphone(freqs,ang)
response = 7×4
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
1 1 1 1
• Frequencies where you specify your response using the FrequencyVector property.
• Response corresponding to the specified frequencies using the FrequencyResponse property.
• Frequencies and angles at which the microphone’s polar pattern is measured.
• Magnitude response of the microphone.
sCustMic = phased.CustomMicrophoneElement;
sCustMic.PolarPatternFrequencies = [500 1000];
sCustMic.PolarPattern = mag2db([...
0.5+0.5*cosd(sCustMic.PolarPatternAngles);...
0.6+0.4*cosd(sCustMic.PolarPatternAngles)]);
pattern(sCustMic,[500,800],[-180:180],0,'Type','powerdb')
Custom Microphone Element
See Also
Related Examples
• “Microphone ULA Array” on page 2-10
The simplest polarized antenna is the dipole antenna, which consists of a split length of wire coupled at the middle to a coaxial cable. The simplest dipole, from a mathematical perspective, is the Hertzian dipole, in which the length of the wire is much shorter than a wavelength. A diagram of the short dipole antenna of length L appears in the next figure. This antenna is fed by a coaxial feed that splits into two equal-length wires of length L/2. The current, I, moves along the z-axis and is assumed to be the same at all points in the wire.
Er = 0

EH = 0

EV = −(iZ0IL/(2λ)) (e^(−ikr)/r) cos(el)
The next example computes the vertical and horizontal polarization components of the field. The
vertical component is a function of elevation angle and is axially symmetric. The horizontal
component vanishes everywhere.
Short-dipole Antenna Element
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
antenna = phased.ShortDipoleAntennaElement(...
'FrequencyRange',[1,2]*1e9,'AxisDirection','Z');
Compute the antenna response. Because the elevation angle argument to antenna is restricted to
±90°, compute the responses for 0° azimuth and then for 180° azimuth. Combine the two responses
in the plot. The operating frequency of the antenna is 1.5 GHz.
el = [-90:90];
az = zeros(size(el));
fc = 1.5e9;
resp = antenna(fc,[az;el]);
az = 180.0*ones(size(el));
resp1 = antenna(fc,[az;el]);
figure(1)
subplot(121)
polar(el*pi/180.0,abs(resp.V.'),'b')
hold on
polar((el+180)*pi/180.0,abs(resp1.V.'),'b')
str = sprintf('%s\n%s','Vertical Polarization','vs Elevation Angle');
title(str)
hold off
subplot(122)
polar(el*pi/180.0,abs(resp.H.'),'b')
hold on
polar((el+180)*pi/180.0,abs(resp1.H.'),'b')
str = sprintf('%s\n%s','Horizontal Polarization','vs Elevation Angle');
title(str)
hold off
Crossed-dipole Antenna Element
You can use a crossed-dipole antenna to generate circularly polarized radiation. The crossed-dipole
antenna consists of two identical but orthogonal short-dipole antennas that are phased 90° apart. A
diagram of the crossed dipole antenna appears in the following figure. The electric field created by a
crossed-dipole antenna constructed from a y-directed short dipole and a z-directed short dipole has
the form
Er = 0

EH = −(iZ0IL/(2λ)) (e^(−ikr)/r) cos(az)

EV = (iZ0IL/(2λ)) (e^(−ikr)/r) (sin(el)sin(az) + i·cos(el))
The polarization ratio EV/EH, when evaluated along the x-axis, is just −i, which means that the polarization is exactly RHCP along the x-axis. It is predominantly RHCP when the observation point is close to the x-axis. Moving away from the x-axis, the field becomes a mixture of LHCP and RHCP polarizations. Along the −x-axis, the field is LHCP polarized. The figure illustrates that, for a point near the x-axis, the field is primarily RHCP.
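You can check the −i ratio directly from the field expressions above. This NumPy sketch (an illustration, not toolbox code) drops the common amplitude factor iZ0IL·e^(−ikr)/(2λr), which cancels in the ratio, and evaluates EV/EH along the ±x-axis:

```python
import numpy as np

def pol_ratio(az_deg, el_deg):
    """EV/EH for the crossed dipole, with the common amplitude factor dropped."""
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    EH = -np.cos(az)
    EV = np.sin(el) * np.sin(az) + 1j * np.cos(el)
    return EV / EH

# Along the +x-axis (az = 0, el = 0) the ratio is -1j: pure RHCP.
# Along the -x-axis (az = 180, el = 0) the ratio is +1j: pure LHCP.
ratio_pos_x = pol_ratio(0.0, 0.0)
ratio_neg_x = pol_ratio(180.0, 0.0)
```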
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
fc = 1.5e9;
antenna = phased.CrossedDipoleAntennaElement('FrequencyRange',[1,2]*1e9);
Compute the left-handed and right-handed circular polarization components from the antenna
response.
az = [-180:180];
el = zeros(size(az));
resp = antenna(fc,[az;el]);
cfv = pol2circpol([resp.H.';resp.V.']);
clhp = cfv(1,:);
crhp = cfv(2,:);
polar(az*pi/180.0,abs(clhp))
hold on
polar(az*pi/180.0,abs(crhp))
title('LHCP and RHCP vs Azimuth Angle')
legend('LHCP','RHCP')
hold off
When you use an Antenna Toolbox™ antenna in a Phased Array System Toolbox™ System object™, the antenna response is normalized by the maximum value of the antenna output over all directions. The maximum value is obtained by finding the maximum of the antenna pattern sampled every five degrees in azimuth and elevation.
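The normalization step can be sketched in a few lines of NumPy (a hypothetical cosine-shaped pattern is used purely for illustration; this is not the toolbox implementation):

```python
import numpy as np

# Sample the magnitude pattern on the 5-degree grid used for normalization.
az = np.arange(-180.0, 181.0, 5.0)
el = np.arange(-90.0, 91.0, 5.0)

# Hypothetical element magnitude pattern: a cosine taper in elevation,
# constant in azimuth (rows are elevation, columns are azimuth).
G = np.cos(np.deg2rad(el))[:, None] * np.ones(az.size)

# Normalize by the maximum response over all sampled directions.
Gnorm = G / np.abs(G).max()
```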
Start by creating a uniform linear array (ULA) of crossed-dipole antennas from Phased Array System Toolbox. Crossed-dipole antennas are used to produce circularly polarized signals. In this case, set the operating frequency to 2 GHz and draw the power pattern. Use the pattern method of the phased.CrossedDipoleAntennaElement System object™.
fc = 2.0e9;
crosseddipoleantenna = phased.CrossedDipoleAntennaElement('FrequencyRange',[500,2500]*1e6);
pattern(crosseddipoleantenna,fc,[-180:180],0,...
'Type','powerdb')
Using Antenna Toolbox with Phased Array Systems
Then, create an 11-element ULA of crossed-dipole antennas. Specify the element spacing to be
0.4 wavelengths. Taper the array using a Taylor window. Then, draw the array pattern as a function of
azimuth at 0 degrees elevation. Use the pattern method of the phased.ULA System object.
c = physconst('LightSpeed');
elemspacing = 0.4*c/fc;
nElements = 11;
array1 = phased.ULA('Element',crosseddipoleantenna,'NumElements',nElements,...
'ElementSpacing',elemspacing,'Taper',taylorwin(nElements)');
pattern(array1,fc,[-180:180],0,'PropagationSpeed',c,...
'Type','powerdb')
Next, create a uniform linear array (ULA) using the helix antenna from Antenna Toolbox. Helix
antennas also produce circularly polarized radiation. Helix antennas are created using the helix
function.
First, specify a 4-turn helix antenna having a 28.0 mm radius and 1.2 mm width. The TiltAxis and Tilt properties let you orient the antenna with respect to the local coordinate system. In this example, orient the main response axis (MRA) along the x-axis to coincide with the MRA of the crossed-dipole array. By default, the MRA of the antenna points in the z-direction. Rotate the MRA around the y-axis by 90 degrees.
radius = 0.028;
width = 1.2e-3;
nturns = 4;
helixantenna = helix('Radius',radius,'Width',width,'Turns',nturns,...
'TiltAxis',[0,1,0],'Tilt',90);
You can view the shape of the helix antenna using the show function from Antenna Toolbox.
show(helixantenna)
Then, draw the azimuth antenna pattern at 0 degrees elevation at the operating frequency of 2 GHz.
Use the pattern function from Antenna Toolbox.
pattern(helixantenna,fc,[-180:180],0,...
'Type','powerdb')
Next, construct an 11-element tapered uniform linear array of helix antennas with elements spaced at
0.4 wavelengths. Taper the array with a Taylor window. You can use the same phased.ULA System
object from Phased Array System Toolbox to create this array.
array2 = phased.ULA('Element',helixantenna,'NumElements',nElements,...
'ElementSpacing',elemspacing,'Taper',taylorwin(nElements)');
Plot the array pattern as a function of azimuth using the ULA pattern method which has the same
syntax as the Antenna Toolbox pattern function.
pattern(array2,fc,[-180:180],0,'PropagationSpeed',c,...
'Type','powerdb')
Compare Patterns
Comparing the two array patterns shows that they are similar along the mainlobe. The backlobe of the helix antenna array pattern is almost 15 dB smaller than that of the crossed-dipole array. This reduction is due to the ground plane of the helix antenna, which suppresses backlobe transmission.
2
array = phased.ULA('NumElements',4,'ElementSpacing',0.5);
viewArray(array);
Uniform Linear Array
You can return the coordinates of the array sensor elements in the form [x;y;z] by using the
getElementPosition method. See “Rectangular Coordinates” on page 10-2 for toolbox
conventions.
sensorpos = getElementPosition(array);
sensorpos is a 3-by-4 matrix with each column representing the position of a sensor element. Note that the y-axis is the array axis. The positive x-axis is the array look direction (0 degrees broadside). The elements are symmetric with respect to the phase center of the array.
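For reference, the element positions that getElementPosition returns for this array follow directly from the ULA convention (a NumPy sketch of the geometry, not toolbox code; elements are centered on the phase center along the y-axis):

```python
import numpy as np

N, d = 4, 0.5                                   # four elements, 0.5 m spacing
y = (np.arange(N) - (N - 1) / 2) * d            # y-axis is the array axis
pos = np.vstack([np.zeros(N), y, np.zeros(N)])  # columns are [x; y; z] per element

# y is [-0.75, -0.25, 0.25, 0.75]: symmetric about the phase center at the origin.
```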
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
2 Array Geometries and Analysis
Specify isotropic antennas for the array elements. Then, specify a 4-element ULA. Obtain the response by executing the System object™.
antenna = phased.IsotropicAntennaElement(...
'FrequencyRange',[3e8 1e9]);
array = phased.ULA('NumElements',4,'ElementSpacing',0.5,...
'Element',antenna);
freq = 1e9;
azangles = -180:180;
response = array(freq,azangles);
response is a 4-by-361 matrix where each column contains the element responses at one azimuth angle. Matrix rows correspond to the four elements. Because the elements of the ULA are isotropic antennas, response is a matrix of ones.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Construct a 4-element ULA using the value-only syntax and compute the delay for a signal incident on the array from -90° azimuth and 0° elevation. Delay units are in seconds.
array = phased.ULA(4);
delay = phased.ElementDelay('SensorArray',array);
tau = delay([-90;0])
tau = 4×1
10^(-8) ×
-0.2502
-0.0834
0.0834
0.2502
tau is a 4-by-1 vector of delays with respect to the phase center of the array, which is the origin of
the local coordinate system (0;0;0). See “Global and Local Coordinate Systems” on page 10-17 for a
description of global and local coordinate systems. Negative delays indicate that the signal arrives at
an element before reaching the phase center of the array. Because the waveform arrives from an
azimuth angle of -90°, the signal arrives at the first and second elements of the ULA before it reaches
the phase center resulting in negative delays for these elements.
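The sign convention can be reproduced independently. With element positions p_n and a unit vector u pointing from the array toward the source, the delay relative to the phase center is τ_n = −(p_n·u)/c. This NumPy sketch (a numerical cross-check, not toolbox code) reproduces the tau values above:

```python
import numpy as np

c = 299792458.0                                  # propagation speed in m/s
y = np.array([-0.75, -0.25, 0.25, 0.75])         # element y-positions of phased.ULA(4)
pos = np.vstack([np.zeros(4), y, np.zeros(4)])
az, el = np.deg2rad(-90.0), 0.0                  # source at -90 deg azimuth, 0 deg elevation
u = np.array([np.cos(el) * np.cos(az),
              np.cos(el) * np.sin(az),
              np.sin(el)])                       # unit vector toward the source
tau = -(u @ pos) / c
# tau is approximately [-0.2502, -0.0834, 0.0834, 0.2502] * 1e-8 seconds,
# matching the phased.ElementDelay output above.
```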
If the signal is incident on the array at 0° broadside from a far-field source, the signal illuminates all
elements of the array simultaneously resulting in zero delay.
tau = delay([0;0])
tau = 4×1
0
0
0
0
If the incident signal is an acoustic pressure waveform propagating at the speed of sound, you can
calculate the element delays by setting the PropagationSpeed property to 340 m/s. This value is a
typical speed of sound at sea level.
delay = phased.ElementDelay('SensorArray',array,...
'PropagationSpeed',340);
tau = delay([90;0])
tau = 4×1
0.0022
0.0007
-0.0007
-0.0022
Steering Vector
The steering vector represents the relative phase shifts for the incident far-field waveform across the
array elements. You can determine these phase shifts with the phased.SteeringVector object.
For a single carrier frequency, the steering vector for a ULA consisting of N elements is:
exp(−j2πfτ1)
exp(−j2πfτ2)
exp(−j2πfτ3)
⋮
exp(−j2πfτN)
where τn denotes the time delay relative to the array phase center at the n-th array element.
Compute the steering vector for a 4-element ULA at an operating frequency of 1 GHz. Assume that
the waveform is incident on the array from 45° azimuth and 10° elevation.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
fc = 1e9;
array = phased.ULA(4);
steervec = phased.SteeringVector('SensorArray',array);
sv = steervec(fc,[45;10])
sv = 4×1 complex
-0.0495 + 0.9988i
-0.8742 + 0.4856i
-0.8742 - 0.4856i
-0.0495 - 0.9988i
You can also compute the steering vector with the following equivalent code.
delay = phased.ElementDelay('SensorArray',array);
tau = delay([45;10]);
exp(-1i*2*pi*fc*tau)
-0.0495 + 0.9988i
-0.8742 + 0.4856i
-0.8742 - 0.4856i
-0.0495 - 0.9988i
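The steering vector can also be cross-checked outside the toolbox by combining the delay formula τ_n = −(p_n·u)/c with sv = exp(−j2πfτ). This NumPy sketch (an independent numerical check, not toolbox code) reproduces the values above:

```python
import numpy as np

c = 299792458.0
fc = 1e9
y = np.array([-0.75, -0.25, 0.25, 0.75])         # element y-positions of phased.ULA(4)
pos = np.vstack([np.zeros(4), y, np.zeros(4)])
az, el = np.deg2rad(45.0), np.deg2rad(10.0)
u = np.array([np.cos(el) * np.cos(az),
              np.cos(el) * np.sin(az),
              np.sin(el)])                       # unit vector toward the source
tau = -(u @ pos) / c                             # element delays
sv = np.exp(-1j * 2 * np.pi * fc * tau)          # steering vector
# sv is approximately [-0.0495+0.9988j, -0.8742+0.4856j,
#                      -0.8742-0.4856j, -0.0495-0.9988j].
```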
Array Response
To obtain the array response, which is a weighted combination of the steering vector elements for each incident angle, use the phased.ArrayResponse System object.
Construct a four-element ULA with elements spaced at 0.25 m. Obtain the array magnitude response
(absolute value of the complex-valued array response) for azimuth angles (-180:180) at 1 GHz. Then,
plot the normalized magnitude response in dB.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
fc = 1e9;
array = phased.ULA('NumElements',4,'ElementSpacing',0.25);
azangles = -180:180;
response = phased.ArrayResponse('SensorArray',array);
resp = abs(response(fc,azangles));
plot(azangles,mag2db((resp/max(resp))))
grid on
title('Azimuth Cut at Zero Degrees Elevation')
xlabel('Azimuth Angle (degrees)')
Visualize the array response using the pattern method. Create a 3-D plot of the response in UV
space; other plotting options are available.
pattern(array,fc,[-1:.01:1],[-1:.01:1],'CoordinateSystem','uv',...
'PropagationSpeed',physconst('Lightspeed'))
The collectPlaneWave method modulates input signals by the element of the steering vector
corresponding to an array element. Stated differently, collectPlaneWave accounts for phase shifts
across elements in the array based on the angle of arrival. However, collectPlaneWave does not
account for the response of individual elements in the array.
Simulate the reception of a 100-Hz sine wave modulated by a carrier frequency of 1 GHz at a 4-element ULA. Assume the angle of arrival of the signal is (-90;0).
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
array = phased.ULA(4);
t = unigrid(0,0.001,0.01,'[)');
x = cos(2*pi*100*t)';
y = collectPlaneWave(array,x,[-90;0],1e9,physconst('LightSpeed'));
steervec = phased.SteeringVector('SensorArray',array);
sv = steervec(1e9,[-90;0]);
y1 = x*sv.';
See Also
Related Examples
• “Microphone ULA Array” on page 2-10
Create a microphone element with a cardioid response pattern. Use the default values of the
FrequencyVector property.
Plot the polar pattern of the microphone at 0.5 kHz and 1 kHz.
pattern(microphone,freq,[-180:180],0,'CoordinateSystem','polar','Type','powerdb',...
'Normalize',true);
array = phased.ULA('NumElements',4,'ElementSpacing',0.5,...
'Element',microphone);
Microphone ULA Array
pattern(array,freq,[-180:180],0,'CoordinateSystem','polar','Type','powerdb',...
'Normalize',true,'PropagationSpeed',340.0);
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Create and view a six-element URA with two elements along the y-axis and three elements along the
z-axis. Use a rectangular lattice, with the default spacing of 0.5 meters along both the row and
column dimensions of the array. Each element is an isotropic antenna element, which is the default
element type for a URA.
fc = 1e9;
array = phased.URA([3,2]);
viewArray(array)
Uniform Rectangular Array
pos = getElementPosition(array);
Calculate the element delays for signals arriving from +45° and -45° azimuth and 0° elevation.
delay = phased.ElementDelay('SensorArray',array);
ang = [45,-45];
tau = delay(ang);
The first column of tau contains the element delays for the signal incident on the array from +45°
azimuth. The second column contains the delays for the signal arriving from -45°. The delays are
equal in magnitude but opposite in sign, as expected.
The following code simulates the reception of two sinusoidal waves arriving from far field sources.
One signal is a 100-Hz sine wave arriving from 20° azimuth and 10° elevation. The second signal is a
300-Hz sine wave arriving from -30° azimuth and 5° elevation.
t = linspace(0,1,1000);
x1 = cos(2*pi*100*t)';
x2 = cos(2*pi*300*t)';
ang1 = [20;10];
ang2 = [-30;5];
recsig = collectPlaneWave(array,[x1 x2],[ang1 ang2],fc);
Each column of recsig represents the received signals at the corresponding element of the URA.
You can plot the array response using the pattern method.
pattern(array,fc,[-180:180],[-90:90],'PropagationSpeed',physconst('LightSpeed'),...
'CoordinateSystem','rectangular','Type','powerdb')
Conformal Array
In this section...
“Support for Arrays with Custom Geometry” on page 2-15
“Create Default Conformal Array” on page 2-15
“Uniform Circular Array Created from Conformal Array” on page 2-15
“Custom Antenna Array” on page 2-18
When you use phased.ConformalArray, you must specify these aspects of the array:
array = phased.ConformalArray
array =
phased.ConformalArray with properties:
You can create a uniform circular array from a conformal array or any other array shape. Assume an operating frequency of 400 MHz. Tune the array by specifying the arclength between the elements to be 0.5λ, where λ is the wavelength corresponding to the operating frequency. Array elements lie in the x-y plane. Element normal directions are set to (ϕn, 0), where ϕn is the azimuth angle of the nth array element.
Set the number of elements and the operating frequency of the array.
N = 60;
fc = 400e6;
theta = 360/N;
thetarad = deg2rad(theta);
arclength = 0.5*(physconst('LightSpeed')/fc);
radius = arclength/thetarad;
Compute the element azimuth angles. Azimuth angles must lie in the range (−180°, 180°).
ang = (0:N-1)*theta;
ang(ang >= 180.0) = ang(ang >= 180.0) - 360.0;
array = phased.ConformalArray;
array.ElementPosition = [radius*cosd(ang);...
radius*sind(ang);...
zeros(1,N)];
array.ElementNormal = [ang;zeros(1,N)];
viewArray(array)
pattern(array,fc,[-180:180],0,'PropagationSpeed',physconst('LightSpeed'),...
'CoordinateSystem','polar','Type','powerdb','Normalize',true)
Define the custom antenna element and plot its radiation pattern.
az = -180:180;
el = -90:90;
fc = 3e8;
elresp = cosd(el);
antenna = phased.CustomAntennaElement('AzimuthAngles',az,...
'ElevationAngles',el,...
'MagnitudePattern',repmat(elresp',1,numel(az)));
pattern(antenna,3e8,0,el,'CoordinateSystem','polar','Type','powerdb',...
'Normalize',true);
Define the locations and normal directions of the elements. All elements lie in the z = 0 plane. The elements are located at (1;0;0), (0;1;0), and (0;-1;0) meters. The element normal azimuth angles are 0°, 120°, and -120°, respectively. All normal elevation angles are 0°.
xpos = [1 0 0];
ypos = [0 1 -1];
zpos = [0 0 0];
normal_az = [0 120 -120];
normal_el = [0 0 0];
array = phased.ConformalArray('Element',antenna,...
'ElementPosition',[xpos; ypos; zpos],...
'ElementNormal',[normal_az; normal_el]);
viewArray(array,'ShowNormals',true)
view(0,90)
pattern(array,fc,az,el,'CoordinateSystem','polar','Type','powerdb',...
'Normalize',true,'PropagationSpeed',physconst('LightSpeed'))
Definition of Subarrays
In Phased Array System Toolbox software, a subarray is an accessible subset of array elements. When
you use an array that contains subarrays, you can access measurements from the subarrays but not
from the individual elements. Similarly, you can perform processing at the subarray level but not at
the level of the individual elements. As a result, the system has fewer degrees of freedom than if you
controlled the system at the level of the individual elements.
You can use two approaches to define subarrays:
• Define one subarray, and then build a larger array by arranging copies of the subarray. The subarray can be a ULA, URA, or conformal array. The copies are identical, except for their location and orientation. You can arrange the copies spatially in a grid or a custom layout.
When you use this approach, you build the large array by creating a
phased.ReplicatedSubarray System object. This object stores information about the subarray
and how the copies of it are arranged to form the larger array.
• Define an array, and then partition it into subarrays. The array can be a ULA, URA, or conformal
array. The subarrays do not need to be identical. A given array element can be in more than one
subarray, leading to overlapped subarrays.
When you use this approach, you partition your array by creating a phased.PartitionedArray
System object. This object stores information about the array and its subarray structure.
Subarrays Within Arrays
You can use arrays that contain subarrays with these System objects:
• phased.AngleDopplerResponse
• phased.ArrayGain
• phased.ArrayResponse
• phased.Collector
• phased.ConstantGammaClutter
• phased.MVDRBeamformer
• phased.PhaseShiftBeamformer
• phased.Radiator
• phased.STAPSMIBeamformer
• phased.SteeringVector
• phased.SubbandPhaseShiftBeamformer
• phased.WidebandCollector
Create a 2-by-3 uniform rectangular array with the default 0.5 m element spacing.
array = phased.URA('Size',[2 3]);
Plot the positions of the array elements in the yz-plane (all x-coordinates are zero). Include labels that indicate the numbering of the elements. The numbering is important for selecting which elements are included in each subarray.
viewArray(array,'ShowIndex','All')
Create and view an array consisting of three 2-element linear subarrays each parallel to the z-axis.
Use the indices from the plot to form the matrix for the SubarraySelection property. The
getSubarrayPosition method returns the phase centers of the three subarrays.
subarray1 = [1 1 0 0 0 0; 0 0 1 1 0 0; 0 0 0 0 1 1];
partitionedarray1 = phased.PartitionedArray('Array',array,...
'SubarraySelection',subarray1);
viewArray(partitionedarray1)
subarray1pos = getSubarrayPosition(partitionedarray1)
subarray1pos = 3×3
0 0 0
-0.5000 0 0.5000
0 0 0
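With unit selection weights, the phase center of each subarray is just the average position of its elements. This Python/NumPy sketch reproduces the subarray1pos result, assuming the 2-by-3 URA element layout and numbering used in this example:

```python
import numpy as np

# Element positions of a 2-by-3 URA with 0.5 m spacing: columns along y,
# rows along z, elements numbered down each column (assumed layout).
y = [-0.5, -0.5, 0.0, 0.0, 0.5, 0.5]
z = [0.25, -0.25, 0.25, -0.25, 0.25, -0.25]
pos = np.array([[0.0] * 6, y, z])            # 3-by-6; rows are x, y, z

# SubarraySelection matrix from the example: three 2-element subarrays.
sel = np.array([[1, 1, 0, 0, 0, 0],
                [0, 0, 1, 1, 0, 0],
                [0, 0, 0, 0, 1, 1]], dtype=float)

# Each subarray phase center is the mean of its selected element positions.
centers = (pos @ sel.T) / sel.sum(axis=1)
print(centers)
# Columns are (0,-0.5,0), (0,0,0), (0,0.5,0), matching subarray1pos.
```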
Create and view another array consisting of two 3-element linear subarrays parallel to the y-axis.
Using the getSubarrayPosition method, find the phase centers of the two subarrays.
subarray2 = [0 1 0 1 0 1; 1 0 1 0 1 0];
partitionedarray2 = phased.PartitionedArray('Array',array,...
'SubarraySelection',subarray2);
viewArray(partitionedarray2)
subarraypos2 = getSubarrayPosition(partitionedarray2)
subarraypos2 = 3×2
0 0
0 0
-0.2500 0.2500
array = phased.ULA('NumElements',4);
Plot the positions of the array elements (filled circles) and the phase centers of the subarrays (unfilled
circles). The elements lie in the yz-plane because all the x-coordinates are zero.
viewArray(replsubarray);
hold on;
subarraypos = getSubarrayPosition(replsubarray);
sx = subarraypos(1,:);
sy = subarraypos(2,:);
sz = subarraypos(3,:);
scatter3(sx,sy,sz,'ro')
hold off
antenna = phased.CosineAntennaElement('CosinePower',1);
array = phased.ULA('NumElements',4,'Element',antenna);
Create a larger array by arranging three copies of the linear array. Define the phase centers and
normal directions of the three copies explicitly.
replsubarray = phased.ReplicatedSubarray('Subarray',array,...
'Layout','Custom',...
'SubarrayPosition',subarray_pos,...
'SubarrayNormal',[120 0;-120 0;0 0].');
Plot the positions of the array elements (blue) and the phase centers (red) of the subarrays. The plot
is in the xy-plane because all the z-coordinates are zero.
viewArray(replsubarray,'ShowSubarray',[])
hold on
scatter3(subarray_pos(1,:),subarray_pos(2,:),...
subarray_pos(3,:),'ro')
hold off
See Also
Related Examples
• “Subarrays in Phased Array Antennas” on page 17-110
Plot Array Directivity Using Sensor Array Analyzer App
When you type sensorArrayAnalyzer from the command line or select the app from the App
Toolstrip, an interactive window opens. The default window shows the geometry of a 4-element
uniform linear array. You can then select various options to analyze different arrays, other element
types, geometry, and directivity.
As an example, use the app to create a 4-by-4 uniform rectangular array of cosine antenna elements
and then show the array directivity. Space the elements 0.4 wavelengths apart.
Signal Radiation
In this section...
“Support for Modeling Signal Radiation” on page 3-2
“Radiate Signal with Uniform Linear Array” on page 3-2
To radiate a signal from a sensor array, use phased.Radiator. When you use this object, you must specify these aspects of the radiator:
• Sensor array (or single element) that radiates the signal
• Operating frequency
• Propagation speed of the signal
• Whether to combine the radiated signals from all elements in the far field
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
array = phased.ULA('NumElements',2,'ElementSpacing',0.5);
radiator = phased.Radiator('Sensor',array,...
'OperatingFrequency',300e6,...
'PropagationSpeed',physconst('LightSpeed'),...
'CombineRadiatedSignals',true);
Create a signal to radiate and propagate the signal to the far field at an angle of (45°,0°).
x = [1 -1 1 -1]';
y = radiator(x,[45;0]);
The far-field signal results from multiplying the signal by the array pattern. The array pattern is the product of the array element pattern and the array factor. For a uniform linear array, the array factor is the sum of the entries of the steering vector, which you can compute using phased.SteeringVector.
The following code produces an identical far-field signal by explicitly using the array factor.
array = phased.ULA('NumElements',2,'ElementSpacing',0.5);
steervec = phased.SteeringVector('SensorArray',array,...
'IncludeElementResponse',true);
sv = steervec(300e6,[45;0]);
y1 = x*sum(sv);
Compare y1 to y.
disp(y1-y)
0
0
0
0
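The same superposition can be checked numerically. This Python sketch builds the two-element steering vector from the standard narrowband model (an illustration, not toolbox output; the element positions and phase-sign convention are assumptions that do not affect the magnitude):

```python
import cmath
import math

# Two-element ULA with 0.5 m spacing at 300 MHz (wavelength ~ 1 m),
# far-field direction: azimuth 45 deg, elevation 0 deg.
c = 299792458.0
lam = c / 300e6
az = math.radians(45.0)
positions = [-0.25, 0.25]      # element positions along the array axis, m

# Narrowband steering vector: one phase term per element.
sv = [cmath.exp(1j * 2 * math.pi * p * math.sin(az) / lam) for p in positions]

# For symmetric element positions the phases conjugate-pair, so the
# combined far-field scaling sum(sv) is real: 2*cos(2*pi*0.25*sin(az)/lam).
phi = 2 * math.pi * 0.25 * math.sin(az) / lam
assert abs(sum(sv) - 2 * math.cos(phi)) < 1e-12

x = [1, -1, 1, -1]
y = [xn * sum(sv) for xn in x]   # same as y1 = x*sum(sv) in the MATLAB code
```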
3 Signal Radiation and Collection
Signal Collection
In this section...
“Support for Modeling Signal Collection” on page 3-4
“Narrowband Collector for Uniform Linear Array” on page 3-5
“Narrowband Collector for a Single Antenna Element” on page 3-6
“Wideband Signal Collection” on page 3-7
In many array processing applications, the ratio of the signal's bandwidth to the carrier frequency is small, typically no more than a few percent. Examples include radar applications where a pulse waveform is modulated by a carrier frequency in the microwave range.
These are narrowband signals. For narrowband signals, you can express the steering vector as a
function of a single frequency, the carrier frequency. For narrowband signals, the
phased.Collector object is appropriate.
In other applications, the narrowband assumption is not justified. In many acoustic and sonar
applications, the wave impinging on the array is a pressure wave that is unmodulated. It is not
possible to express the steering vector as a function of a single frequency. In these cases, the subband
approach implemented in phased.WidebandCollector is appropriate. The wideband collector
decomposes the input into subbands and computes the steering vector for each subband.
When you use the narrowband collector, phased.Collector, you must specify these aspects of the collector:
• Sensor array (or single element) that collects the signal
• Operating frequency
• Propagation speed of the signal
When you use the wideband collector, phased.WidebandCollector, you must specify these aspects
of the collector:
• Carrier frequency
• Whether the signal is demodulated to the baseband
In the preceding case, the collector object multiplies the input signal, x, by the corresponding
element of the steering vector for the two-element ULA. The following code produces the response in
an equivalent manner. First, create the ULA and then create the steering vector. Compare with the
previous result.
array = phased.ULA('NumElements',2,'ElementSpacing',0.5);
steeringvec = phased.SteeringVector('SensorArray',array);
sv = steeringvec(3e8,[45;0]);
x =[1 -1 1 -1]';
y1 = x*sv.'
y1 = 4×2 complex
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
fc = 1e9;
antenna = phased.CustomAntennaElement;
antenna.AzimuthAngles = -180:180;
antenna.ElevationAngles = -90:90;
antenna.MagnitudePattern = mag2db(...
repmat(cosd(antenna.ElevationAngles)',1,numel(antenna.AzimuthAngles)));
resp = antenna(fc,[0;45])
resp = 0.7071
pattern(antenna,fc,0,[-90:90],'Type','powerdb')
The antenna voltage response at 0° azimuth and 45° elevation is cos(45°) as expected.
Assume a narrowband sinusoidal input incident on the antenna element from 0° azimuth and 45°
elevation. Determine the signal collected at the element.
collector = phased.Collector('Sensor',antenna,'OperatingFrequency',fc);
x =[1 -1 1 -1]';
y = collector(x,[0;45])
y = 4×1
0.7071
-0.7071
0.7071
-0.7071
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
x = randn(10,1);
c = 340.0;
microphone = phased.OmnidirectionalMicrophoneElement(...
'FrequencyRange',[20 20e3],'BackBaffled',true);
collector = phased.WidebandCollector('Sensor',microphone,...
'PropagationSpeed',c,'SampleRate',50e3,...
'ModulatedInput',false);
y = collector(x,[30;10]);
Rectangular Pulse Waveforms
a(t) = 1 for 0 ≤ t ≤ τ; a(t) = 0 otherwise

x(t) = a(t)sin(ωct)

where ωc denotes the carrier frequency. Note that a(t) represents an on-off rectangular amplitude modulation of the carrier frequency. After demodulation, the complex envelope of x(t) is the real-valued rectangular pulse a(t) of duration τ seconds.
The rectangular pulse waveform has the following modifiable properties:
• Sampling rate
• Pulse duration
• Pulse repetition frequency
• Number of samples or pulses in each vector that represents the waveform
Construct a rectangular pulse waveform with a duration of 50 μs, a sample rate of 1 MHz, and a pulse
repetition frequency (PRF) of 10 kHz.
waveform = phased.RectangularWaveform('SampleRate',1e6,...
'PulseWidth',50e-6,'PRF',10e3);
Plot a single rectangular pulse by calling plot directly on the rectangular waveform variable. plot is
a method of phased.RectangularWaveform. This method produces an annotated graph of your
pulse waveform.
plot(waveform)
Find the bandwidth of the rectangular pulse using the bandwidth method.
bw = bandwidth(waveform)
bw = 20000
The bandwidth, bw, of a rectangular pulse in hertz is approximately the reciprocal of the pulse duration, 1/waveform.PulseWidth.
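The reciprocal relationship is worth making explicit. A quick numeric check (Python, for illustration):

```python
# A 50-microsecond unmodulated rectangular pulse has an approximate
# (first-null) bandwidth of 1/pulse_width, matching bw = 20000 above.
pulse_width = 50e-6
bw = 1.0 / pulse_width
assert round(bw) == 20000

# Halving the pulse width doubles the bandwidth.
assert round(1.0 / (pulse_width / 2)) == 40000
```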
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Create a rectangular pulse with a duration of 100 μs and a PRF of 1 kHz. Set the number of pulses in
the output equal to two.
waveform = phased.RectangularWaveform('PulseWidth',100e-6,...
'PRF',1e3,'OutputFormat','Pulses','NumPulses',2);
Make a copy of your rectangular pulse waveform and change the pulse width of the original waveform to 10 μs.
waveform2 = clone(waveform);
waveform.PulseWidth = 10e-6;
waveform and waveform2 now specify different rectangular pulses because you changed the pulse width of waveform.
Execute the System objects to return two pulses of your rectangular pulse waveforms.
y = waveform();
y2 = waveform2();
totaldur = 2*1/waveform.PRF;
totnumsamp = totaldur*waveform.SampleRate;
t = unigrid(0,1/waveform.SampleRate,totaldur,'[)');
subplot(2,1,1)
plot(t.*1000,real(y))
axis([0 totaldur*1e3 0 1.5])
title('Two 10-\musec duration pulses (PRF = 1 kHz)')
set(gca,'XTick',0:0.2:totaldur*1e3)
subplot(2,1,2)
plot(t.*1000,real(y2))
axis([0 totaldur*1e3 0 1.5])
xlabel('Milliseconds')
title('Two 100-\musec duration pulses (PRF = 1 kHz)')
set(gca,'XTick',0:0.2:totaldur*1e3)
Linear Frequency Modulated Pulse Waveforms
For a rectangular pulse, the duration of the transmitted pulse and the processed echo are effectively
the same. Therefore, the range resolution of the radar and the target detection capability are coupled
in an inverse relationship.
Pulse compression techniques enable you to decouple the duration of the pulse from its energy by
effectively creating different durations for the transmitted pulse and processed echo. Using a linear
frequency modulated pulse waveform is a popular choice for pulse compression.
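To see why pulse compression matters, compare the range resolution of an uncompressed pulse, cτ/2, with that of a compressed pulse of bandwidth B, c/(2B). This Python sketch uses illustrative numbers; the 100 μs pulse width and 200 kHz sweep echo the linear FM example values used elsewhere in this chapter:

```python
c = 299792458.0      # speed of light, m/s
tau = 100e-6         # transmitted pulse duration, s
B = 200e3            # swept (compressed) bandwidth, Hz

res_uncompressed = c * tau / 2    # ~15 km for the uncompressed pulse
res_compressed = c / (2 * B)      # ~750 m after pulse compression

# The improvement factor is the time-bandwidth product tau*B.
improvement = res_uncompressed / res_compressed
assert abs(improvement - tau * B) < 1e-9
assert round(tau * B) == 20
```

A 20-fold finer range resolution for the same transmitted pulse energy is exactly the decoupling described above.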
The increasing instantaneous frequency of an up-sweep linear FM waveform satisfies

(1/(2π)) dΘ(t)/dt = (β/τ)t

where β is the sweep bandwidth and τ is the pulse duration. The complex envelope of a linear FM pulse waveform with decreasing instantaneous frequency is:

x(t) = a(t)exp(−jπ(β/τ)(t² − 2τt))
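You can verify the sweep-rate relationship numerically by sampling the up-sweep envelope exp(jπ(β/τ)t²) and recovering the instantaneous frequency from the phase. A Python sketch with the example parameters used in this section (τ = 50 μs, β = 100 kHz, 1 MHz sampling; illustrative only):

```python
import cmath
import math

fs = 1e6          # sample rate, Hz
tau = 50e-6       # pulse duration, s
beta = 100e3      # sweep bandwidth, Hz
n = int(round(tau * fs))   # 50 samples across the pulse

# Up-sweep complex envelope: x(t) = exp(j*pi*(beta/tau)*t^2).
x = [cmath.exp(1j * math.pi * (beta / tau) * (k / fs) ** 2) for k in range(n)]

def wrap(d):
    # Wrap a phase difference into [-pi, pi); valid here because the
    # per-sample phase step stays below pi for these parameters.
    return (d + math.pi) % (2 * math.pi) - math.pi

phases = [cmath.phase(v) for v in x]
f_inst = [wrap(phases[k + 1] - phases[k]) * fs / (2 * math.pi)
          for k in range(n - 1)]

# The estimates lie on the line (beta/tau)*t evaluated at the midpoints.
for k, f in enumerate(f_inst):
    assert abs(f - (beta / tau) * (k + 0.5) / fs) < 1e-3
```

The recovered frequency ramps linearly from near 0 Hz to about β = 100 kHz over the 50 μs pulse, as the equation predicts.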
The linear FM pulse waveform has the following modifiable properties:
• Sample rate
• Duration of a single pulse
• Pulse repetition frequency
• Sweep bandwidth
• Sweep direction (up or down), corresponding to increasing and decreasing instantaneous
frequency
• Envelope, which describes the amplitude modulation of the pulse waveform. The envelope can be
rectangular or Gaussian.
• The rectangular envelope is:

a(t) = 1 for 0 ≤ t ≤ τ; a(t) = 0 otherwise

• The Gaussian envelope is:

a(t) = exp(−t²/τ²), t ≥ 0
• Number of samples or pulses in each vector that represents the waveform
Create a linear FM pulse with a sample rate of 1 MHz, a pulse duration of 50 μs with an increasing
instantaneous frequency, and a sweep bandwidth of 100 kHz. The pulse repetition frequency is 10
kHz and the amplitude modulation is rectangular.
waveform = phased.LinearFMWaveform('SampleRate',1e6,...
'PulseWidth',50e-6,'PRF',10e3,...
'SweepBandwidth',100e3,'SweepDirection','Up',...
'Envelope','Rectangular',...
'OutputFormat','Pulses','NumPulses',1);
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
waveform = phased.LinearFMWaveform('PulseWidth',100e-6,...
'SweepBandwidth',200e3,'PRF',4e3);
disp(waveform.PulseWidth*waveform.SweepBandwidth)
20
plot(waveform)
Execute the System object to obtain one full pulse repetition interval of the signal. Plot the real and imaginary parts.
y = waveform();
t = unigrid(0,1/waveform.SampleRate,1/waveform.PRF,'[)');
figure
subplot(2,1,1)
plot(t,real(y))
axis tight
title('Real Part')
subplot(2,1,2)
plot(t,imag(y))
xlabel('Time (s)')
title('Imaginary Part')
axis tight
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Generate samples of the waveform. Then create a 3-D surface plot of the ambiguity function of the waveform.
wav = waveform();
[afmag_lfm,delay_lfm,doppler_lfm] = ambgfun(wav,...
waveform.SampleRate,waveform.PRF);
surf(delay_lfm*1e6,doppler_lfm/1e3,afmag_lfm,...
'LineStyle','none')
axis tight
grid on
view([140,35])
colorbar
xlabel('Delay \tau (\mus)')
ylabel('Doppler f_d (kHz)')
title('Linear FM Pulse Waveform Ambiguity Function')
The surface has a narrow ridge that is slightly tilted. The tilt indicates better resolution in the zero
delay cut.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Create a rectangular waveform and a linear FM pulse waveform having the same duration and PRF.
Generate samples of each waveform.
rectwaveform = phased.RectangularWaveform('PRF',20e3);
lfmwaveform = phased.LinearFMWaveform('PRF',20e3);
xrect = rectwaveform();
xlfm = lfmwaveform();
[ambrect,delay] = ambgfun(xrect,rectwaveform.SampleRate,rectwaveform.PRF,...
'Cut','Doppler');
ambfm = ambgfun(xlfm,lfmwaveform.SampleRate,lfmwaveform.PRF,...
'Cut','Doppler');
subplot(211)
stem(delay,ambrect)
title('Autocorrelation of Rectangular Pulse')
axis([-5e-5 5e-5 0 1])
set(gca,'XTick',1e-5*(-5:5))
subplot(212)
stem(delay,ambfm)
xlabel('Delay (seconds)')
title('Autocorrelation of Linear FM Pulse')
axis([-5e-5 5e-5 0 1])
set(gca,'XTick',1e-5*(-5:5))
See Also
Related Examples
• “Waveform Analysis Using the Ambiguity Function” on page 17-149
Stepped FM Pulse Waveforms
Similar to linear FM pulse waveforms, stepped frequency waveforms are a popular pulse compression
technique. Using this approach enables you to increase the range resolution of the radar without
sacrificing target detection capability.
The stepped frequency pulse waveform has the following modifiable properties:
• Sample rate
• Pulse duration
• Pulse repetition frequency
• Size of the frequency step
• Number of frequency steps
Create a stepped FM waveform with five frequency steps of 20 kHz each.
waveform = phased.SteppedFMWaveform('SampleRate',1e6,...
'PulseWidth',50e-6,'PRF',10e3,...
'FrequencyStep',20e3,'NumSteps',5);
Use the bandwidth method to show that the bandwidth of the stepped FM pulse waveform equals
the product of the frequency step size and the number of steps.
bandwidth(waveform)
ans =
100000
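The bandwidth relationship is simply the product of the two properties. A quick check (Python, for illustration):

```python
# Stepped FM: total bandwidth = frequency step * number of steps,
# matching the bandwidth(waveform) result of 100000 Hz above.
frequency_step = 20e3
num_steps = 5
total_bw = frequency_step * num_steps
assert total_bw == 100000.0
```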
Because the OutputFormat property is set to 'Pulses' and the NumPulses property is set to one,
executing the System object returns one pulse repetition interval (PRI). The pulse duration within
that interval is set by the PulseWidth property. The signal in the remainder of the PRI consists of
zeros.
The frequency of the initial pulse is zero Hz (DC). Each time you execute the System object, the
frequency of the narrowband pulse increments by the value of the FrequencyStep property. If you
execute the System object more times than the value of the NumSteps property, the process repeats,
starting over with the DC pulse.
Execute the System object to return successively higher frequency pulses. Plot the pulses one by one
in the same figure window. Pause the loop to visualize the increment in frequency with each execution
of the System object. Execute the System object one more time than the number of pulses to
demonstrate that the process starts over with the DC pulse.
This figure shows the pulse plot for the last iteration of the loop.
t = unigrid(0,1/waveform.SampleRate,1/waveform.PRF,'[)');
for i = 1:waveform.NumSteps
plot(t,real(waveform()))
pause(0.5)
axis tight
end
plot(t,real(waveform()))
FMCW Waveforms
In this section...
“Benefits of Using FMCW Waveform” on page 4-16
“How to Create FMCW Waveforms” on page 4-16
“Double Triangular Sweep” on page 4-16
FMCW waveforms are common in automotive radar systems and ground-penetrating radar systems.
• Sample rate.
• Period and bandwidth of the FM sweep. These quantities can cycle through multiple values during
your simulation.
Tip To find targets up to a given maximum range, r, you can typically use a sweep period of
approximately 5*range2time(r) or 6*range2time(r). To achieve a range resolution of
delta_r, use a bandwidth of at least range2bw(delta_r).
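The two helper functions named in the tip reduce to simple formulas: range2time(r) is the two-way delay 2r/c, and range2bw(Δr) is the bandwidth c/(2Δr) needed for range resolution Δr. Here is a Python re-derivation (an illustration of the formulas, not a call into the toolbox; the r and Δr values are assumed):

```python
c = 299792458.0   # propagation speed (speed of light), m/s

def range2time(r):
    """Two-way propagation delay (s) to a target at range r (m)."""
    return 2.0 * r / c

def range2bw(delta_r):
    """Bandwidth (Hz) needed for range resolution delta_r (m)."""
    return c / (2.0 * delta_r)

r_max = 200.0                       # assumed maximum range, m
sweep_time = 5 * range2time(r_max)  # rule of thumb from the tip
delta_r = 1.0                       # assumed range resolution, m
bw = range2bw(delta_r)
```

For these numbers the sweep period is about 6.7 μs and the required bandwidth is about 150 MHz, in line with typical automotive FMCW designs.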
• Sweep shape. This shape can be sawtooth (up or down) or triangular.
Tip For moving targets, you can use a triangular sweep to resolve ambiguity between range and
Doppler.
phased.FMCWWaveform assumes that all frequency modulations are linear. For triangular sweeps,
the slope of the down sweep is the opposite of the slope of the up sweep.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Create an FMCW waveform object for which the SweepTime and SweepBandwidth properties are
vectors of length two. For each period, the waveform alternates between the pairs of corresponding
sweep time and bandwidth values.
st = [1e-3 1.1e-3];
bw = [1e5 9e4];
waveform = phased.FMCWWaveform('SweepTime',st,...
'SweepBandwidth',bw,'SweepDirection','Triangle',...
'SweepInterval','Symmetric','SampleRate',2e5,...
'NumSweeps',4);
Compute samples from four sweeps (two periods). In a triangular sweep, each period consists of an
up sweep and down sweep.
x = waveform();
[S,F,T] = spectrogram(x,32,16,32,waveform.SampleRate);
image(T,fftshift(F),fftshift(mag2db(abs(S))))
xlabel('Time (sec)')
ylabel('Frequency (Hz)')
Phase-Coded Waveforms
In this section...
“When to Use Phase-Coded Waveforms” on page 4-18
“How to Create Phase-Coded Waveforms” on page 4-18
Conversely, you might use another waveform instead of a phase-coded waveform in these situations:
• When signals have Doppler shifts, because phase-coded waveforms tend to perform poorly in that case
• When the hardware requirements for phase-coded waveforms are prohibitively expensive
After you create a phased.PhaseCodedWaveform object, you can plot the waveform using the plot
method of this class. You can also generate samples of the waveform using the step method.
For a full list of properties and methods, see the phased.PhaseCodedWaveform reference page.
Basic Radar Using Phase-Coded Waveform
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
waveform = phased.PhaseCodedWaveform('Code','Frank','NumChips',4,...
'ChipWidth',1e-6,'PRF',5e3,'OutputFormat','Pulses',...
'NumPulses',1);
Then, redefine the pulse width, tau, using the properties of the new waveform.
tau = waveform.ChipWidth*waveform.NumChips;
The remainder of the code is almost identical to the code in the original example and is presented here without comments. For a detailed explanation of how the code works, see the original "End-to-End Radar System" example.
Pd = 0.9;
Pfa = 1e-6;
numpulses = 10;
SNR = albersheim(Pd,Pfa,10);
maxrange = 1.5e4;
lambda = physconst('LightSpeed')/4e9;
Pt = radareqpow(lambda,maxrange,SNR,tau,'RCS',0.5,'Gain',20);
transmitter = phased.Transmitter('PeakPower',50e3,'Gain',20,...
'LossFactor',0,'InUseOutputPort',true,...
'CoherentOnTransmit',true);
radiator = phased.Radiator('Sensor',antenna,...
'PropagationSpeed',physconst('LightSpeed'),...
'OperatingFrequency',4e9);
collector = phased.Collector('Sensor',antenna,...
'PropagationSpeed',physconst('LightSpeed'),...
'Wavefront','Plane','OperatingFrequency',4e9);
receiver = phased.ReceiverPreamp('Gain',20,'NoiseFigure',2,...
'ReferenceTemperature',290,'SampleRate',1e6,...
'EnableInputPort',true,'SeedSource','Property','Seed',1e3);
channel = phased.FreeSpace(...
'PropagationSpeed',physconst('LightSpeed'),...
'OperatingFrequency',4e9,'TwoWayPropagation',false,...
'SampleRate',1e6);
T = 1/waveform.PRF;
txpos = transmitterplatform.InitialPosition;
rxsig = zeros(waveform.SampleRate*T,numpulses);
for n = 1:numpulses
[tgtpos,tgtvel] = targetplatform(T);
[tgtrng,tgtang] = rangeangle(tgtpos,txpos);
sig = waveform();
[sig,txstatus] = transmitter(sig);
sig = radiator(sig,tgtang);
sig = channel(sig,txpos,tgtpos,[0;0;0],tgtvel);
sig = target(sig);
sig = channel(sig,tgtpos,txpos,tgtvel,[0;0;0]);
sig = collector(sig,tgtang);
rxsig(:,n) = receiver(sig,~txstatus);
end
rxsig = pulsint(rxsig,'noncoherent');
t = unigrid(0,1/receiver.SampleRate,T,'[)');
rangegates = (physconst('LightSpeed')*t)/2;
plot(rangegates,rxsig)
hold on
xlabel('Meters'); ylabel('Power')
ylim = get(gca,'YLim');
plot([tgtrng,tgtrng],[0 ylim(2)],'r')
hold off
Waveforms with Staggered PRFs
Using a waveform with a staggered PRF offers these benefits:
• Removal of Doppler ambiguities, or blind speeds, where Doppler frequencies that are multiples of the PRF are aliased to zero
• Mitigation of the effects of jamming
To implement a staggered PRF, configure your waveform object using a vector instead of a scalar for
the PRF property value.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Plot Spectrogram Using Radar Waveform Analyzer App
When you type radarWaveformAnalyzer from the command line or select the app from the App
Toolstrip, an interactive window opens. The default window shows a rectangular waveform and its
spectrum. You can then select various options to analyze different waveforms.
radarWaveformAnalyzer
As an example, use the app to show the spectrogram of a continuous FMCW waveform.
You will then see a plot of the spectrogram of the signal.
Transmitter
In this section...
“Transmitter Object” on page 4-25
“Phase Noise” on page 4-27
Transmitter Object
The phased.Transmitter object lets you model key components of the radar equation including
the peak transmit power, the transmit gain, and a system loss factor. You can use
phased.Transmitter together with radareqpow, radareqrng, and radareqsnr, to relate the
received echo power to your transmitter specifications.
While the preceding functionality is important in applications dependent on amplitude such as signal
detectability, Doppler processing depends on the phase of the complex envelope. In order to
accurately estimate the radial velocity of moving targets, it is important that the radar operates in
either a fully coherent or pseudo-coherent mode. In the fully coherent, or coherent on transmit,
mode, the phase of the transmitted pulses is constant. Constant phase provides you with a reference
to detect Doppler shifts.
A transmitter that applies a random phase to each pulse creates phase noise that can obscure
Doppler shifts. If the components of the radar do not enable you to maintain constant phase, you can
create a pseudo-coherent, or coherent on receive radar by keeping a record of the random phase
errors introduced by the transmitter. The receiver can correct for these errors by modulation of the
complex envelope. The phased.Transmitter object enables you to model both coherent on
transmit and coherent on receive behavior.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Construct a transmitter with a peak transmit power of 1000 watts, a transmit gain of 20 decibels (dB), and a loss factor of 0 dB. Set the InUseOutputPort property to true to record the transmitter status. Pulse waveform values are scaled based on the peak transmit power and the ratio of the transmitter gain to the loss factor.
transmitter = phased.Transmitter('PeakPower',1e3,'Gain',20,...
'LossFactor',0,'InUseOutputPort',true)
transmitter =
phased.Transmitter with properties:
PeakPower: 1000
Gain: 20
LossFactor: 0
InUseOutputPort: true
CoherentOnTransmit: true
Construct a linear FM pulse waveform for transmission. Use a 100 μsec linear FM pulse having a
bandwidth of 200 kHz. Use the default sweep direction and sample rate. Set the pulse repetition
frequency (PRF) to 2 kHz. Obtain one pulse by setting the NumPulses property of the
phased.LinearFMWaveform object to unity.
waveform = phased.LinearFMWaveform('PulseWidth',100e-6,'PRF',2e3,...
'SweepBandwidth',200e3,'OutputFormat','Pulses','NumPulses',1);
Generate the pulse by executing the phased.LinearFMWaveform waveform System object™. Then,
transmit the pulse by executing the phased.Transmitter System object.
wf = waveform();
[txoutput,txstatus] = transmitter(wf);
t = unigrid(0,1/waveform.SampleRate,1/waveform.PRF,'[)');
subplot(211)
plot(t,real(txoutput))
axis tight
grid on
ylabel('Amplitude')
title('Transmitter Output (real part) - One PRI')
subplot(212)
plot(t,txstatus)
axis([0 t(end) 0 1.5])
xlabel('Seconds')
grid on
ylabel('Off-On Status')
set(gca,'ytick',[0 1])
title('Transmitter Status')
Phase Noise
To model a coherent on receive radar, you can set the CoherentOnTransmit property to false and
the PhaseNoiseOutputPort property to true. You can output the random phase added to each
sample when you execute the System object.
This example illustrates adding phase noise to a rectangular pulse waveform having five pulses. A
random phase is added to each sample of the waveform. Compute the phase of the output waveform
and compare the phase to the phase noise returned when executing the System object™.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
For convenience, set the gain of the transmitter to 0 dB, the peak power to 1 W, and seed the random
number generator to ensure reproducible results.
waveform = phased.RectangularWaveform('NumPulses',5);
transmitter = phased.Transmitter('CoherentOnTransmit',false,...
'PhaseNoiseOutputPort',true,'Gain',0,'PeakPower',1,...
'SeedSource','Property','Seed',1000);
wf = waveform();
[txtoutput,phnoise] = transmitter(wf);
phdeg = rad2deg(phnoise);
phdeg(phdeg>180)= phdeg(phdeg>180) - 360;
plot(wf)
title('Input Waveform')
axis([0 length(wf) 0 1.5])
ylabel('Amplitude')
grid on
subplot(2,1,1)
plot(rad2deg(atan2(imag(txtoutput),real(txtoutput))))
title('Phase of the Output')
ylabel('Degrees')
axis([0 length(wf) -180 180])
grid on
subplot(2,1,2)
plot(phdeg)
title('Phase Noise'); ylabel('Degrees')
axis([0 length(wf) -180 180])
grid on
The first figure shows the waveform. The phase of each pulse at the input to the transmitter is zero.
In the second figure, the top plot shows the phase of the transmitter output waveform. The bottom
plot shows the phase added to each sample. Focus on the first 100 samples. The pulse waveform is
equal to 1 for samples 1-50 and 0 for samples 51-100. The added random phase is a constant -124.7°
for samples 1-100, but this phase affects the output only when the pulse waveform is nonzero. As a result, the output waveform has a phase of -124.7° for samples 1-50 and a phase of 0 for samples 51-100. Examining the transmitter output and phase noise for samples where the input waveform is nonzero, you can see that the phase output by the System object and the phase of the transmitter output agree.
Receiver Preamp
In this section...
“Operation of Receiver Preamp” on page 4-30
“Configuring Receiver Preamp” on page 4-30
“Model Receiver Effects on Sinusoidal Input” on page 4-31
• EnableInputPort — A logical property that enables you to specify when the receiver is on or off.
Input the actual status of the receiver as a vector to step. This property is useful when modeling
a monostatic radar system. In a monostatic radar, it is important to ensure the transmitter and
receiver are not operating simultaneously. See phased.Transmitter and “Transmitter” on page
4-25.
• Gain — Gain in dB (GdB)
• LossFactor — Loss factor in dB (LdB)
• NoiseMethod — Specify noise input as noise power or noise temperature
• NoiseFigure — Receiver noise figure in dB (FdB)
• ReferenceTemperature — Receiver reference temperature in kelvin (T)
• SampleRate — Sample rate (fs)
• NoisePower — Noise power specified in watts (σ²)
• NoiseComplexity — Specify noise as real-valued or complex-valued
• EnableInputPort — Add input to specify when the receiver is active
• PhaseNoiseInputPort — Add input to specify phase noise for coherent on receive receiver
• SeedSource — Lets you specify random number generator seed
• Seed — Random number generator seed
The output signal, y[n], of the phased.ReceiverPreamp System object equals the input signal
scaled by the ratio of receiver amplitude gain to amplitude loss, plus additive noise

y[n] = (G/L)·x[n] + (σ/√2)·w[n]

where x[n] is the complex-valued input signal and w[n] is complex-valued white noise whose real and
imaginary parts each have unit variance. When the input signal is real-valued, the output signal, y[n],
equals the real-valued input signal scaled by the ratio of receiver amplitude gain to amplitude loss,
plus real-valued additive noise
y[n] = (G/L)·x[n] + σ·w[n]

The amplitude gain, G, and loss, L, can be expressed in terms of the input dB parameters by

G = 10^(GdB/20)
L = 10^(LdB/20)

respectively.
The additive noise for the receiver is modeled as a zero-mean complex white Gaussian noise vector
with variance, σ², equal to the noise power. The real and imaginary parts of the noise vector each
have variance equal to 1/2 the noise power.
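As a quick numeric check outside MATLAB, this NumPy sketch (not toolbox code; the values are illustrative) shows that drawing the noise as sqrt(noisepow/2)*(randn + 1j*randn) produces a complex vector whose total variance equals the noise power, split evenly between the real and imaginary parts:

```python
import numpy as np

# Drawing complex noise as sqrt(noisepow/2)*(randn + 1j*randn) gives total
# variance equal to the noise power, half in each of the real and imaginary parts.
rng = np.random.default_rng(1000)
noisepow = 4.0                      # arbitrary noise power (sigma^2), in watts
n = 200_000
w = np.sqrt(noisepow / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

total_var = np.var(w)               # var of a complex vector = var(re) + var(im)
re_var = np.var(w.real)
im_var = np.var(w.imag)
print(total_var, re_var, im_var)    # ~4.0, ~2.0, ~2.0
```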
You can set the noise power directly by choosing the NoiseMethod property to be 'Noise power'
and then setting the NoisePower property to a real positive number. Alternatively, you can set the
noise power using the system temperature by choosing the NoiseMethod property to be 'Noise
temperature'. Then
σ² = kB·B·T·F
where kB is Boltzmann’s constant, B is the noise bandwidth which is equal to the sample rate, fs, T is
the system temperature, and F is the noise figure in power units.
The noise figure, F, is a dimensionless quantity that indicates how much a receiver deviates from an
ideal receiver in terms of internal noise. An ideal receiver produces thermal noise power defined by
noise bandwidth and temperature. In terms of power units, the noise figure is F = 10^(FdB/10). A noise figure
of 0 dB indicates that the noise power of a receiver equals the noise power of an ideal receiver.
Because an actual receiver cannot exhibit a noise power value less than an ideal receiver, the noise
figure is always greater than or equal to one. In decibels, the noise figure must be greater than or
equal to zero.
To model the effect of the receiver preamp on the signal, phased.ReceiverPreamp computes the
effective system noise temperature by taking the product of the reference temperature, T, and the
noise figure F in power units. See systemp for details.
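The system-temperature and noise-power relationships above can be checked with a short numeric sketch. This Python version (not toolbox code) uses the same values as the MATLAB example that follows — a noise figure of 5 dB, a reference temperature of 290 K, and a 1 MHz sample rate:

```python
# Effective system temperature Tsys = T*F and noise power sigma^2 = k*Tsys*B,
# with the noise bandwidth B equal to the sample rate.
k = 1.380649e-23            # Boltzmann constant, J/K
T = 290.0                   # reference temperature, K
NF_dB = 5.0                 # noise figure, dB
B = 1e6                     # noise bandwidth = sample rate, Hz

F = 10 ** (NF_dB / 10)      # noise figure in power units
Tsys = T * F                # effective system temperature (what systemp returns)
sigma2 = k * Tsys * B       # noise power, W
print(Tsys, sigma2)         # ~917.1 K, ~1.266e-14 W
```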
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
receiver = phased.ReceiverPreamp('Gain',20,...
'NoiseFigure',5,'ReferenceTemperature',290,...
'SampleRate',1e6,'SeedSource','Property','Seed',1e3);
Assume a 100-Hz sine wave input with an amplitude of 1 microvolt. Because the Phased Array System
Toolbox assumes that all modeling is done at baseband, use a complex exponential as the input when
executing the System object.
t = unigrid(0,0.001,0.1,'[)');
x = 1e-6*exp(1i*2*pi*100*t).';
y = receiver(x);
Now show how the same output can be produced from the multiplicative amplitude gain and additive
noise. First assume that the noise bandwidth equals the sample rate of the receiver preamp (1 MHz).
Then, the noise power is equal to:
NoiseBandwidth = receiver.SampleRate;
noisepow = physconst('Boltzmann')*...
systemp(receiver.NoiseFigure,receiver.ReferenceTemperature)*NoiseBandwidth;
The noise power is the variance of the additive white noise. To determine the correct amplitude
scaling of the input signal, note that the gain is 20 dB. Because the loss factor in this case is 0 dB, the
scaling factor for the input signal is found by solving the following equation for the multiplicative gain
G from the gain in dB, GdB:
G = 10^(GdB/20)
G = 10^(receiver.Gain/20)
G = 10
The gain is 10. By scaling the input signal by a factor of ten and adding complex white Gaussian noise
with the appropriate variance, you produce an output equivalent to the preceding call to
phased.ReceiverPreamp.step (use the same seed for the random number generation).
rng(1e3);
y1 = G*x + sqrt(noisepow/2)*(randn(size(x))+1j*randn(size(x)));
disp(y1(1:10) - y(1:10))
0
0
0
0
0
0
0
0
0
0
Model Coherent-on-Receive Behavior
In a coherent-on-receive radar, the receiver corrects for the phase noise introduced at the transmitter
by using the record of those phase errors. By setting the PhaseNoiseInputPort property to true,
you can input a record of the transmitter phase errors as an argument when executing the
phased.ReceiverPreamp System object.
Use the PhaseNoiseOutputPort and InUseOutputPort properties of the transmitter to record the
phase noise and the status of the transmitter.
Delay the output of the transmitter using the delayseq function to simulate the waveform arriving at
the receiver preamp when the transmitter is inactive and the receiver is active.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
waveform = phased.RectangularWaveform('NumPulses',5);
transmitter = phased.Transmitter('CoherentOnTransmit',false,...
'PhaseNoiseOutputPort',true,'Gain',0,'PeakPower',1,...
'SeedSource','Property','Seed',1000,'InUseOutputPort',true);
wf = waveform();
[troutput,trstatus,phasenoise] = transmitter(wf);
troutput = delayseq(troutput,waveform.PulseWidth,...
waveform.SampleRate);
receiver = phased.ReceiverPreamp('Gain',0,...
'PhaseNoiseInputPort',true,'EnableInputPort',true);
y = receiver(troutput,~trstatus,phasenoise);
subplot(2,1,1)
plot(real(troutput))
title('Delayed Transmitter Output with Phase Noise')
ylabel('Amplitude')
subplot(2,1,2)
plot(real(y))
xlabel('Samples')
ylabel('Amplitude')
title('Received Signal with Phase Correction')
Radar Equation
In this section...
“Radar Equation Theory” on page 4-35
“Link Budget Calculation Using the Radar Equation” on page 4-36
“Maximum Detectable Range for a Monostatic Radar” on page 4-36
“Output SNR at Receiver in Bistatic Radar” on page 4-37
The equation for the power at the input to the receiver represents the signal term in the signal-to-
noise (SNR) ratio. To model the noise term, assume the thermal noise in the receiver has a white
noise power spectral density (PSD) given by:
P(f ) = kT
where k is the Boltzmann constant and T is the effective noise temperature. The receiver acts as a
filter to shape the white noise PSD. Assume that the magnitude squared receiver frequency response
approximates a rectangular filter with bandwidth equal to the reciprocal of the pulse duration, 1/τ.
The total noise power at the output of the receiver is:
kTFn
N=
τ
The product of the effective noise temperature and the receiver noise factor is referred to as the
system temperature and is denoted by Ts, so that Ts = TFn .
Using the equation for the received signal power and the output noise power, the receiver output
SNR is:
Pr/N = (Pt·τ·Gt·Gr·λ²·σ) / ((4π)³·k·Ts·Rt²·Rr²·L)
The preceding equations are implemented in the Phased Array System Toolbox by the functions:
radareqpow, radareqrng, and radareqsnr. These functions and the equations on which they are
based are valuable tools in radar system design and analysis.
Link Budget Calculation Using the Radar Equation
Use Albersheim's equation to determine the required SNR for the specified detection and false-alarm
probabilities.
Pd = 0.9;
Pfa = 1e-6;
NumPulses = 10;
SNR = albersheim(Pd,Pfa,NumPulses)
SNR = 4.9904
The required SNR is approximately 5 dB. Use the function radareqpow to determine the required
peak transmit power in watts.
tgtrng = 30e3;
fc = 5e9;
c = physconst('Lightspeed');
lambda = c/fc;
RCS = 1;
pulsedur = 1e-6;
G = 30;
Pt = radareqpow(lambda,tgtrng,SNR,pulsedur,'rcs',RCS,'gain',G)
Pt = 5.6485e+03
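You can cross-check both results by re-deriving them from the equations in this section. The following Python sketch (not toolbox code; albersheim and radareqpow are Phased Array System Toolbox functions) reproduces the required SNR and the peak power for the same parameters, assuming the toolbox defaults of a 290 K system temperature and no system losses:

```python
import math

# Albersheim's equation for the required SNR in dB
Pd, Pfa, N = 0.9, 1e-6, 10
A = math.log(0.62 / Pfa)
B = math.log(Pd / (1 - Pd))
snr_db = -5 * math.log10(N) + (6.2 + 4.54 / math.sqrt(N + 0.44)) * \
    math.log10(A + 0.12 * A * B + 1.7 * B)

# Radar equation solved for peak power (monostatic, no losses, Ts = 290 K)
k = 1.380649e-23                      # Boltzmann constant, J/K
lam = 299792458.0 / 5e9               # wavelength at 5 GHz, m
R, tau, rcs = 30e3, 1e-6, 1.0
G = 10 ** (30 / 10)                   # 30 dB antenna gain, power units
snr = 10 ** (snr_db / 10)
Pt = snr * (4 * math.pi) ** 3 * k * 290 * R ** 4 / (tau * G ** 2 * lam ** 2 * rcs)
print(snr_db, Pt)                     # ~4.99 dB and ~5.65e3 W
```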
Maximum Detectable Range for a Monostatic Radar
Estimate the maximum detectable range of a target with a nonfluctuating RCS of 0.5 m², assuming
the radar requires an SNR of 13 dB and has a peak transmit power of 1 MW. Assume the transmitter
gain is 40 dB and the radar transmits a pulse that is 0.5 μs in duration.
tau = 0.5e-6;
G = 40;
RCS = 0.5;
Pt = 1e6;
lambda = 3e8/1e9;
SNR = 13;
maxrng = radareqrng(lambda,SNR,Pt,tau,'rcs',RCS,'gain',G)
maxrng = 3.4516e+05
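The same result follows from inverting the radar equation for range. This Python sketch (not toolbox code) assumes the toolbox defaults of a 290 K system temperature and no system losses:

```python
import math

# Invert the radar range equation for range to check the radareqrng result above.
k = 1.380649e-23
lam = 3e8 / 1e9                       # same wavelength approximation as the example
Pt, tau, rcs = 1e6, 0.5e-6, 0.5
G = 10 ** (40 / 10)
snr = 10 ** (13 / 10)
R4 = Pt * tau * G ** 2 * lam ** 2 * rcs / ((4 * math.pi) ** 3 * k * 290 * snr)
maxrng = R4 ** 0.25
print(maxrng)                         # ~3.45e5 m
```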
Output SNR at Receiver in Bistatic Radar
Estimate the output SNR at the receiver of a bistatic radar operating at 10 GHz, with a transmitter-
to-target range of 50 km and a receiver-to-target range of 75 km. The transmitter and receiver gains
are 40 dB and 20 dB, respectively, and the radar transmits a 1 μs pulse with 1 MW peak power.
fc = 10e9;
lambda = physconst('LightSpeed')/fc;
tau = 1e-6;
Pt = 1e6;
TxRvRng =[50e3 75e3];
Gain = [40 20];
snr = radareqsnr(lambda,TxRvRng,Pt,tau,'Gain',Gain)
snr = 9.0547
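The bistatic form of the radar equation reproduces this value. This Python sketch (not toolbox code) assumes the radareqsnr defaults of a 1 m² RCS, a 290 K system temperature, and no losses:

```python
import math

# Bistatic radar equation: SNR = Pt*tau*Gt*Gr*lam^2*rcs / ((4*pi)^3*k*Ts*Rt^2*Rr^2*L)
k = 1.380649e-23
lam = 299792458.0 / 10e9
Pt, tau = 1e6, 1e-6
Rt, Rr = 50e3, 75e3
Gt, Gr = 10 ** (40 / 10), 10 ** (20 / 10)
snr = Pt * tau * Gt * Gr * lam ** 2 / ((4 * math.pi) ** 3 * k * 290 * Rt ** 2 * Rr ** 2)
snr_db = 10 * math.log10(snr)
print(snr_db)                         # ~9.05 dB
```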
Plot the vertical coverage diagram for a radar transmitting at 100 MHz from an antenna 20 m above
the ground, assuming a free-space range of 100 km.
freq = 100e6;
ant_height = 20;
rng_fs = 100;
[vcp, vcpangles] = radarvcd(freq,rng_fs,ant_height);
blakechart(vcp, vcpangles);
Compute Peak Power Using Radar Equation Calculator App
When you type radarEquationCalculator from the command line or select the app from the App
Toolstrip, an interactive window opens. The default window shows a calculation of target range from
SNR, power, and other parameters. You can then select various options to compute different radar
parameters.
radarEquationCalculator
As an example, use the app to compute the required peak transmit power for a monostatic radar to
detect a large target at 100 km. The radar operates at 10 GHz with a 40 dB antenna gain. Set the
probability of detection to 0.9 and the probability of false alarm to 0.0001.
1 From the Calculation Type drop-down list, choose Peak Transmit Power
2 Set the Wavelength to 3 cm
3 Specify the Pulse Width as 2 microseconds
4 Assume total System Losses of 5 dB
5 Assuming the target is a large airplane, set Target Radar Cross Section value to 100 m2
6 Choose Configuration as Monostatic
7 Set the Gain to be 40 dB
8 Open the SNR box
9 Specify the Probability of Detections as 0.9
10 Specify the Probability of False Alarm as 0.0001
Close the app window when you are finished.
The following previously prepared screenshot shows that the required peak transmit power is 0.2095
W.
im = imread('radarEquationExample_03.png');
figure('Position',[344 206 849 644])
image(im)
axis off
set(gca,'Position',[0.083 0.083 0.834 0.888])
5
Beamforming
Beamforming Overview
In this section...
“Conventional Beamforming” on page 5-2
“Optimal and Adaptive Beamforming” on page 5-3
Beamforming is the spatial equivalent of frequency filtering and can be grouped into two classes:
data independent (conventional) and data-dependent (adaptive). All beamformers are designed to
emphasize signals coming from some directions and suppress signals and noise arriving from other
directions.
Phased Array System Toolbox provides nine different beamformers. This table summarizes the main
properties of the beamformers.
Conventional Beamforming
Conventional beamforming, also called classical beamforming, is the easiest to understand.
Conventional beamforming techniques include delay-and-sum beamforming, phase-shift
beamforming, subband beamforming, and filter-and-sum beamforming. These beamformers are
similar because the weights and parameters that define the beampattern are fixed and do not depend
on the array input data. The weights are chosen to produce a specified array response to the signals
and interference in the environment. A signal arriving at an array has different times of arrival at
each sensor. For example, plane waves arriving at a linear array have a time delay that is a linear
function of distance along the array. Delay-and-sum beamforming compensates for these delays by
applying a reverse delay to each sensor. If the time delay is accurately computed, the signals from
each sensor add constructively.
Finding the compensating delay at each sensor requires accurate knowledge of the sensor locations
and signal direction. The delay-and-sum beamformer can be implemented in the frequency domain or
in the time domain. When the signal is narrowband, time delay becomes a phase shift in the
frequency domain and is implemented by multiplying each sensor signal by a frequency-dependent
compensatory phase shift. This algorithm is implemented in the phased.PhaseShiftBeamformer
System object.
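The idea can be sketched in a few lines. This NumPy example (not toolbox code; the element count and angle are hypothetical) shows that steering the phase-shift weights to the arrival angle cancels the per-element phase shifts, so the signals add coherently and the response toward the steered direction is distortionless:

```python
import numpy as np

# Narrowband phase-shift beamforming for a half-wavelength-spaced ULA.
N = 10                                 # number of elements
theta = np.deg2rad(30)                 # arrival (and steering) angle

n = np.arange(N)
a = np.exp(1j * np.pi * n * np.sin(theta))   # steering vector for d = lambda/2
w = a / N                              # phase-shift (delay-and-sum) weights

x = a * (1.0 + 0.0j)                   # unit-amplitude plane wave from theta
y = np.conj(w) @ x                     # beamformer output
print(abs(y))                          # 1.0: distortionless toward theta
```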
For broadband signals, there are several approaches. One approach is to delay the signal in time by a
discrete number of samples. A limitation of this method is that the achievable delay resolution is set
by the sampling rate of your data, because you cannot resolve delay differences smaller than the
sampling interval. For this technique to work well, the sampling rate must be well beyond the Nyquist
frequency so that the true delay is very close to an integer number of samples. A second method
interpolates the signal between samples. Time-delay beamforming is implemented in
phased.TimeDelayBeamformer. A third method Fourier
transforms the signals to the frequency domain, applies a linear phase shift, and converts the signal
back into the time domain. Phase-shift beamforming is performed at each frequency band (see
phased.SubbandPhaseShiftBeamformer).
Beamforming is not limited to plane waves; it can be applied even when there is wavefront curvature.
In this case, the source lies in the near field, and the term beamforming, strictly speaking, may no
longer apply. You can use the source-array geometry to compute the phase shift for each point in
space and then apply this phase shift at each sensor element.
Optimal and Adaptive Beamforming
An optimal beamformer chooses the weights to maximize the output signal-to-noise ratio (SNR)

SNR = |w′s|² / (w′Rnw) = A²·|w′a|² / (w′Rnw)

where s represents the signal values at the sensors, a represents the source steering vector, and A²
represents the source power at the array. Rn is the noise-plus-interference covariance matrix.
Because the SNR is invariant under any scale factor applied to the weights, an equivalent formulation
of this criterion is to minimize the noise output, w′Rnw, subject to a unit-gain constraint in the signal
direction

minimize w′Rnw subject to w′a = 1

The solution of this constrained minimization is

wopt = Rn⁻¹a / (a′Rn⁻¹a)

and yields the minimum variance distortionless response (MVDR) beamformer. Because of the
constraint, the beamformer preserves the desired signal while minimizing contributions to the array
output due to noise and interference. The MVDR beamformer is implemented in the
phased.MVDRBeamformer System object. The MVDR beamformer has these advantages:
• The beamformer incorporates the noise and interference into an optimal solution.
• The beamformer has higher spatial resolution than a conventional beamformer.
• The beamformer puts nulls in the direction of any interference sources.
• Sidelobes are smaller and smoother.
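As a numeric sketch of the MVDR weights w = Rn⁻¹a/(a′Rn⁻¹a) (not toolbox code; a small synthetic Hermitian positive-definite matrix stands in for the true noise-plus-interference covariance), the distortionless constraint w′a = 1 holds by construction:

```python
import numpy as np

# MVDR weights for a 6-element half-wavelength ULA steered to 20 degrees.
rng = np.random.default_rng(0)
N = 6
a = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(20)))   # steering vector

# Synthetic noise+interference covariance (Hermitian positive definite)
X = rng.standard_normal((N, 40)) + 1j * rng.standard_normal((N, 40))
Rn = X @ X.conj().T / 40 + np.eye(N)

Rn_inv_a = np.linalg.solve(Rn, a)       # Rn^-1 a
w = Rn_inv_a / (a.conj() @ Rn_inv_a)    # w = Rn^-1 a / (a' Rn^-1 a)

print(abs(w.conj() @ a))                # 1.0: desired direction passes undistorted
```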
There are two major disadvantages to the MVDR beamformer. The MVDR beamformer is sensitive to
errors in either the array parameters or arrival direction. The MVDR beamformer is susceptible to
self-nulling. In addition, trying to use MVDR as an adaptive beamformer requires a matrix inversion
every time the noise and interference statistics change. When there are many array elements, the
inversion can be computationally expensive.
In practical applications, an accurate steering vector and an accurate covariance matrix are not
always available. Generally, all that is available is the sampled covariance matrix. This deficiency can
lead to both inadequate interference suppression and distortion of the desired signal. For example, if
the true signal direction is slightly off from the beam pointing direction, the actual signal is treated
as interference.
However it often turns out that the noise is not separable from the signal and it is impossible to
determine Rn. In that case, you can estimate a sample covariance matrix from the data.
Rx = (1/K) ∑_{k=1}^{K} x(k)x′(k)
and minimizes w′Rxw instead. Minimizing this quantity leads to the minimum power distortionless
response (MPDR) beamformer. If the data vector, x, contains the signal and the estimated data
covariance matrix is perfect and the steering vector of the desired signal is known exactly, the MPDR
beamformer is equivalent to the MVDR beamformer. However, MPDR degrades more severely when
Rx is estimated from insufficient data or the signal arrival vector is not known precisely.
Rewrite the direction constraint in the form a′w = 1 by transposing both sides. This equivalent form
suggests that it is possible to include multiple constraints by using a matrix constraint Cw = d where C
is now a constraint matrix and d represents the signal gains due to the constraints. This is the form
used in the linear constraint minimum variance (LCMV) beamformer. The LCMV beamformer is a
generalization of MVDR beamforming and is implemented in phased.LCMVBeamformer and
phased.TimeDelayLCMVBeamformer. There are several different approaches to specifying
constraints such as amplitude and derivative constraints. You can, for example, specify weights that
suppress interfering signals arriving from a particular direction while passing signals from a different
direction without distortion. The optimal LCMV weights are determined by the equation
wopt = Rn⁻¹C′(CRn⁻¹C′)⁻¹d
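A numeric sketch of these weights (not toolbox code) confirms that w = Rn⁻¹C′(CRn⁻¹C′)⁻¹d satisfies the constraint Cw = d exactly. Here the two hypothetical constraints impose unit gain at 20° and a null at 60°, with a synthetic matrix standing in for Rn:

```python
import numpy as np

# LCMV weights for an 8-element half-wavelength ULA with two constraints.
N = 8
def steer(deg):
    return np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(deg)))

rng = np.random.default_rng(1)
X = rng.standard_normal((N, 50)) + 1j * rng.standard_normal((N, 50))
Rn = X @ X.conj().T / 50 + np.eye(N)   # synthetic covariance, stands in for Rn

C = np.vstack([steer(20).conj(), steer(60).conj()])  # constraint matrix; rows are a'
d = np.array([1.0, 0.0])               # unit gain at 20 deg, null at 60 deg

Ri_Ch = np.linalg.solve(Rn, C.conj().T)              # Rn^-1 C'
w = Ri_Ch @ np.linalg.solve(C @ Ri_Ch, d)            # Rn^-1 C'(C Rn^-1 C')^-1 d

print(np.allclose(C @ w, d))           # True: constraints satisfied exactly
```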
The advantages and disadvantages of the MVDR beamformer also apply to the LCMV beamformer.
While MVDR and LCMV are adaptive in principle, re-computation of the weights requires the
inversion of a potentially large covariance matrix when the array has many elements. The Frost and
generalized sidelobe cancelers are reformulations of LCMV that convert the constrained optimization
into an unconstrained form and then compute the weights recursively, avoiding the explicit matrix
inversion.
Conventional Beamforming
In this section...
“Uses for Beamformers” on page 5-6
“Support for Conventional Beamforming” on page 5-6
“Narrowband Phase Shift Beamformer For a ULA” on page 5-6
Support for Conventional Beamforming
To use a conventional beamformer, specify these aspects of the scenario:
• Sensor array
• Signal propagation speed
• System operating frequency
• Beamforming direction
Narrowband Phase Shift Beamformer For a ULA
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
fc = 1e9;
lambda = physconst('LightSpeed')/fc;
array = phased.ULA('NumElements',10,'ElementSpacing',lambda/2);
Simulate a test signal. For this example, use a simple rectangular pulse.
t = linspace(0,0.3,300)';
testsig = zeros(size(t));
testsig(201:205) = 1;
Assume the rectangular pulse is incident on the ULA from an angle of 30° azimuth and 0° elevation.
Use the collectPlaneWave function of the ULA System object to simulate reception of the pulse
waveform from the specified angle.
angle_of_arrival = [30;0];
x = collectPlaneWave(array,testsig,angle_of_arrival,fc);
The signal x is a matrix with ten columns. Each column represents the received signal at one of the
array elements.
Add complex-valued Gaussian noise to the signal x. Reset the default random number stream for
reproducible results. Plot the magnitudes of the received pulses at the first four elements of the ULA.
rng default
npower = 0.5;
x = x + sqrt(npower/2)*(randn(size(x)) + 1i*randn(size(x)));
subplot(221)
plot(t,abs(x(:,1)))
title('Element 1 (magnitude)')
axis tight
ylabel('Magnitude')
subplot(222)
plot(t,abs(x(:,2)))
title('Element 2 (magnitude)')
axis tight
ylabel('Magnitude')
subplot(223)
plot(t,abs(x(:,3)))
title('Element 3 (magnitude)')
axis tight
xlabel('Seconds')
ylabel('Magnitude')
subplot(224)
plot(t,abs(x(:,4)))
title('Element 4 (magnitude)')
axis tight
xlabel('Seconds')
ylabel('Magnitude')
Construct a phase-shift beamformer. Set the WeightsOutputPort property to true to output the
spatial filter weights that point the beamformer to the angle of arrival.
beamformer = phased.PhaseShiftBeamformer('SensorArray',array,...
'OperatingFrequency',1e9,'Direction',angle_of_arrival,...
'WeightsOutputPort',true);
Execute the phase shift beamformer to compute the beamformer output and to compute the applied
weights.
[y,w] = beamformer(x);
Plot the magnitude of the output waveform along with the noise-free original waveform for
comparison.
subplot(211)
plot(t,abs(testsig))
axis tight
title('Original Signal')
ylabel('Magnitude')
subplot(212)
plot(t,abs(y))
axis tight
title('Received Signal with Beamforming')
ylabel('Magnitude')
xlabel('Seconds')
To examine the effect of beamforming weights on the array response, plot the array normalized
power response with and without beamforming weights.
azang = -180:30:180;
subplot(211)
pattern(array,fc,[-180:180],0,'CoordinateSystem','rectangular',...
'Type','powerdb','PropagationSpeed',physconst('LightSpeed'))
set(gca,'xtick',azang);
title('Array Response without Beamforming Weights')
subplot(212)
pattern(array,fc,[-180:180],0,'CoordinateSystem','rectangular',...
'Type','powerdb','PropagationSpeed',physconst('LightSpeed'),...
'Weights',w)
set(gca,'xtick',azang);
title('Array Response with Beamforming Weights')
See Also
Related Examples
• “Conventional and Adaptive Beamformers” on page 17-188
Adaptive Beamforming
In this section...
“Benefits of Adaptive Beamforming” on page 5-11
“Support for Adaptive Beamforming” on page 5-11
“Nulling with LCMV Beamformer” on page 5-11
Benefits of Adaptive Beamforming
By contrast, adaptive, or statistically optimum, beamformers can account for interference signals. An
adaptive beamformer algorithm chooses the weights based on the statistics of the received data. For
example, an adaptive beamformer can improve the SNR by using the received data to place nulls in
the array response. These nulls are placed at angles corresponding to the interference signals.
Nulling with LCMV Beamformer
fc = 1e9;
lambda = physconst('LightSpeed')/fc;
array = phased.ULA('NumElements',10,'ElementSpacing',lambda/2);
array.Element.FrequencyRange = [8e8 1.2e9];
t = linspace(0,0.3,300)';
testsig = zeros(size(t));
testsig(201:205) = 1;
Assume the rectangular pulse is incident on the ULA from an angle of 30° azimuth and 0° elevation.
Use the collectPlaneWave function of the ULA System object to simulate reception of the pulse
waveform from the incident angle.
angle_of_arrival = [30;0];
x = collectPlaneWave(array,testsig,angle_of_arrival,fc);
The signal x is a matrix with ten columns. Each column represents the received signal at one of the
array elements.
Add complex-valued white Gaussian noise to the signal x. Set the default random number stream for
reproducible results.
rng default
npower = 0.5;
x = x + sqrt(npower/2)*(randn(size(x)) + 1i*randn(size(x)));
Create an interference source using the phased.BarrageJammer System object. Specify the
barrage jammer to have an effective radiated power of 10 W. The interference signal from the
barrage jammer is incident on the ULA from an angle of 120° azimuth and 0° elevation. Use the
collectPlaneWave function of the ULA System object to simulate reception of the jammer signal.
jammer = phased.BarrageJammer('ERP',10,'SamplesPerFrame',300);
jamsig = jammer();
jammer_angle = [120;0];
jamsig = collectPlaneWave(array,jamsig,jammer_angle,fc);
Add complex-valued white Gaussian noise to simulate noise contributions not directly associated with
the jamming signal. Again, set the default random number stream for reproducible results. This noise
power is well below the jammer power. Beamform the signal using a conventional beamformer.
noisePwr = 1e-5;
rng(2008);
noise = sqrt(noisePwr/2)*...
(randn(size(jamsig)) + 1j*randn(size(jamsig)));
jamsig = jamsig + noise;
rxsig = x + jamsig;
[yout,w] = convbeamformer(rxsig);
Implement the adaptive LCMV beamformer using the same ULA array. Use the target-free data,
jamsig, as training data. Output the beamformed signal and the beamformer weights.
steeringvector = phased.SteeringVector('SensorArray',array,...
'PropagationSpeed',physconst('LightSpeed'));
LCMVbeamformer = phased.LCMVBeamformer('DesiredResponse',1,...
'TrainingInputPort',true,'WeightsOutputPort',true);
LCMVbeamformer.Constraint = steeringvector(fc,angle_of_arrival);
LCMVbeamformer.DesiredResponse = 1;
[yLCMV,wLCMV] = LCMVbeamformer(rxsig,jamsig);
Plot the conventional beamformer output and the adaptive beamformer output.
subplot(211)
plot(t,abs(yout))
axis tight
title('Conventional Beamformer')
ylabel('Magnitude')
subplot(212)
plot(t,abs(yLCMV))
axis tight
title('LCMV (Adaptive) Beamformer')
xlabel('Seconds')
ylabel('Magnitude')
The adaptive beamformer significantly improves the SNR of the rectangular pulse at 0.2 s.
Using conventional and LCMV weights, plot the responses for each beamformer.
subplot(211)
pattern(array,fc,[-180:180],0,'PropagationSpeed',physconst('LightSpeed'),...
'CoordinateSystem','rectangular','Type','powerdb','Normalize',true,...
'Weights',w)
title('Array Response with Conventional Beamforming Weights');
subplot(212)
pattern(array,fc,[-180:180],0,'PropagationSpeed',physconst('LightSpeed'),...
'CoordinateSystem','rectangular','Type','powerdb','Normalize',true,...
'Weights',wLCMV)
title('Array Response with LCMV Beamforming Weights');
The adaptive beamformer places a null at the arrival angle of the interference signal, 120°.
See Also
phased.FrostBeamformer | phased.LCMVBeamformer | phased.MVDRBeamformer
Related Examples
• “Conventional and Adaptive Beamformers” on page 17-188
Wideband Beamforming
In this section...
“Support for Wideband Beamforming” on page 5-15
“Time-Delay Beamforming of Microphone ULA Array” on page 5-15
“Visualization of Wideband Beamformer Performance” on page 5-16
Support for Wideband Beamforming
Phased Array System Toolbox software provides conventional and adaptive wideband beamformers.
They include:
• phased.FrostBeamformer
• phased.SubbandPhaseShiftBeamformer
• phased.TimeDelayBeamformer
• phased.TimeDelayLCMVBeamformer
See “Acoustic Beamforming Using a Microphone Array” on page 17-172 for an example of using
wideband beamforming to extract speech signals in noise.
Time-Delay Beamforming of Microphone ULA Array
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
c = 340;
t = linspace(0,1,50e3)';
sig = chirp(t,0,1,1000);
Collect the acoustic chirp with a ten-element ULA. Use omnidirectional microphone elements spaced
less than one-half the wavelength at the 50 kHz sampling frequency. The chirp is incident on the ULA
with an angle of 60° azimuth and 0° elevation. Add random noise to the signal.
microphone = phased.OmnidirectionalMicrophoneElement(...
'FrequencyRange',[20 20e3]);
array = phased.ULA('Element',microphone,'NumElements',10,...
'ElementSpacing',0.01);
collector = phased.WidebandCollector('Sensor',array,'SampleRate',5e4,...
'PropagationSpeed',c,'ModulatedInput',false);
sigang = [60;0];
rsig = collector(sig,sigang);
rsig = rsig + 0.1*randn(size(rsig));
Apply a wideband conventional time-delay beamformer to improve the SNR of the received signal.
beamformer = phased.TimeDelayBeamformer('SensorArray',array,...
'SampleRate',5e4,'PropagationSpeed',c,'Direction',sigang);
y = beamformer(rsig);
subplot(2,1,1)
plot(t(1:5000),real(rsig(1:5e3,5)))
axis([0,t(5000),-0.5,1])
title('Signal (real part) at the 5th element of the ULA')
subplot(2,1,2)
plot(t(1:5000),real(y(1:5e3)))
axis([0,t(5000),-0.5,1])
title('Signal (real part) with time-delay beamforming')
xlabel('Seconds')
Visualization of Wideband Beamformer Performance
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Create an 11-element uniform linear array (ULA) of microphones using cosine antenna elements as
microphones. The phased.CosineAntennaElement System object™ is general enough to be used
as a microphone element as well because it creates or receives a scalar field. You need to change the
response frequencies to the audible range. In addition, make sure the PropagationSpeed parameter
in the array pattern methods is set to the speed of sound in air.
c = 340;
freq = [1000 2750];
fc = 2000;
numels = 11;
microphone = phased.CosineAntennaElement('FrequencyRange',freq);
array = phased.ULA('NumElements',numels,...
'ElementSpacing',0.5*c/fc,'Element',microphone);
Plot the response pattern of the microphone element over a set of frequencies.
plotFreq = linspace(min(freq),max(freq),15);
pattern(microphone,plotFreq,[-180:180],0,'CoordinateSystem','rectangular',...
'PlotStyle','waterfall','Type','powerdb')
This plot shows that the element pattern is constant over the entire bandwidth.
Plot the response pattern of an 11-element array over the same set of frequencies.
pattern(array,plotFreq,[-180:180],0,'CoordinateSystem','rectangular',...
'PlotStyle','waterfall','Type','powerdb','PropagationSpeed',c)
This plot shows that the element pattern mainlobe decreases with frequency.
Apply a subband phase shift beamformer to the array. The direction of interest is 30° azimuth and 0°
elevation. There are 8 subbands.
direction = [30;0];
numbands = 8;
beamformer = phased.SubbandPhaseShiftBeamformer('SensorArray',array,...
'Direction',direction,...
'OperatingFrequency',fc,'PropagationSpeed',c,...
'SampleRate',1e3,...
'WeightsOutputPort',true,'SubbandsOutputPort',true,...
'NumSubbands',numbands);
rx = ones(numbands,numels);
[y,w,centerfreqs] = beamformer(rx);
Plot the response pattern of the array using the weights and center frequencies from the beamformer.
pattern(array,centerfreqs.',[-180:180],0,'Weights',w,'CoordinateSystem','rectangular',...
'PlotStyle','waterfall','Type','powerdb','PropagationSpeed',c)
The above plot shows the beamformed pattern at the center frequency of each subband.
centerfreqs = fftshift(centerfreqs);
w = fftshift(w,2);
idx = [1,5,8];
pattern(array,centerfreqs(idx).',[-180:180],0,'Weights',w(:,idx),'CoordinateSystem','rectangular',...
'PlotStyle','overlay','Type','powerdb','PropagationSpeed',c)
legend('Location','South')
This plot shows that the main beam direction remains constant while the beamwidth decreases with
frequency.
Time-Delay Beamforming of Microphone ULA Array
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
c = 340;
t = linspace(0,1,50e3)';
sig = chirp(t,0,1,1000);
Collect the acoustic chirp with a ten-element ULA. Use omnidirectional microphone elements spaced
less than one-half the wavelength at the 50 kHz sampling frequency. The chirp is incident on the ULA
with an angle of 60° azimuth and 0° elevation. Add random noise to the signal.
microphone = phased.OmnidirectionalMicrophoneElement(...
'FrequencyRange',[20 20e3]);
array = phased.ULA('Element',microphone,'NumElements',10,...
'ElementSpacing',0.01);
collector = phased.WidebandCollector('Sensor',array,'SampleRate',5e4,...
'PropagationSpeed',c,'ModulatedInput',false);
sigang = [60;0];
rsig = collector(sig,sigang);
rsig = rsig + 0.1*randn(size(rsig));
Apply a wideband conventional time-delay beamformer to improve the SNR of the received signal.
beamformer = phased.TimeDelayBeamformer('SensorArray',array,...
'SampleRate',5e4,'PropagationSpeed',c,'Direction',sigang);
y = beamformer(rsig);
subplot(2,1,1)
plot(t(1:5000),real(rsig(1:5e3,5)))
axis([0,t(5000),-0.5,1])
title('Signal (real part) at the 5th element of the ULA')
subplot(2,1,2)
plot(t(1:5000),real(y(1:5e3)))
axis([0,t(5000),-0.5,1])
title('Signal (real part) with time-delay beamforming')
xlabel('Seconds')
Visualization of Wideband Beamformer Performance
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Create an 11-element uniform linear array (ULA) of microphones using cosine antenna elements as
microphones. The phased.CosineAntennaElement System object™ is general enough to be used
as a microphone element as well because it creates or receives a scalar field. You need to change the
response frequencies to the audible range. In addition, make sure the PropagationSpeed parameter in the array pattern methods is set to the speed of sound in air.
c = 340;
freq = [1000 2750];
fc = 2000;
numels = 11;
microphone = phased.CosineAntennaElement('FrequencyRange',freq);
array = phased.ULA('NumElements',numels,...
'ElementSpacing',0.5*c/fc,'Element',microphone);
Plot the response pattern of the microphone element over a set of frequencies.
plotFreq = linspace(min(freq),max(freq),15);
pattern(microphone,plotFreq,[-180:180],0,'CoordinateSystem','rectangular',...
'PlotStyle','waterfall','Type','powerdb')
This plot shows that the element pattern is constant over the entire bandwidth.
Plot the response pattern of an 11-element array over the same set of frequencies.
pattern(array,plotFreq,[-180:180],0,'CoordinateSystem','rectangular',...
'PlotStyle','waterfall','Type','powerdb','PropagationSpeed',c)
This plot shows that the element pattern mainlobe decreases with frequency.
Apply a subband phase shift beamformer to the array. The direction of interest is 30° azimuth and 0°
elevation. There are 8 subbands.
direction = [30;0];
numbands = 8;
beamformer = phased.SubbandPhaseShiftBeamformer('SensorArray',array,...
'Direction',direction,...
'OperatingFrequency',fc,'PropagationSpeed',c,...
'SampleRate',1e3,...
'WeightsOutputPort',true,'SubbandsOutputPort',true,...
'NumSubbands',numbands);
rx = ones(numbands,numels);
[y,w,centerfreqs] = beamformer(rx);
Plot the response pattern of the array using the weights and center frequencies from the beamformer.
pattern(array,centerfreqs.',[-180:180],0,'Weights',w,'CoordinateSystem','rectangular',...
'PlotStyle','waterfall','Type','powerdb','PropagationSpeed',c)
The above plot shows the beamformed pattern at the center frequency of each subband.
centerfreqs = fftshift(centerfreqs);
w = fftshift(w,2);
idx = [1,5,8];
pattern(array,centerfreqs(idx).',[-180:180],0,'Weights',w(:,idx),...
    'CoordinateSystem','rectangular','PlotStyle','overlay',...
    'Type','powerdb','PropagationSpeed',c)
legend('Location','South')
This plot shows that the main beam direction remains constant while the beamwidth decreases with
frequency.
MVDR Objective
The MVDR beamformer preserves the gain in the direction of arrival of a desired signal and
attenuates interference from other directions [1], [2].
Given readings from a sensor array, such as the uniform linear array (ULA) in the following diagram,
form data matrix A from samples of the array, where x(k) is an N-by-1 column vector of readings from
the array sampled at time k, and x(k)^H is one row of matrix A. Many more samples are taken than
there are elements in the array, so the number of rows in A is much greater than the number of
columns. An estimate of the covariance matrix is R = A^H A, where A^H is the Hermitian or
complex-conjugate transpose of A.
Compute the MVDR beamformer response by solving the following equation for Z, where s is a
steering vector pointing in the direction of the desired signal:

R Z = s

The MVDR weight vector w is computed from Z and s using the following equation, which normalizes
Z to preserve the gain in the direction of arrival of the desired signal:

w = Z / (s^H Z)

The MVDR system response y(k) is the inner product between the MVDR weight vector and a current
sample from the sensor array:

y(k) = w^H x(k)
Fixed-Point HDL-Optimized Minimum-Variance Distortionless-Response (MVDR) Beamformer
HDL-Optimized MVDR
The three equations in the previous section are implemented by the three main blocks in the
following model. The rate changes give the matrix solve additional clock cycles to update before the
next input sample. The number of clock cycles between a valid input and when the complex matrix
solve block is ready is twice its input wordlength to allow time for CORDIC iterations, plus 15 cycles
for internal delays.
load_system('MVDRBeamformerHDLOptimizedModel');
open_system('MVDRBeamformerHDLOptimizedModel/MVDR - HDL Optimized')
Instead of forming data matrix A and computing the Cholesky factorization of covariance matrix
R = A^H A, the upper-triangular factor of the QR decomposition of A is computed directly and updated as
each data vector x(k) streams in from the sensor array. Because the data is updated indefinitely, a
forgetting factor α is applied after each factorization. To integrate with an equivalent of a matrix A of
n rows, the forgetting factor should be set to

α = e^(−1/(2n))

This example simulates the equivalent of a matrix with 300 rows, so the forgetting factor is set
to 0.9983.
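One commonly used rule sets the forgetting factor to α = e^(−1/(2n)) for an equivalent data matrix of n rows; a quick check (assuming that rule and n = 300) reproduces the 0.9983 used here:

```python
import math

n = 300                       # equivalent number of data-matrix rows (assumed)
alpha = math.exp(-1/(2*n))    # forgetting factor
print(round(alpha, 4))        # prints 0.9983
```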
The Complex Partial-Systolic Matrix Solve Using Q-less QR Decomposition with Forgetting
Factor block is implemented using the method found in [3]. The upper-triangular factor R from the
QR decomposition of A is identical to the Cholesky factorization of A^H A except for the signs of the
values on the diagonal. Solving the matrix equation by computing the Cholesky factorization of A^H A
is not as efficient or as numerically sound as computing the QR decomposition of A directly [4].
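The QR/Cholesky relationship is easy to verify numerically (a NumPy sketch on a random real matrix, not the Simulink block): the triangular factor R from qr(A) satisfies R^H R = A^H A, so it matches the Cholesky factor of A^H A up to the signs of its rows.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 6))                      # tall data matrix

Rqr = np.linalg.qr(A, mode='r')                       # upper-triangular factor of A = QR
L = np.linalg.cholesky(A.T @ A)                       # A^T A = L L^T, L lower-triangular

err_gram = np.max(np.abs(Rqr.T @ Rqr - A.T @ A))      # R^T R reproduces A^T A
err_sign = np.max(np.abs(np.abs(Rqr) - np.abs(L.T)))  # equal up to row signs
```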
Run Model
open_system('MVDRBeamformerHDLOptimizedModel')
As the model is simulating, you can adjust the signal direction, steering angle and noise directions by
dragging the sliders, or by editing the constant values.
When the signal direction and steering angle are aligned as indicated by the blue and green lines, you
can see that the beam pattern has a gain of 0 dB. The noise sources are nulled as indicated by the red
lines.
The desired pulse appears when the noise sources are nulled. This example simulates with the same
latency as the hardware, so you can see the signal settle over time as the simulation starts and when
the directions are changed.
Set Parameters
The parameters for the beamformer are set in the model workspace. You can modify the parameters
by editing and running the setMVDRExampleModelWorkspace function.
References
[1] V. Behar, et al. "Parameter Optimization of the Adaptive MVDR QR-Based Beamformer for Jamming
and Multipath Suppression in GPS/GLONASS Receivers." In Proc. 16th Saint Petersburg International
Conference on Integrated Navigation Systems, Saint Petersburg, Russia, May 2009, pp. 325–334.
[2] Jack Capon. "High-Resolution Frequency-Wavenumber Spectrum Analysis." Proceedings of the
IEEE, vol. 57, no. 8, 1969, pp. 1408–1418.
[3] C. M. Rader. "VLSI Systolic Arrays for Adaptive Nulling." IEEE Signal Processing Magazine,
July 1996, pp. 29–49.
[4] Charles F. Van Loan. Introduction to Scientific Computing: A Matrix-Vector Approach Using
MATLAB. 2nd ed. Prentice-Hall, 2000. ISBN 0-13-949157-0.
6
Direction-of-Arrival Estimation
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Construct a uniform linear array (ULA) consisting of ten isotropic antenna elements. The carrier
frequency of the incoming narrowband sources is 1 GHz.
fc = 1e9;
lambda = physconst('LightSpeed')/fc;
antenna = phased.IsotropicAntennaElement('FrequencyRange',[8e8 1.2e9]);
array = phased.ULA('Element',antenna,'NumElements',10,'ElementSpacing',lambda/2);
The incident wavefield consists of linear FM pulses from two sources. The DOAs of the two sources
are 30° azimuth and 60° azimuth. Both sources have elevation angles of 0°.
waveform = phased.LinearFMWaveform('SweepBandwidth',1e5,...
'PulseWidth',5e-6,'OutputFormat','Pulses','NumPulses',1);
sig1 = waveform();
sig2 = sig1;
ang1 = [30; 0];
ang2 = [60;0];
arraysig = collectPlaneWave(array,[sig1 sig2],[ang1 ang2],fc);
rng default
npower = 0.01;
noise = sqrt(npower/2)*...
(randn(size(arraysig)) + 1i*randn(size(arraysig)));
rxsig = arraysig + noise;
Implement a beamscan DOA estimator. Scan the azimuth angles from −90° to 90°. Output the DOA
estimates, and plot the spatial spectrum. The locations of the two largest peaks of the spectrum
identify the DOAs of the signals.
estimator = phased.BeamscanEstimator('SensorArray',array,...
'OperatingFrequency',fc,'ScanAngles',-90:90,...
'DOAOutputPort',true,'NumSignals',2);
[y,sigang] = estimator(rxsig);
disp(sigang)
64 28
plotSpectrum(estimator)
Beamscan Direction-of-Arrival Estimation
See Also
Related Examples
• “Super-Resolution DOA Estimation” on page 6-4
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Construct a ten-element ULA of isotropic antenna elements with half-wavelength spacing at a carrier frequency of 1 GHz. Plot the array response, zooming in on the main lobe.
fc = 1e9;
lambda = physconst('LightSpeed')/fc;
array = phased.ULA('NumElements',10,'ElementSpacing',lambda/2);
array.Element.FrequencyRange = [8e8 1.2e9];
plotResponse(array,fc,physconst('LightSpeed'))
axis([-25 25 -30 0]);
Super-Resolution DOA Estimation
The incident wavefield consists of two narrowband signals arriving from 30° and 40° azimuth, so both directions fall inside the main lobe. Simulate the received snapshots using sensorsig.
ang1 = [30;0];
ang2 = [40;0];
Nsnapshots = 1000; % assumed snapshot count
npower = 0.01;
rxsig = sensorsig(getElementPosition(array)/lambda,...
    Nsnapshots,[ang1 ang2],npower);
Estimate the directions of arrival using the beamscan estimator. Because both DOAs fall inside the
main lobe of the array response, the beamscan DOA estimator cannot resolve them as separate
sources.
beamscanestimator = phased.BeamscanEstimator('SensorArray',array,...
'OperatingFrequency',fc,'ScanAngles',-90:90,...
'DOAOutputPort',true,'NumSignals',2);
[~,sigang] = beamscanestimator(rxsig);
plotSpectrum(beamscanestimator)
Use the super-resolution DOA estimator to estimate the two directions. This estimator offers better
resolution than the nonparametric beamscan estimator.
MUSICestimator = phased.RootMUSICEstimator('SensorArray',array,...
'OperatingFrequency',fc,'NumSignalsSource','Property',...
'NumSignals',2,'ForwardBackwardAveraging',true);
doa_est = MUSICestimator(rxsig)
doa_est = 1×2
40.0091 30.0048
See Also
phased.RootMUSICEstimator
Related Examples
• “Beamscan Direction-of-Arrival Estimation” on page 6-2
• “High Resolution Direction of Arrival Estimation” on page 17-214
MUSIC Super-Resolution DOA Estimation
Signal Model
The signal model relates the received sensor data to the signals emitted by the sources. Assume that
there are D uncorrelated or partially correlated signal sources, sd(t). The sensor data, xm(t), consists
of the signals, as received at the array, together with noise, nm(t). A sensor data snapshot is the
sensor data vector received at the M elements of an array at a single time t:

x(t) = As(t) + n(t)

• x(t) is an M-by-1 vector of the received snapshot of sensor data, which consists of the signals and
additive noise.
• A is an M-by-D matrix containing the arrival vectors. An arrival vector consists of the relative
phase shifts at the array elements of the plane wave from one source. Each column of A
represents the arrival vector from one of the sources and depends on the direction of arrival, θd.
θd is the direction-of-arrival angle for the dth source and can represent either the broadside angle
for linear arrays or the azimuth and elevation angles for planar or 3D arrays.
• s(t) is a D-by-1 vector of source signal values from the D sources.
• n(t) is an M-by-1 vector of sensor noise values.
An important quantity in any subspace method is the sensor covariance matrix, Rx, derived from the
received signal data. When the signals are uncorrelated with the noise, the sensor covariance matrix
has two components, the signal covariance matrix and the noise covariance matrix:

Rx = E[x(t)x(t)^H] = A Rs A^H + σ0² I

where Rs is the source covariance matrix,

Rs = E[s(t)s(t)^H]

The diagonal elements of the source covariance matrix represent source power, and the off-diagonal
elements represent source correlations. The signal covariance matrix, A Rs A^H, is an M-by-M matrix
with rank D < M.
An assumption of the MUSIC algorithm is that the noise powers are equal at all sensors and
uncorrelated between sensors. With this assumption, the noise covariance matrix becomes an M-by-M
diagonal matrix, σ0²I, with equal values along the diagonal.
Because the true sensor covariance matrix is not known, MUSIC estimates the sensor covariance
matrix, Rx, from the sample sensor covariance matrix. The sample sensor covariance matrix is an
average of multiple snapshots of the sensor data:

Rx = (1/T) Σ_{k=1}^{T} x(t_k)x(t_k)^H

where T is the number of snapshots.
When noise is added, the eigenvectors of the sensor covariance matrix with noise present are the
same as those of the noise-free sensor covariance matrix, and the eigenvalues increase by the noise
power. Let vi be one of the original noise-free signal-space eigenvectors, with eigenvalue λi. Then

Rx vi = (λi + σ0²) vi

The null-subspace eigenvectors are also eigenvectors of Rx. Let ui be one of the null eigenvectors.
Then

Rx ui = σ0² ui

with eigenvalues of σ0² instead of zero. The null subspace becomes the noise subspace.
MUSIC works by searching for all arrival vectors that are orthogonal to the noise subspace. To do the
search, MUSIC constructs an arrival-angle-dependent power expression, called the MUSIC
pseudospectrum:

P_MUSIC(φ) = 1 / (a^H(φ) Un Un^H a(φ))

where Un is the matrix whose columns are the noise-subspace eigenvectors.
When an arrival vector is orthogonal to the noise subspace, the peaks of the pseudospectrum are
infinite. In practice, because there is noise, and because the true covariance matrix is estimated by
the sampled covariance matrix, the arrival vectors are never exactly orthogonal to the noise
subspace. Then, the angles at which PMUSIC has finite peaks are the desired directions of arrival.
Because the pseudospectrum can have more peaks than there are sources, the algorithm requires
that you specify the number of sources, D, as a parameter. Then the algorithm picks the D largest
peaks. For a uniform linear array (ULA), the search space is a one-dimensional grid of broadside
angles. For planar and 3D arrays, the search space is a two-dimensional grid of azimuth and elevation
angles.
Root-MUSIC
For a ULA, the denominator in the pseudospectrum is a polynomial in z = e^{ikd cos φ}, and can
therefore be treated as a polynomial over the complex plane. In this case, you can use root-finding
methods to solve for its roots, zi. These roots do not necessarily lie on the unit circle. However,
Root-MUSIC assumes that the D roots closest to the unit circle correspond to the true source
directions. Then you can compute the source directions from the phases of the complex roots.
Spatial smoothing takes advantage of the translation properties of a uniform array. Consider two
correlated signals arriving at an L-element ULA. The source covariance matrix, Rs, is a singular 2-by-2
matrix. The arrival vector matrix is the L-by-2 matrix

A1 = [a(φ1) a(φ2)],  a(φ) = [1; e^{ikd cos φ}; … ; e^{i(L−1)kd cos φ}]

for signals arriving from the broadside angles φ1 and φ2. The quantity k is the signal wave number,
and a(φ) represents an arrival vector at the angle φ.
You can create a second array by translating the first array along its axis by one element distance, d.
The arrival matrix for the second array is

A2 = [e^{ikd cos φ1} a(φ1)  e^{ikd cos φ2} a(φ2)]

where the arrival vectors are equal to the original arrival vectors but multiplied by a direction-
dependent phase shift. When you translate the original array J − 1 more times, you get J copies of the
array. If you form a single array from all these copies, then the length of the single array is
M = L + (J − 1).
In practice, you start with an M-element array and form J overlapping subarrays. The number of
elements in each subarray is L = M – J + 1. The following figure shows the relationship between the
overall length of the array, M, the number of subarrays, J, and the length of each subarray, L.
In general, the pth subarray (p = 1, …, J) has the arrival matrix

Ap = [e^{ik(p−1)d cos φ1} a(φ1)  e^{ik(p−1)d cos φ2} a(φ2)] = A1 P^(p−1)

where P is the diagonal phase-shift matrix

P = diag(e^{ikd cos φ1}, e^{ikd cos φ2})
The last step is averaging the signal covariance matrices over all J subarrays to form the averaged
signal covariance matrix, Rsavg. The averaged signal covariance matrix depends on the smoothed
source covariance matrix, Rsmooth:

Rsavg = (1/J) Σ_{p=1}^{J} A1 P^(p−1) Rs (P^(p−1))^H A1^H = A1 Rsmooth A1^H

Rsmooth = (1/J) Σ_{p=1}^{J} P^(p−1) Rs (P^(p−1))^H
You can show that the diagonal elements of the smoothed source covariance matrix are the same as
the diagonal elements of the original source covariance matrix, because each diagonal entry of P has
unit magnitude:

(Rsmooth)ii = (1/J) Σ_{p=1}^{J} (P^(p−1))ii (Rs)ii (P^(p−1))ii* = (Rs)ii
However, the off-diagonal elements are reduced. The reduction factor is the beam pattern of a J-
element array:

(Rsmooth)ij = (1/J) Σ_{p=1}^{J} e^{ikd(p−1)(cos φ1 − cos φ2)} (Rs)ij

Summing the geometric series gives the magnitude

|(Rsmooth)ij| = (1/J) |sin(kdJ(cos φ1 − cos φ2)/2) / sin(kd(cos φ1 − cos φ2)/2)| |(Rs)ij|
In summary, you can reduce the degrading effect of source correlation by forming subarrays and
using the smoothed covariance matrix as input to the MUSIC algorithm. Because of the beam pattern,
larger angular separation of sources leads to reduced correlation.
Spatial smoothing for linear arrays is easily extended to 2D and 3D uniform arrays.
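A numeric sketch of forward spatial smoothing (NumPy, half-wavelength ULA assumed): two fully coherent sources make the signal part of the covariance rank one, and averaging the covariances of overlapping subarrays restores the rank that subspace methods need.

```python
import numpy as np

rng = np.random.default_rng(6)
M, J = 10, 3                 # full array length and number of subarrays
L = M - J + 1                # subarray length
steer = lambda deg, n: np.exp(1j*np.pi*np.arange(n)*np.sin(np.deg2rad(deg)))

# Two coherent sources: the second is an exact scaled copy of the first
T = 2000
s1 = rng.standard_normal((T, 1))
X = (s1*steer(10, M) + 0.8*s1*steer(35, M)
     + 0.01*(rng.standard_normal((T, M)) + 1j*rng.standard_normal((T, M))))
R = X.conj().T @ X / T

# Average the covariances of the J overlapping length-L subarrays
Rsm = sum(R[p:p + L, p:p + L] for p in range(J)) / J

# Count eigenvalues above a small fraction of the largest one
def sig_rank(A):
    ev = np.linalg.eigvalsh(A)
    return int(np.sum(ev > 1e-2*ev[-1]))
```

Without smoothing, the length-L covariance has a single dominant eigenvalue; after smoothing it has two, so MUSIC can again see two sources.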
Assume the target is located at [0,10000,20000] with respect to the radar in the radar's local
coordinate system. Assume that the target is moving along the y-axis toward the radar at 800 kph.
x0 = [0,10000,20000].';
v0 = -800;
v0 = v0*1000/3600;
targetplatform = phased.Platform(x0,[0,v0,0].');
The monopulse tracker uses a ULA consisting of 8 isotropic antenna elements. The element
spacing is set to one-half the signal wavelength.
fc = 500e6;
c = physconst('LightSpeed');
lam = c/fc;
antenna = phased.IsotropicAntennaElement('FrequencyRange',[100e6,800e6],...
'BackBaffled',true);
array = phased.ULA('Element',antenna,'NumElements',8,...
'ElementSpacing',lam/2);
Assume a narrowband signal. This kind of signal can be simulated using the
phased.SteeringVector System object.
steervec = phased.SteeringVector('SensorArray',array);
Tracking Loop
tracker = phased.SumDifferenceMonopulseTracker('SensorArray',array,...
'PropagationSpeed',c,...
'OperatingFrequency',fc);
At each time step, compute the broadside angle of the target with respect to the array. Set the step
time to 0.5 seconds.
T = 0.5;
nsteps = 40;
t = [1:nsteps]*T;
Target Tracking Using Sum-Difference Monopulse Radar
rng = zeros(1,nsteps);
broadang_actual = zeros(1,nsteps);
broadang_est = zeros(1,nsteps);
angerr = zeros(1,nsteps);
Step through the tracking loop. First, provide an estimate of the initial broadside angle. In this
simulation, the actual broadside angle is known, but an error of five degrees is added to model an
imperfect initial estimate.
[tgtrng,tgtang_actual] = rangeangle(x0,[0,0,0].');
broadang0 = az2broadside(tgtang_actual(1),tgtang_actual(2));
broadang_prev = broadang0 + 5.0; % add some sort of error
for n = 1:nsteps
x = targetplatform(T);
[rng(n),tgtang_actual] = rangeangle(x,[0,0,0].');
broadang_actual(n) = az2broadside(tgtang_actual(1),tgtang_actual(2));
signl = steervec(fc,broadang_actual(n)).';
broadang_est(n) = tracker(signl,broadang_prev);
broadang_prev = broadang_est(n);
angerr(n) = broadang_est(n) - broadang_actual(n);
end
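The sum-difference principle the tracker relies on can be sketched independently of the toolbox (NumPy; an idealized, noise-free 8-element half-wavelength ULA): near boresight, the imaginary part of the difference-to-sum beam ratio is an odd, nearly linear function of the off-boresight angle, which is what drives the angle update.

```python
import numpy as np

N = 8
steer = lambda deg: np.exp(1j*np.pi*np.arange(N)*np.sin(np.deg2rad(deg)))

wsum = np.ones(N)                                   # sum beam at broadside
wdiff = np.hstack([-np.ones(N//2), np.ones(N//2)])  # difference beam

def monopulse_ratio(deg):
    x = steer(deg)                                  # noise-free snapshot from deg
    return np.imag((wdiff.conj() @ x)/(wsum.conj() @ x))

r_neg, r0, r_pos = monopulse_ratio(-2), monopulse_ratio(0), monopulse_ratio(2)
```

The ratio is zero exactly at boresight and changes sign with the direction of the offset, so feeding it back steers the beam onto the target.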
Results
Plot the range as a function of time showing the point of closest approach.
plot(t,rng/1000,'-o')
xlabel('time (sec)')
ylabel('Range (km)')
plot(t,broadang_actual,'-o')
xlabel('time (sec)')
ylabel('Broadside angle (deg)')
A monopulse tracker cannot solve for the direction angle if the angular separation between samples
is too large. The maximum allowable angular separation is approximately one-half the null-to-null
beamwidth of the array. For an 8-element, half-wavelength-spaced ULA, the half-beamwidth is
approximately 14.3 degrees at broadside. In this simulation, the largest angular difference between
samples is
maxangdiff = max(abs(diff(broadang_est)));
disp(maxangdiff)
0.2942
Plot the angle error. This is the difference between the estimated angle and the actual angle. The plot
shows a very small error, on the order of microdegrees.
plot(t,angerr,'-o')
xlabel('time (sec)')
ylabel('Angle error (deg)')
See Also
Functions
musicdoa
Objects
phased.MUSICEstimator | phased.MUSICEstimator2D
7
Angle-Doppler Response
In this section...
“Benefits of Visualizing Angle-Doppler Response” on page 7-2
“Angle-Doppler Response of Stationary Array to Stationary Target” on page 7-2
“Angle-Doppler Response to Stationary Target at Moving Array” on page 7-4
You can use the phased.AngleDopplerResponse object to visualize the angle-Doppler response of
input data. The phased.AngleDopplerResponse object uses a conventional narrowband (phase
shift) beamformer and an FFT-based Doppler filter to compute the angle-Doppler response.
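The underlying computation — a narrowband phase-shift beamformer across the array dimension combined with an FFT across pulses — can be sketched for one range gate in NumPy (assumed sizes and target parameters; not the object's implementation):

```python
import numpy as np

M, Np = 6, 32                      # elements and pulses
theta, fd_bin = 20.0, 5            # target azimuth and Doppler bin (assumed)

# Element-by-pulse snapshot at one range gate for a single moving scatterer
el = np.exp(1j*np.pi*np.arange(M)*np.sin(np.deg2rad(theta)))  # spatial ramp
dop = np.exp(2j*np.pi*fd_bin*np.arange(Np)/Np)                # Doppler ramp
snapshot = np.outer(el, dop)                                  # M-by-Np

scan = np.arange(-90, 91)
A = np.exp(1j*np.pi*np.outer(np.arange(M), np.sin(np.deg2rad(scan))))
resp = np.fft.fft(A.conj().T @ snapshot, axis=1)              # angle x Doppler

ang_idx, dop_idx = np.unravel_index(np.argmax(np.abs(resp)), resp.shape)
```

The map peaks at the scatterer's angle and Doppler bin, which is what plotResponse displays.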
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Construct the objects needed to simulate the target response at the array.
antenna = phased.IsotropicAntennaElement...
('FrequencyRange',[8e8 5e9],'BackBaffled',true);
lambda = physconst('LightSpeed')/4e9;
array = phased.ULA(6,'Element',antenna,'ElementSpacing',lambda/2);
waveform = phased.RectangularWaveform('PulseWidth',2e-006,...
'PRF',5e3,'SampleRate',1e6,'NumPulses',1);
radiator = phased.Radiator('Sensor',array,...
'PropagationSpeed',physconst('LightSpeed'),...
'OperatingFrequency',4e9);
collector = phased.Collector('Sensor',array,...
'PropagationSpeed',physconst('LightSpeed'),...
'OperatingFrequency',4e9);
txplatform = phased.Platform('InitialPosition',[0;0;0],...
'Velocity',[0;0;0]);
target = phased.RadarTarget('MeanRCS',1,'Model','nonfluctuating');
targetplatform = phased.Platform('InitialPosition',[5e3; 5e3; 0],...
'Velocity',[0;0;0]);
freespace = phased.FreeSpace('OperatingFrequency',4e9,...
'TwoWayPropagation',false,'SampleRate',1e6);
receiver = phased.ReceiverPreamp('NoiseFigure',0,...
'EnableInputPort',true,'SampleRate',1e6,'Gain',40);
transmitter = phased.Transmitter('PeakPower',1e4,...
'InUseOutputPort',true,'Gain',40);
Propagate ten rectangular pulses to and from the target, and collect the responses at the array.
PRF = 5e3;
NumPulses = 10;
wav = waveform();
tgtloc = targetplatform.InitialPosition;
txloc = txplatform.InitialPosition;
M = waveform.SampleRate*1/PRF;
N = array.NumElements;
rxsig = zeros(M,N,NumPulses);
for n = 1:NumPulses
% get angle to target
[~,tgtang] = rangeangle(tgtloc,txloc);
% transmit pulse
[txsig,txstatus] = transmitter(wav);
% radiate pulse
txsig = radiator(txsig,tgtang);
% propagate pulse to target
txsig = freespace(txsig,txloc,tgtloc,[0;0;0],[0;0;0]);
% reflect pulse off stationary target
txsig = target(txsig);
% propagate pulse to array
txsig = freespace(txsig,tgtloc,txloc,[0;0;0],[0;0;0]);
% collect pulse
rxsig(:,:,n) = collector(txsig,tgtang);
% receive pulse
rxsig(:,:,n) = receiver(rxsig(:,:,n),~txstatus);
end
Find and plot the angle-Doppler response. Then, add the label +Target at the expected azimuth
angle and Doppler frequency.
tgtdoppler = 0;
tgtLocation = global2localcoord(tgtloc,'rs',txloc);
tgtazang = tgtLocation(1);
tgtelang = tgtLocation(2);
tgtrng = tgtLocation(3);
tgtcell = val2ind(tgtrng,...
physconst('LightSpeed')/(2*waveform.SampleRate));
snapshot = shiftdim(rxsig(tgtcell,:,:)); % Remove singleton dim
response = phased.AngleDopplerResponse('SensorArray',array,...
'OperatingFrequency',4e9, ...
'PropagationSpeed',physconst('LightSpeed'),...
'PRF',PRF, 'ElevationAngle',tgtelang);
plotResponse(response,snapshot);
text(tgtazang,tgtdoppler,'+Target');
7 Space-Time Adaptive Processing (STAP)
As expected, the angle-Doppler response shows the greatest response at zero Doppler and 45°
azimuth.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
The scenario in this example is identical to that of “Angle-Doppler Response of Stationary Array to
Stationary Target” on page 7-2 except that the ULA is moving at a constant velocity. For convenience,
the MATLAB™ code to set up the objects is repeated. Notice that the InitialPosition and
Velocity properties of the txplatform System object™ have changed. The InitialPosition
property value is set to simulate an airborne ULA. The motivation for selecting the particular value of
the Velocity property is explained in “Applicability of DPCA Pulse Canceller” on page 7-7.
antenna = phased.IsotropicAntennaElement...
('FrequencyRange',[8e8 5e9],'BackBaffled',true);
lambda = physconst('LightSpeed')/4e9;
array = phased.ULA(6,'Element',antenna,'ElementSpacing',lambda/2);
waveform = phased.RectangularWaveform('PulseWidth',2e-006,...
'PRF',5e3,'SampleRate',1e6,'NumPulses',1);
radiator = phased.Radiator('Sensor',array,...
'PropagationSpeed',physconst('LightSpeed'),...
'OperatingFrequency',4e9);
collector = phased.Collector('Sensor',array,...
'PropagationSpeed',physconst('LightSpeed'),...
'OperatingFrequency',4e9);
vy = (array.ElementSpacing*waveform.PRF)/2;
txplatform = phased.Platform('InitialPosition',[0;0;3e3],...
'Velocity',[0;vy;0]);
target = phased.RadarTarget('MeanRCS',1,'Model','nonfluctuating');
tgtvel = [0;0;0];
targetplatform = phased.Platform('InitialPosition',[5e3; 5e3; 0],...
'Velocity',tgtvel);
freespace = phased.FreeSpace('OperatingFrequency',4e9,...
'TwoWayPropagation',false,'SampleRate',1e6);
receiver = phased.ReceiverPreamp('NoiseFigure',0,...
'EnableInputPort',true,'SampleRate',1e6,'Gain',40);
transmitter = phased.Transmitter('PeakPower',1e4,...
'InUseOutputPort',true,'Gain',40);
Transmit ten rectangular pulses toward the target as the ULA is moving. Then, collect the received
echoes.
PRF = 5e3;
NumPulses = 10;
wav = waveform();
tgtloc = targetplatform.InitialPosition;
M = waveform.SampleRate*1/PRF;
N = array.NumElements;
rxsig = zeros(M,N,NumPulses);
fasttime = unigrid(0,1/waveform.SampleRate,1/PRF,'[)');
rangebins = (physconst('LightSpeed')*fasttime)/2;
for n = 1:NumPulses
% move transmitter
[txloc,txvel] = txplatform(1/PRF);
% get angle to target
[~,tgtang] = rangeangle(tgtloc,txloc);
% transmit pulse
[txsig,txstatus] = transmitter(wav);
% radiate pulse
txsig = radiator(txsig,tgtang);
% propagate pulse to target
txsig = freespace(txsig,txloc,tgtloc,txvel,tgtvel);
% reflect pulse off stationary target
txsig = target(txsig);
% propagate pulse to array
txsig = freespace(txsig,tgtloc,txloc,tgtvel,txvel);
% collect pulse
rxsig(:,:,n) = collector(txsig,tgtang);
% receive pulse
rxsig(:,:,n) = receiver(rxsig(:,:,n),~txstatus);
end
Calculate the target angles and range with respect to the ULA. Then, calculate the Doppler shift
induced by the motion of the phased array.
sp = radialspeed(tgtloc,tgtvel,txloc,txvel);
tgtdoppler = 2*speed2dop(sp,lambda);
tgtLocation = global2localcoord(tgtloc,'rs',txloc);
tgtazang = tgtLocation(1);
tgtelang = tgtLocation(2);
tgtrng = tgtLocation(3);
The two-way Doppler shift is approximately 1626 Hz. The azimuth angle is 45° and is identical to the
value obtained in the stationary ULA example.
tgtcell = val2ind(tgtrng,...
physconst('LightSpeed')/(2*waveform.SampleRate));
snapshot = shiftdim(rxsig(tgtcell,:,:)); % Remove singleton dim
hadresp = phased.AngleDopplerResponse('SensorArray',array,...
'OperatingFrequency',4e9, ...
'PropagationSpeed',physconst('LightSpeed'),...
'PRF',PRF, 'ElevationAngle',tgtelang);
plotResponse(hadresp,snapshot);
text(tgtazang,tgtdoppler,'+Target');
The angle-Doppler response shows the greatest response at 45° azimuth at the expected Doppler
shift.
Displaced Phase Center Antenna Pulse Canceller
You can implement a DPCA pulse canceller with phased.DPCACanceller. This implementation
assumes that the entire array is used on transmit. On receive, the array is divided into two subarrays.
The phase centers of the subarrays are separated by twice the distance the platform moves in one
pulse repetition interval.
The DPCA pulse canceller is applicable when both these conditions are true: the array moves along
its axis at a constant speed, and

vT = d/2 (7-1)

where v is the speed of the platform, T is the pulse repetition interval, and d is the element spacing
of the ULA.
The target has a nonfluctuating RCS of 1 square meter and moves with a constant velocity vector of
(15,15,0).
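The cancellation mechanism can be sketched with a toy one-way phase-center model (NumPy; all numbers are assumed, and the two-way propagation of the real system is collapsed into phase centers separated by exactly the distance moved per pulse): the trailing phase center on pulse n + 1 occupies the position the leading one had on pulse n, so subtracting those samples removes stationary clutter while a moving target's pulse-to-pulse Doppler phase survives.

```python
import numpy as np

rng = np.random.default_rng(7)
k = 2*np.pi/0.075            # wavenumber for an assumed 7.5 cm wavelength
PRI = 2e-4                   # pulse repetition interval (s)
vT = 0.01                    # platform displacement per PRI (m)
Np = 16

# Stationary clutter: many scatterers at random angles with random amplitudes
amp = rng.standard_normal(200) + 1j*rng.standard_normal(200)
u = np.sin(np.deg2rad(rng.uniform(-60, 60, 200)))

def clutter_at(y):           # clutter field seen by a phase center located at y
    return np.sum(amp*np.exp(1j*k*u*y))

# A moving target adds a pulse-to-pulse Doppler phase on top of the clutter
fd = 800.0
rear = np.array([clutter_at(n*vT) + np.exp(2j*np.pi*fd*n*PRI) for n in range(Np)])
front = np.array([clutter_at(n*vT + vT) + np.exp(2j*np.pi*fd*n*PRI) for n in range(Np)])

# DPCA: the front sample at pulse n aligns with the rear sample at pulse n+1
residue = front[:-1] - rear[1:]
```

The raw returns are dominated by clutter, while the residue retains only the target's Doppler-modulated component.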
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
PRF = 5e3;
fc = 4e9;
fs = 1e6;
c = physconst('LightSpeed');
antenna = phased.IsotropicAntennaElement...
('FrequencyRange',[8e8 5e9],'BackBaffled',true);
lambda = c/fc;
array = phased.ULA(6,'Element',antenna,'ElementSpacing',lambda/2);
waveform = phased.RectangularWaveform('PulseWidth',2e-6,...
'PRF',PRF,'SampleRate',fs,'NumPulses',1);
radiator = phased.Radiator('Sensor',array,...
'PropagationSpeed',c,...
'OperatingFrequency',fc);
collector = phased.Collector('Sensor',array,...
'PropagationSpeed',c,...
'OperatingFrequency',fc);
vy = (array.ElementSpacing*PRF)/2;
transmitplatform = phased.Platform('InitialPosition',[0;0;3e3],...
'Velocity',[0;vy;0]);
clutter = phased.ConstantGammaClutter('Sensor',array,...
'PropagationSpeed',radiator.PropagationSpeed,...
'OperatingFrequency',radiator.OperatingFrequency,...
'SampleRate',fs,...
'TransmitSignalInputPort',true,...
'PRF',PRF,...
'Gamma',surfacegamma('woods',radiator.OperatingFrequency),...
'EarthModel','Flat',...
'BroadsideDepressionAngle',0,...
'MaximumRange',radiator.PropagationSpeed/(2*PRF),...
'PlatformHeight',transmitplatform.InitialPosition(3),...
'PlatformSpeed',norm(transmitplatform.Velocity),...
'PlatformDirection',[90;0]);
target = phased.RadarTarget('MeanRCS',1,...
'Model','Nonfluctuating','OperatingFrequency',fc);
targetplatform = phased.Platform('InitialPosition',[5e3; 5e3; 0],...
'Velocity',[15;15;0]);
channel = phased.FreeSpace('OperatingFrequency',fc,...
'TwoWayPropagation',false,'SampleRate',fs);
receiver = phased.ReceiverPreamp('NoiseFigure',0,...
'EnableInputPort',true,'SampleRate',fs,'Gain',40);
transmitter = phased.Transmitter('PeakPower',1e4,...
'InUseOutputPort',true,'Gain',40);
Propagate ten rectangular pulses to target and back, and collect the responses at the array. Also,
compute clutter echoes using the constant gamma model with a gamma value corresponding to
wooded terrain.
NumPulses = 10;
wav = waveform();
M = fs/PRF;
N = array.NumElements;
rxsig = zeros(M,N,NumPulses);
csig = zeros(M,N,NumPulses);
fasttime = unigrid(0,1/fs,1/PRF,'[)');
rangebins = (c*fasttime)/2;
clutter.SeedSource = 'Property';
clutter.Seed = 5;
for n = 1:NumPulses
[txloc,txvel] = transmitplatform(1/PRF); % move transmitter
[tgtloc,tgtvel] = targetplatform(1/PRF); % move target
[~,tgtang] = rangeangle(tgtloc,txloc); % get angle to target
[txsig1,txstatus] = transmitter(wav); % transmit pulse
csig(:,:,n) = clutter(txsig1(abs(txsig1)>0)); % collect clutter
txsig = radiator(txsig1,tgtang); % radiate pulse
txsig = channel(txsig,txloc,tgtloc,...
txvel,tgtvel); % propagate to target
txsig = target(txsig); % reflect off target
txsig = channel(txsig,tgtloc,txloc,...
tgtvel,txvel); % propagate to array
rxsig(:,:,n) = collector(txsig,tgtang); % collect pulse
rxsig(:,:,n) = receiver(rxsig(:,:,n) + csig(:,:,n),...
~txstatus); % receive pulse plus clutter return
end
Determine the target range, range gate, and two-way Doppler shift.
sp = radialspeed(tgtloc,tgtvel,txloc,txvel);
tgtdoppler = 2*speed2dop(sp,lambda);
tgtLocation = global2localcoord(tgtloc,'rs',txloc);
tgtazang = tgtLocation(1);
tgtelang = tgtLocation(2);
tgtrng = tgtLocation(3);
tgtcell = val2ind(tgtrng,c/(2*fs));
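The quantities above reduce to simple closed forms: the two-way Doppler shift is fd = 2v/λ for radial speed v, and val2ind maps a range to a 1-based gate index with gate width c/(2·fs). A Python sketch of the same arithmetic with illustrative numbers (not the simulation's computed values):

```python
c = 3e8                 # propagation speed (m/s), rounded for illustration
lam = c / 4e9           # wavelength at the 4 GHz carrier
fs = 1e6                # sample rate (Hz)

v_radial = 10.0         # assumed radial speed between radar and target (m/s)
tgt_doppler = 2 * v_radial / lam                  # two-way Doppler shift (Hz)

gate_width = c / (2 * fs)                         # range-gate width: 150 m here
tgt_rng = 7.5e3                                   # assumed target range (m)
tgt_cell = int(round(tgt_rng / gate_width)) + 1   # 1-based index, like val2ind
```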
Use noncoherent pulse integration to visualize the signal received by the ULA for the first of the ten
pulses. Mark the target range gate with a vertical dashed line.
firstpulse = pulsint(rxsig(:,:,1),'noncoherent');
plot([tgtrng/1e3 tgtrng/1e3],[0 0.1],'-.',rangebins/1e3,firstpulse)
title('Noncoherent Integration of 1st Pulse at the ULA')
xlabel('Range (km)')
ylabel('Magnitude');
The large-magnitude clutter returns obscure the presence of the target. Apply the DPCA pulse
canceller to reject the clutter.
canceller = phased.DPCACanceller('SensorArray',array,'PRF',PRF,...
'PropagationSpeed',c,...
'OperatingFrequency',fc,...
'Direction',[0;0],'Doppler',tgtdoppler,...
'WeightsOutputPort',true);
[y,w] = canceller(rxsig,tgtcell);
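The DPCA idea itself is easy to see in isolation: with the platform moving half an element spacing per pulse interval, the phase centers of adjacent elements on successive pulses coincide, so stationary clutter cancels when the overlapping channels are subtracted. A toy NumPy sketch of that cancellation (a conceptual illustration only, not the phased.DPCACanceller implementation, which also applies space-time steering toward the target direction and Doppler):

```python
import numpy as np

rng = np.random.default_rng(0)
n_gates, n_elem = 8, 6
pulse1 = rng.standard_normal((n_gates, n_elem))   # clutter snapshot, pulse 1
# Platform motion of d/2 per PRI means element k+1 on pulse 2 shares a phase
# center with element k on pulse 1 and sees the same stationary clutter.
pulse2 = np.empty_like(pulse1)
pulse2[:, 1:] = pulse1[:, :-1]
pulse2[:, 0] = rng.standard_normal(n_gates)       # trailing element: new clutter
# Two-pulse DPCA: subtract the channels whose phase centers overlap.
residue = pulse1[:, :-1] - pulse2[:, 1:]
```

The residue is identically zero for the stationary clutter, while a moving target (whose echo does not repeat element-shifted from pulse to pulse) would survive the subtraction.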
Plot the result of applying the DPCA pulse canceller. Mark the target range gate with a vertical
dashed line.
plot([tgtrng/1e3,tgtrng/1e3],[0 3.5e-5],'-.',rangebins/1e3,abs(y))
title('DPCA Canceller Output')
xlabel('Range (km)')
ylabel('Magnitude')
The DPCA pulse canceller has significantly rejected the clutter. As a result, the target is visible at the
expected range gate.
Adaptive Displaced Phase Center Antenna Pulse Canceller
Use the adaptive displaced phase center antenna (ADPCA) pulse canceller when:
• Jamming and other interference effects are substantial. The DPCA pulse canceller is susceptible to
interference because it does not use the received data to compute its weights.
• The sample matrix inversion (SMI) algorithm is inapplicable because of computational expense or
a rapidly changing environment.
The phased.ADPCACanceller object implements this algorithm. In particular, the object lets you specify:
• The number of training cells. The algorithm uses training cells to estimate the interference. In
general, a larger number of training cells leads to a better estimate of interference.
• The number of guard cells close to the target cells. The algorithm recognizes guard cells to
prevent target returns from contaminating the estimate of the interference.
To repeat the scenario for convenience, the airborne radar platform is a six-element ULA operating at
4 GHz. The array elements are spaced at one-half the wavelength of the 4 GHz carrier frequency. The
radar emits ten rectangular pulses of two μs duration with a PRF of 5 kHz. The platform is moving
along the array axis with a speed equal to one-half the product of the element spacing and the PRF.
ADPCA pulse cancellation is applicable because vT = d/2, where v is the platform speed, T is the
pulse repetition interval, and d is the element spacing.
The target has a nonfluctuating RCS of 1 square meter and is moving with a constant velocity vector
of (15,15,0). A stationary broadband barrage jammer with an effective radiated power of 1 kW is
located at (3.5e3,1e3,0).
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
PRF = 5e3;
fc = 4e9;
fs = 1e6;
c = physconst('LightSpeed');
antenna = phased.IsotropicAntennaElement...
('FrequencyRange',[8e8 5e9],'BackBaffled',true);
lambda = c/fc;
array = phased.ULA(6,'Element',antenna,'ElementSpacing',lambda/2);
waveform = phased.RectangularWaveform('PulseWidth', 2e-6,...
'PRF',PRF,'SampleRate',fs,'NumPulses',1);
radiator = phased.Radiator('Sensor',array,...
'PropagationSpeed',c,...
'OperatingFrequency',fc);
collector = phased.Collector('Sensor',array,...
'PropagationSpeed',c,...
'OperatingFrequency',fc);
vy = (array.ElementSpacing * PRF)/2;
transmitterplatform = phased.Platform('InitialPosition',[0;0;3e3],...
'Velocity',[0;vy;0]);
clutter = phased.ConstantGammaClutter('Sensor',array,...
'PropagationSpeed',radiator.PropagationSpeed,...
'OperatingFrequency',radiator.OperatingFrequency,...
'SampleRate',fs,...
'TransmitSignalInputPort',true,...
'PRF',PRF,...
'Gamma',surfacegamma('woods',radiator.OperatingFrequency),...
'EarthModel','Flat',...
'BroadsideDepressionAngle',0,...
'MaximumRange',radiator.PropagationSpeed/(2*PRF),...
'PlatformHeight',transmitterplatform.InitialPosition(3),...
'PlatformSpeed',norm(transmitterplatform.Velocity),...
'PlatformDirection',[90;0]);
target = phased.RadarTarget('MeanRCS',1,...
'Model','Nonfluctuating','OperatingFrequency',fc);
targetplatform = phased.Platform('InitialPosition',[5e3; 5e3; 0],...
'Velocity',[15;15;0]);
jammer = phased.BarrageJammer('ERP',1e3,'SamplesPerFrame',200);
jammerplatform = phased.Platform(...
'InitialPosition',[3.5e3; 1e3; 0],'Velocity',[0;0;0]);
channel = phased.FreeSpace('OperatingFrequency',fc,...
'TwoWayPropagation',false,'SampleRate',fs);
receiver = phased.ReceiverPreamp('NoiseFigure',0,...
'EnableInputPort',true,'SampleRate',fs,'Gain',40);
transmitter = phased.Transmitter('PeakPower',1e4,...
'InUseOutputPort',true,'Gain',40);
Propagate the ten rectangular pulses to the target and back and collect the responses at the array.
Compute clutter echoes using the constant gamma model with a gamma value corresponding to
wooded terrain. Also, propagate the jamming signal from the jammer location to the airborne ULA.
NumPulses = 10;
wav = waveform();
M = fs/PRF;
N = array.NumElements;
rxsig = zeros(M,N,NumPulses);
csig = zeros(M,N,NumPulses);
jsig = zeros(M,N,NumPulses);
fasttime = unigrid(0,1/fs,1/PRF,'[)');
rangebins = (c * fasttime)/2;
clutter.SeedSource = 'Property';
clutter.Seed = 40543;
jammer.SeedSource = 'Property';
jammer.Seed = 96703;
receiver.SeedSource = 'Property';
receiver.Seed = 56113;
jamloc = jammerplatform.InitialPosition;
for n = 1:NumPulses
[txloc,txvel] = transmitterplatform(1/PRF); % move transmitter
[tgtloc,tgtvel] = targetplatform(1/PRF); % move target
[~,tgtang] = rangeangle(tgtloc,txloc); % get angle to target
[txsig,txstatus] = transmitter(wav); % transmit pulse
csig(:,:,n) = clutter(txsig(abs(txsig)>0)); % collect clutter
rxsig(:,:,n) = receiver(...
rxsig(:,:,n) + csig(:,:,n) + jsig(:,:,n),...
~txstatus); % receive pulse plus clutter return plus jammer signal
end
Determine the target range, range gate, and two-way Doppler shift.
Process the array responses using the nonadaptive DPCA pulse canceller. To do so, construct the
DPCA object, and apply it to the received signals.
canceller = phased.DPCACanceller('SensorArray',array,'PRF',PRF,...
'PropagationSpeed',c,...
'OperatingFrequency',fc,...
'Direction',[0;0],'Doppler',tgtdoppler,...
'WeightsOutputPort',true);
[y,w] = canceller(rxsig,tgtcell);
Plot the DPCA result with the target range marked by a vertical dashed line. Notice how the presence
of the interference signal has obscured the target.
plot([tgtrng/1e3,tgtrng/1e3],[0 7e-2],'-.',rangebins/1e3,abs(y))
axis tight
xlabel('Range (km)')
ylabel('Magnitude')
title('DPCA Canceller Output with Jamming')
Apply the adaptive DPCA pulse canceller. Use 100 training cells and 4 guard cells, two on each side of
the target range gate.
canceller = phased.ADPCACanceller('SensorArray',array,'PRF',PRF,...
'PropagationSpeed',c,...
'OperatingFrequency',fc,...
'Direction',[0;0],'Doppler',tgtdoppler,...
'WeightsOutputPort',true,'NumGuardCells',4,...
'NumTrainingCells',100);
[y_adpca,w_adpca] = canceller(rxsig,tgtcell);
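With 'NumGuardCells',4 and 'NumTrainingCells',100, the interference statistics come from 50 range gates on each side of the target, skipping the two guard gates adjacent to it on each side. A sketch of that index bookkeeping (the exact layout is an assumption made for illustration; the object's internal counting may differ at the data edges):

```python
import numpy as np

tgt_cell = 200                  # assumed target range gate (1-based)
guard_each, train_each = 2, 50  # 4 guard cells and 100 training cells in total
lower = np.arange(tgt_cell - guard_each - train_each, tgt_cell - guard_each)
upper = np.arange(tgt_cell + guard_each + 1,
                  tgt_cell + guard_each + 1 + train_each)
training_cells = np.concatenate([lower, upper])  # gates used for the estimate
```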
Plot the result with the target range marked by a vertical dashed line. Notice how the adaptive DPCA
pulse canceller enables you to detect the target in the presence of the jamming signal.
plot([tgtrng/1e3,tgtrng/1e3],[0 4e-7],'-.',rangebins/1e3,abs(y_adpca))
axis tight
title('ADPCA Canceller Output with Jamming')
xlabel('Range (km)')
ylabel('Magnitude')
Examine the angle-Doppler response. Notice the presence of the clutter ridge in the angle-Doppler
plane and the null at the jammer’s broadside angle for all Doppler frequencies.
angdopplerresponse = phased.AngleDopplerResponse('SensorArray',array,...
'OperatingFrequency',fc,...
'PropagationSpeed',c,...
'PRF',PRF,'ElevationAngle',tgtelang);
plotResponse(angdopplerresponse,w_adpca)
title('Angle-Doppler Response with ADPCA Pulse Cancellation')
text(az2broadside(jamang(1),jamang(2)) + 10,...
0,'\leftarrow Interference Null')
Sample Matrix Inversion Beamformer
The SMI algorithm is computationally expensive and assumes a stationary environment across many
pulses. If you need to suppress clutter returns and jammer interference with less computation, or in a
rapidly changing environment, consider using an ADPCA pulse canceller instead.
The phased.STAPSMIBeamformer object implements the SMI algorithm. In particular, the object
lets you specify:
• The number of training cells. The algorithm uses training cells to estimate the interference. In
general, a larger number of training cells leads to a better estimate of interference.
• The number of guard cells close to the target cells. The algorithm recognizes guard cells to
prevent target returns from contaminating the estimate of the interference.
To repeat the scenario for convenience, the airborne radar platform is a six-element ULA operating at
4 GHz. The array elements are spaced at one-half the wavelength of the 4 GHz carrier frequency. The
radar emits ten rectangular pulses two μs in duration with a PRF of 5 kHz. The platform is moving
along the array axis with a speed equal to one-half the product of the element spacing and the PRF.
The target has a nonfluctuating RCS of 1 square meter and is moving with a constant velocity vector
of (15,15,0). A stationary broadband barrage jammer is located at (3.5e3,1e3,0). The jammer has an
effective radiated power of 1 kW.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
PRF = 5e3;
fc = 4e9;
fs = 1e6;
c = physconst('LightSpeed');
antenna = phased.IsotropicAntennaElement...
('FrequencyRange',[8e8 5e9],'BackBaffled',true);
lambda = c/fc;
array = phased.ULA(6,'Element',antenna,'ElementSpacing',lambda/2);
Propagate the ten rectangular pulses to and from the target and collect the responses at the array.
Compute clutter echoes using the constant gamma model with a gamma value corresponding to
wooded terrain. Also, propagate the jamming signal from the jammer location to the airborne ULA.
NumPulses = 10;
wav = waveform();
M = fs/PRF;
N = array.NumElements;
rxsig = zeros(M,N,NumPulses);
csig = zeros(M,N,NumPulses);
jsig = zeros(M,N,NumPulses);
fasttime = unigrid(0,1/fs,1/PRF,'[)');
rangebins = (c*fasttime)/2;
clutter.SeedSource = 'Property';
clutter.Seed = 40543;
jammer.SeedSource = 'Property';
jammer.Seed = 96703;
receiverpreamp.SeedSource = 'Property';
receiverpreamp.Seed = 56113;
jamloc = jammerplatform.InitialPosition;
for n = 1:NumPulses
[txloc,txvel] = transmitterplatform(1/PRF); % move transmitter
[tgtloc,tgtvel] = targetplatform(1/PRF); % move target
[~,tgtang] = rangeangle(tgtloc,txloc); % get angle to target
[txsig,txstatus] = transmitter(wav); % transmit pulse
csig(:,:,n) = clutter(txsig(abs(txsig)>0)); % collect clutter
rxsig(:,:,n) = receiverpreamp(...
rxsig(:,:,n) + csig(:,:,n) + jsig(:,:,n),...
~txstatus); % receive pulse plus clutter return plus jammer signal
end
Determine the target's range, range gate, and two-way Doppler shift.
Construct an SMI beamformer object. Use 100 training cells, 50 on each side of the target range
gate. Use four guard cells, two range gates in front of the target cell and two range gates beyond the
target cell. Obtain the beamformer response and weights.
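Under the hood, SMI forms a sample covariance matrix R̂ from the training-cell snapshots and computes space-time weights proportional to R̂⁻¹s, where s is the space-time steering vector. A minimal NumPy sketch of that principle (synthetic white-noise training data standing in for interference; this is not the phased.STAPSMIBeamformer internals):

```python
import numpy as np

rng = np.random.default_rng(1)
n_dof, n_train = 12, 100           # space-time degrees of freedom, training cells
s = np.ones(n_dof, dtype=complex)  # assumed space-time steering vector
# Training snapshots: unit-power complex white noise as the interference model.
X = (rng.standard_normal((n_dof, n_train))
     + 1j * rng.standard_normal((n_dof, n_train))) / np.sqrt(2)
R = X @ X.conj().T / n_train       # sample covariance estimate
w = np.linalg.solve(R, s)          # SMI weights: w proportional to R^{-1} s
w = w / (s.conj() @ w)             # normalize to unit gain toward the target
```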
plot([tgtrng,tgtrng],[0 5e-6],'-.',rangebins,abs(y))
axis tight
title('SMI Beamformer Output')
xlabel('Range (meters)')
ylabel('Magnitude')
response = phased.AngleDopplerResponse('SensorArray',array,...
'OperatingFrequency',4e9,'PRF',PRF,...
'PropagationSpeed',physconst('LightSpeed'));
plotResponse(response,weights)
title('Angle-Doppler Response with SMI Beamforming Weights')
8
Detection
Neyman-Pearson Hypothesis Testing
When you choose the NP criterion, you can use npwgnthresh to determine the threshold for the
detection of deterministic signals in white Gaussian noise. The optimal decision rule derives from a
likelihood ratio test (LRT). An LRT chooses between the null and alternative hypotheses based on a
ratio of conditional probabilities.
npwgnthresh enables you to specify the maximum false-alarm probability as a constraint. A false
alarm means determining that the data consists of a signal plus noise, when only noise is present.
For details about the statistical assumptions the npwgnthresh function makes, see the reference
page for that function.
Determine the required signal-to-noise ratio (SNR) in decibels for the NP detector when the
maximum tolerable false-alarm probability is 10^-3.
Pfa = 1e-3;
T = npwgnthresh(Pfa,1,'real');
Determine the actual detection threshold corresponding to the desired false-alarm probability,
assuming the variance is 1.
variance = 1;
threshold = sqrt(variance * db2pow(T));
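For a single real sample in white Gaussian noise, the same threshold has a closed form: Pfa = Q(threshold/σ), so threshold = σ·Q⁻¹(Pfa). A SciPy sketch, which should agree with the npwgnthresh-based value above (about 3.09 for Pfa = 10^-3 and unit variance):

```python
from scipy.stats import norm

pfa = 1e-3
sigma = 1.0                        # noise standard deviation (variance = 1)
threshold = sigma * norm.isf(pfa)  # isf is the inverse Q-function
```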
Verify empirically that the detection threshold results in the desired false-alarm probability under the
null hypothesis. To do so, generate 1 million samples of a Gaussian random variable, and determine
the proportion of samples that exceed the threshold.
rng default
N = 1e6;
x = sqrt(variance) * randn(N,1);
falsealarmrate = sum(x > threshold)/N
falsealarmrate = 9.9500e-04
Plot the first 10,000 samples. The red horizontal line shows the detection threshold.
x1 = x(1:1e4);
plot(x1)
line([1 length(x1)],[threshold threshold],'Color','red')
xlabel('Sample')
ylabel('Value')
You can see that few sample values exceed the threshold. This result is expected because of the small
false-alarm probability.
Determine the required SNR for the NP detector when the maximum tolerable false-alarm probability
is 10^-3.
pfa = 1e-3;
T = npwgnthresh(pfa,2,'real');
falsealarmrate = 9.8900e-04
Determine the required SNR for the NP detector in a coherent detection scheme with one sample.
Use a maximum tolerable false-alarm probability of 10^-3.
pfa = 1e-3;
T = npwgnthresh(pfa,1,'coherent');
Test that this threshold empirically results in the correct false-alarm rate. The sufficient statistic in the
complex-valued case is the real part of the received sample.
rng default
variance = 1;
N = 1e6;
x = sqrt(variance/2)*(randn(N,1)+1j*randn(N,1));
threshold = sqrt(variance*db2pow(T));
falsealarmrate = sum(real(x)>threshold)/length(x)
falsealarmrate = 9.9500e-04
Receiver Operating Characteristics
If you are interested in examining the effect of varying the false-alarm probability on the probability
of detection for a fixed SNR, you can use rocsnr. For example, the threshold SNR for the Neyman-
Pearson detector of a single sample in real-valued Gaussian noise is approximately 13.5 dB. Use
rocsnr to plot how the probability of detection varies as a function of the false-alarm rate at that SNR.
T = npwgnthresh(1e-6,1,'real');
rocsnr(T,'SignalType','real')
The ROC curve lets you easily read off the probability of detection for a given false-alarm rate.
You can use rocsnr to examine detector performance for different received signal types at a fixed
SNR.
SNR = 13.54;
[Pd_real,Pfa_real] = rocsnr(SNR,'SignalType','real',...
'MinPfa',1e-8);
[Pd_coh,Pfa_coh] = rocsnr(SNR,...
'SignalType','NonfluctuatingCoherent',...
'MinPfa',1e-8);
[Pd_noncoh,Pfa_noncoh] = rocsnr(SNR,'SignalType',...
'NonfluctuatingNoncoherent','MinPfa',1e-8);
semilogx(Pfa_real,Pd_real)
hold on
grid on
semilogx(Pfa_coh,Pd_coh,'r')
semilogx(Pfa_noncoh,Pd_noncoh,'k')
xlabel('False-Alarm Probability')
ylabel('Probability of Detection')
legend('Real','Coherent','Noncoherent','location','southeast')
title('ROC Curve Comparison for Nonfluctuating RCS Target')
hold off
The ROC curves clearly demonstrate the superior probability of detection performance for coherent
and noncoherent detectors over the real-valued case.
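For the coherent case, the curve also has a well-known closed form, Pd = Q(Q⁻¹(Pfa) − √(2·SNR)) with SNR in linear units — a standard result for a nonfluctuating target, stated here from general detection theory rather than from the rocsnr documentation. A SciPy sketch:

```python
import numpy as np
from scipy.stats import norm

snr = 10 ** (13.54 / 10)             # 13.54 dB in linear units
pfa = np.logspace(-8, -1, 50)
pd = norm.sf(norm.isf(pfa) - np.sqrt(2 * snr))   # coherent-detector Pd
```

At Pfa = 10^-6 this gives a Pd near 0.98, consistent with the high coherent curve in the comparison plot.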
The rocsnr function accepts an SNR vector as input, letting you quickly examine a number of ROC
curves.
SNRs = (6:2:12);
rocsnr(SNRs,'SignalType','NonfluctuatingNoncoherent')
The graph shows that, as the SNR increases, the supports of the probability distributions under the
null and alternative hypotheses become more disjoint. Therefore, for a given false-alarm probability,
the probability of detection increases.
You can examine the probability of detection as a function of SNR for a fixed false-alarm probability
with rocpfa. For example, to obtain ROC curves for a Swerling I target model at false-alarm
probabilities of (1e-6, 1e-4, 1e-2, 1e-1), call rocpfa with that vector of false-alarm probabilities.
Use rocpfa to examine the effect of SNR on the probability of detection for a detector using
noncoherent integration with a false-alarm probability of 1e-4. Assume the target has a
nonfluctuating RCS and that you are integrating over 5 pulses.
[Pd,SNR] = rocpfa(1e-4,...
'SignalType','NonfluctuatingNoncoherent',...
'NumPulses',5);
figure;
plot(SNR,Pd); xlabel('SNR (dB)');
ylabel('Probability of Detection'); grid on;
title('Nonfluctuating Noncoherent Detector (5 Pulses)');
See Also
Related Examples
• Detector Performance Analysis using ROC Curves
Monte-Carlo ROC Simulation
A ROC curve plots Pd as a function of Pfa. The shape of a ROC curve depends on the received SNR of
the signal. If the arriving signal SNR is known, then the ROC curve shows how well the system
performs in terms of Pd and Pfa. If you specify Pd and Pfa, then you can determine how much power
is needed to achieve this requirement.
You can use the function rocsnr to compute theoretical ROC curves. This example shows a ROC
curve generated by a Monte-Carlo simulation of a single-antenna radar system and compares that
curve with a theoretical curve.
Set the desired probability of detection to 0.9 and the probability of false alarm to 10^-6. Set the
maximum range of the radar to 4000 meters and the range resolution to 50 meters. Set the actual
target range to 3000 meters. Set the target radar cross-section to 1.5 square meters and set the
operating frequency to 10 GHz. All computations are performed in baseband.
c = physconst('LightSpeed');
pd = 0.9;
pfa = 1e-6;
max_range = 4000;
target_range = 3000.0;
range_res = 50;
tgt_rcs = 1.5;
fc = 10e9;
lambda = c/fc;
Any simulation that computes Pfa and Pd requires processing many signals. To keep memory
requirements low, process the signals in chunks of pulses. Set the number of pulses to process to
45000 and set the size of each chunk to 10000.
Npulse = 45000;
Npulsebuffsize = 10000;
Calculate the waveform pulse bandwidth using the pulse range resolution. Calculate the pulse
repetition frequency from the maximum range. Because the signal is baseband, set the sampling
frequency to twice the bandwidth. Calculate the pulse duration from the pulse bandwidth.
pulse_bw = c/(2*range_res);
prf = c/(2*max_range);
fs = 2*pulse_bw;
pulse_duration = 10/pulse_bw;
waveform = phased.LinearFMWaveform('PulseWidth',pulse_duration,...
'SampleRate',fs,'SweepBandwidth',...
pulse_bw,'PRF',prf);
Achieving a particular Pd and Pfa requires that sufficient signal power arrive at the receiver after the
target reflects the signal. Compute the minimum SNR needed to achieve the specified probability of
false alarm and probability of detection by using the Albersheim equation.
snr_min = albersheim(pd,pfa);
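Albersheim's equation is a closed-form approximation for the required SNR of a noncoherent linear detector with a nonfluctuating target. With A = ln(0.62/Pfa) and B = ln(Pd/(1−Pd)), the required SNR in dB for N integrated pulses is approximately −5·log10(N) + (6.2 + 4.54/√(N+0.44))·log10(A + 0.12AB + 1.7B). A sketch of the single-pulse case, which should land near 13.1 dB for Pd = 0.9 and Pfa = 10^-6:

```python
import math

def albersheim_snr(pd, pfa, n=1):
    """Albersheim approximation to the required SNR (dB) for n pulses."""
    a = math.log(0.62 / pfa)
    b = math.log(pd / (1.0 - pd))
    return (-5.0 * math.log10(n)
            + (6.2 + 4.54 / math.sqrt(n + 0.44))
            * math.log10(a + 0.12 * a * b + 1.7 * b))

snr_min_db = albersheim_snr(0.9, 1e-6)
```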
To achieve this SNR, sufficient power must be transmitted to the target. Use the radareqpow
function to estimate the peak transmit power required to achieve the specified SNR in dB for the
target at a range of 3000 meters. The received signal also depends on the target radar cross-section
(RCS), which is assumed to follow a nonfluctuating model (Swerling 0). Set the radar to have
identical transmit and receive gains of 20 dB.
txrx_gain = 20;
peak_power = radareqpow(lambda,target_range,...
snr_min,pulse_duration,'RCS',tgt_rcs,...
'Gain',txrx_gain,'Ts',290.0);
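The peak power comes from inverting the single-pulse radar range equation, SNR = Pt·τ·G²·λ²·σ / ((4π)³·k·Ts·R⁴), for Pt. The sketch below evaluates this loss-free form under the parameter values used above; it illustrates the relationship radareqpow embodies, not its exact implementation:

```python
import math

k_B = 1.380649e-23          # Boltzmann constant (J/K)
c = 299792458.0             # speed of light (m/s)
lam = c / 10e9              # wavelength at 10 GHz
r = 3000.0                  # target range (m)
rcs = 1.5                   # target RCS (m^2)
pulse_bw = c / (2 * 50.0)   # waveform bandwidth from the 50 m range resolution
tau = 10.0 / pulse_bw       # pulse duration used earlier
gain = 10.0 ** (20.0 / 10.0)      # 20 dB tx/rx gain, linear
snr = 10.0 ** (13.11 / 10.0)      # required SNR from Albersheim, linear
t_s = 290.0                 # system noise temperature (K)

peak_power = (snr * (4 * math.pi) ** 3 * k_B * t_s * r ** 4
              / (tau * gain ** 2 * lam ** 2 * rcs))   # watts
```

For these numbers the required peak power works out to a few hundred watts.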
Create the System objects that make up the transmission part of the simulation: the radar platform,
antenna, transmitter, and radiator.
antennaplatform = phased.Platform(...
'InitialPosition',[0; 0; 0],...
'Velocity',[0; 0; 0]);
antenna = phased.IsotropicAntennaElement(...
'FrequencyRange',[5e9 15e9]);
transmitter = phased.Transmitter(...
'Gain',txrx_gain,...
'PeakPower',peak_power,...
'InUseOutputPort',true);
radiator = phased.Radiator(...
'Sensor',antenna,...
'OperatingFrequency',fc);
Create a target System object corresponding to an actual reflecting target with a nonzero radar
cross-section. Reflections from this target simulate actual radar returns. To compute false alarms,
create a second target System object with zero radar cross-section. Reflections from this target are
zero except for noise.
target{1} = phased.RadarTarget(...
'MeanRCS',tgt_rcs,...
'OperatingFrequency',fc);
targetplatform{1} = phased.Platform(...
'InitialPosition',[target_range; 0; 0]);
target{2} = phased.RadarTarget(...
'MeanRCS',0,...
'OperatingFrequency',fc);
targetplatform{2} = phased.Platform(...
'InitialPosition',[target_range; 0; 0]);
Model the propagation environment from the radar to the targets and back.
channel{1} = phased.FreeSpace(...
'SampleRate',fs,...
'TwoWayPropagation',true,...
'OperatingFrequency',fc);
channel{2} = phased.FreeSpace(...
'SampleRate',fs,...
'TwoWayPropagation',true,...
'OperatingFrequency',fc);
Create the collector and receiver. Specify the noise by setting the NoiseMethod property to
'Noise temperature' and the ReferenceTemperature property to 290 K.
collector = phased.Collector(...
'Sensor',antenna,...
'OperatingFrequency',fc);
receiver = phased.ReceiverPreamp(...
'Gain',txrx_gain,...
'NoiseMethod','Noise temperature',...
'ReferenceTemperature',290.0,...
'NoiseFigure',0,...
'SampleRate',fs,...
'EnableInputPort',true);
receiver.SeedSource = 'Property';
receiver.Seed = 2010;
The fast-time grid is the set of time samples within one pulse repetition time interval. Each sample
corresponds to a range bin.
fast_time_grid = unigrid(0,1/fs,1/prf,'[)');
rangebins = c*fast_time_grid/2;
wavfrm = waveform();
[sigtrans,tx_status] = transmitter(wavfrm);
Create the matched filter coefficients from the waveform System object. Then create the matched
filter System object.
MFCoeff = getMatchedFilter(waveform);
matchingdelay = size(MFCoeff,1) - 1;
filter = phased.MatchedFilter(...
'Coefficients',MFCoeff,...
'GainOutputPort',false);
Compute the target range, and then compute the index into the range bin array. Because the target
and radar are stationary, use the same values of position and velocity throughout the simulation loop.
You can assume that the range bin index is constant for the entire simulation.
ant_pos = antennaplatform.InitialPosition;
ant_vel = antennaplatform.Velocity;
tgt_pos = targetplatform{1}.InitialPosition;
tgt_vel = targetplatform{1}.Velocity;
[tgt_rng,tgt_ang] = rangeangle(tgt_pos,ant_pos);
rangeidx = val2ind(tgt_rng,rangebins(2)-rangebins(1),rangebins(1));
Create the signal processing loop. Each step is accomplished by executing the System objects. The
loop processes the pulses twice, once for the target-present condition and once for the target-absent
condition.
rcv_pulses = zeros(length(sigtrans),Npulsebuffsize);
h1 = zeros(Npulse,1);
h0 = zeros(Npulse,1);
Nbuff = floor(Npulse/Npulsebuffsize);
Nrem = Npulse - Nbuff*Npulsebuffsize;
for n = 1:2 % H1 and H0 Hypothesis
trsig = radiator(sigtrans,tgt_ang);
trsig = channel{n}(trsig,...
ant_pos,tgt_pos,...
ant_vel,tgt_vel);
rcvsig = target{n}(trsig);
rcvsig = collector(rcvsig,tgt_ang);
for k = 1:Nbuff
for m = 1:Npulsebuffsize
rcv_pulses(:,m) = receiver(rcvsig,~(tx_status>0));
end
rcv_pulses = filter(rcv_pulses);
rcv_pulses = buffer(rcv_pulses(matchingdelay+1:end),size(rcv_pulses,1));
if n == 1
h1((1:Npulsebuffsize) + (k-1)*Npulsebuffsize) = rcv_pulses(rangeidx,:).';
else
h0((1:Npulsebuffsize) + (k-1)*Npulsebuffsize) = rcv_pulses(rangeidx,:).';
end
end
if (Nrem > 0)
for m = 1:Nrem
rcv_pulses(:,m) = receiver(rcvsig,~(tx_status>0));
end
rcv_pulses = filter(rcv_pulses);
rcv_pulses = buffer(rcv_pulses(matchingdelay+1:end),size(rcv_pulses,1));
if n == 1
h1((1:Nrem) + Nbuff*Npulsebuffsize) = rcv_pulses(rangeidx,1:Nrem).';
else
h0((1:Nrem) + Nbuff*Npulsebuffsize) = rcv_pulses(rangeidx,1:Nrem).';
end
end
end
Compute histograms of the target-present and target-absent returns. Use 100 bins to give a rough
estimate of the spread of signal values. Set the range of histogram values from the smallest signal to
the largest signal.
h1a = abs(h1);
h0a = abs(h0);
thresh_low = min([h1a;h0a]);
thresh_hi = max([h1a;h0a]);
nbins = 100;
binedges = linspace(thresh_low,thresh_hi,nbins);
figure
histogram(h0a,binedges)
hold on
histogram(h1a,binedges)
hold off
title('Target-Absent Vs Target-Present Histograms')
legend('Target Absent','Target Present')
To compute Pd and Pfa, calculate the number of instances that a target-absent return and a target-
present return exceed a given threshold. This set of thresholds has a finer granularity than the bins
used to create the histogram in the previous simulation. Then, normalize these counts by the number
of pulses to get an estimate of the probabilities. The vector sim_pfa is the simulated probability of
false alarm as a function of the threshold, thresh. The vector sim_pd is the simulated probability of
detection, also a function of the threshold. The receiver sets the threshold so that it can determine
whether a target is present or absent. The histogram above suggests that the best threshold is
around 1.8.
nbins = 1000;
thresh_steps = linspace(thresh_low,thresh_hi,nbins);
sim_pd = zeros(1,nbins);
sim_pfa = zeros(1,nbins);
for k = 1:nbins
thresh = thresh_steps(k);
sim_pd(k) = sum(h1a >= thresh);
sim_pfa(k) = sum(h0a >= thresh);
end
sim_pd = sim_pd/Npulse;
sim_pfa = sim_pfa/Npulse;
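The same threshold-sweep counting works for any pair of detection statistics. A compact NumPy sketch with synthetic Gaussian statistics standing in for the simulated radar returns (the +3σ mean shift for the target-present case is an arbitrary choice for illustration):

```python
import numpy as np

rng = np.random.default_rng(2024)
n = 100_000
h0a = np.abs(rng.standard_normal(n))          # target-absent statistic
h1a = np.abs(rng.standard_normal(n) + 3.0)    # target-present statistic
thresh = np.linspace(min(h0a.min(), h1a.min()),
                     max(h0a.max(), h1a.max()), 1000)
# Fraction of statistics at or above each threshold, via sorted search.
sim_pfa = 1.0 - np.searchsorted(np.sort(h0a), thresh, side='left') / n
sim_pd = 1.0 - np.searchsorted(np.sort(h1a), thresh, side='left') / n
```

Both curves fall monotonically as the threshold rises, and at every threshold the detection probability sits above the false-alarm probability.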
To plot the experimental ROC curve, you must invert the Pfa curve so that you can plot Pd against
Pfa. You can invert the Pfa curve only when you can express Pfa as a strictly monotonically
decreasing function of thresh. To express Pfa this way, find all array indices where Pfa is constant
over neighboring indices. Then, remove these values from the Pd and Pfa arrays.
pfa_diff = diff(sim_pfa);
idx = (pfa_diff == 0);
sim_pfa(idx) = [];
sim_pd(idx) = [];
Limit the smallest Pfa to 10^-6.
minpfa = 1e-6;
N = sum(sim_pfa >= minpfa);
sim_pfa = fliplr(sim_pfa(1:N)).';
sim_pd = fliplr(sim_pd(1:N)).';
Compute the theoretical Pfa and Pd values from the smallest Pfa to 1. Then plot the theoretical Pfa
curve.
[theor_pd,theor_pfa] = rocsnr(snr_min,'SignalType',...
'NonfluctuatingNoncoherent',...
'MinPfa',minpfa,'NumPoints',N,'NumPulses',1);
semilogx(theor_pfa,theor_pd)
hold on
semilogx(sim_pfa,sim_pd,'r.')
title('Simulated and Theoretical ROC Curves')
xlabel('Pfa')
ylabel('Pd')
grid on
legend('Theoretical','Simulated','Location','SE')
In the preceding simulation, Pd values at low Pfa do not fall along a smooth curve and do not even
extend down to the specified operating regime. The reason for this is that at very low Pfa levels, very
few, if any, samples exceed the threshold. To generate curves at low Pfa, you must use a number of
samples on the order of the inverse of Pfa. This type of simulation takes a long time. The following
curve uses one million pulses instead of 45,000. To run this simulation, repeat the example, but set
Npulse to 1000000.
Matched Filtering
In this section...
“Reasons for Using Matched Filtering” on page 8-19
“Support for Matched Filtering” on page 8-19
“Matched Filtering of Linear FM Waveform” on page 8-19
“Matched Filtering to Improve SNR for Target Detection” on page 8-21
When you use phased.MatchedFilter, you can customize characteristics of the matched filter
such as the matched filter coefficients and window for spectrum weighting. If you apply spectrum
weighting, you can specify the coverage region and coefficient sample rate; Taylor, Chebyshev, and
Kaiser windows have additional properties you can specify.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Create a linear FM waveform with a duration of 0.1 milliseconds, a sweep bandwidth of 100 kHz, and
a pulse repetition frequency of 5 kHz. Add noise to the linear FM pulse and filter the noisy signal
using a matched filter. This example applies a matched filter with and without spectrum weighting.
Create a matched filter with no spectrum weighting, and a matched filter that uses a Taylor window
for spectrum weighting.
filter = phased.MatchedFilter('Coefficients',wav);
taylorfilter = phased.MatchedFilter('Coefficients',wav,...
'SpectrumWindow','Taylor');
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Place an isotropic antenna element at the global origin (0;0;0). Then, place a target with a
nonfluctuating RCS of 1 square meter approximately 7 km from the transmitter at (5000;5000;10).
Set the operating (carrier) frequency to 10 GHz. To simulate a monostatic radar, set the
InUseOutputPort property on the transmitter to true. Calculate the range and angle from the
transmitter to the target.
Create a rectangular pulse waveform 25 μs in duration with a PRF of 10 kHz. Use a single pulse for
this example. Determine the maximum unambiguous range for the given PRF. Then use the
radareqpow function to determine the peak power required to detect a target with an RCS of 1
square meter at the maximum unambiguous range, given the transmitter operating frequency and
gain. The SNR is based on a desired false-alarm rate of 1e-6 for a noncoherent detector.
waveform = phased.RectangularWaveform('PulseWidth',25e-6,...
'OutputFormat','Pulses','PRF',10e3,'NumPulses',1);
c = physconst('LightSpeed');
maxrange = c/(2*waveform.PRF);
SNR = npwgnthresh(1e-6,1,'noncoherent');
Pt = radareqpow(c/fc,maxrange,SNR,...
waveform.PulseWidth,'RCS',target.MeanRCS,'Gain',transmitter.Gain);
Set the peak transmit power to the output value from radareqpow.
transmitter.PeakPower = Pt;
Create radiator and collector objects that operate at 10 GHz. Create a free space path for the
propagation of the pulse to and from the target. Then, create a receiver and a matched filter for the
rectangular waveform.
radiator = phased.Radiator('PropagationSpeed',c,...
'OperatingFrequency',fc,'Sensor',antenna);
channel = phased.FreeSpace('PropagationSpeed',c,...
'OperatingFrequency',fc,'TwoWayPropagation',false);
collector = phased.Collector('PropagationSpeed',c,...
'OperatingFrequency',fc,'Sensor',antenna);
receiver = phased.ReceiverPreamp('NoiseFigure',0,...
'EnableInputPort',true,'SeedSource','Property','Seed',2e3);
filter = phased.MatchedFilter(...
'Coefficients',getMatchedFilter(waveform),...
'GainOutputPort',true);
After you create all the objects that define your model, you can propagate the pulse to and from the
target. Collect the echo at the receiver, and implement the matched filter to improve the SNR.
Generate the waveform.
wf = waveform();
Transmit the waveform.
[wf,txstatus] = transmitter(wf);
wf = radiator(wf,tgtang);
wf = channel(wf,txloc,tgtloc,[0;0;0],[0;0;0]);
wf = target(wf);
wf = channel(wf,tgtloc,txloc,[0;0;0],[0;0;0]);
wf = collector(wf,tgtang);
rx_puls = receiver(wf,~txstatus);
[mf_puls,mfgain] = filter(rx_puls);
Gd = length(filter.Coefficients)-1;
% Shift the output to compensate for the matched filter delay
mf_puls = [mf_puls(Gd+1:end); mf_puls(1:Gd)];
subplot(2,1,1)
t = unigrid(0,1e-6,1e-4,'[)');
rangegates = c.*t;
rangegates = rangegates/2;
plot(rangegates,abs(rx_puls))
title('Received Pulse')
ylabel('Amplitude')
hold on
plot([tgtrng, tgtrng], [0 max(abs(rx_puls))],'r')
subplot(2,1,2)
plot(rangegates,abs(mf_puls))
title('With Matched Filtering')
xlabel('Meters')
ylabel('Amplitude')
hold on
plot([tgtrng, tgtrng], [0 max(abs(mf_puls))],'r')
hold off
8 Detection
Stretch Processing
In this section...
“Reasons for Using Stretch Processing” on page 8-25
“Support for Stretch Processing” on page 8-25
“Stretch Processing Procedure” on page 8-25
• periodogram
• psd
• findpeaks
5 Convert each peak frequency to the corresponding range value, using the stretchfreq2rng
function.
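For instance, the final conversion step might look like the following sketch, in which the beat-frequency peak, sweep slope, and reference range are assumed values standing in for the outputs of the earlier steps:

```matlab
% Assumed values standing in for the outputs of the earlier steps:
fbeat = 2e5;      % peak frequency reported by findpeaks (Hz)
slope = 1e9;      % sweep slope of the linear FM waveform (Hz/s)
refrng = 5000;    % reference range used by the stretch processor (m)

% Convert the peak frequency to a range estimate.
r = stretchfreq2rng(fbeat,slope,refrng);
```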
See Also
findpeaks | periodogram | phased.LinearFMWaveform | phased.StretchProcessor |
stretchfreq2rng
Related Examples
• Range Estimation Using Stretch Processing
FMCW Range Estimation
1 Dechirp — Dechirp the received signal by mixing it with the transmitted signal. If you use the
dechirp function, the transmitted signal is the reference signal.
2 Find beat frequency — From the dechirped signal, extract the beat frequency or pair of beat
frequencies. If the FMCW signal has a sawtooth shape (up-sweep or down-sweep sawtooth
shape), you extract one beat frequency. If the FMCW signal has a triangular sweep, you extract
up-sweep and down-sweep beat frequencies.
Extracting beat frequencies can use a variety of algorithms. For example, you can use the
following features to help you perform this step:
• pwelch or periodogram
• psd
• findpeaks
• rootmusic
• phased.CFARDetector
3 Compute range — Use the beat frequency or frequencies to compute the corresponding range
value. The beat2range function can perform this computation.
While developing your algorithm, you might also use these related functions:
• range2time
• time2range
• range2bw
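A minimal sketch of the three-step procedure for a sawtooth (up-sweep) system follows. The waveform parameters are assumptions for illustration, and the received signal is a simple loopback placeholder rather than a full radar simulation:

```matlab
fs = 2e6;
c = physconst('LightSpeed');
waveform = phased.FMCWWaveform('SampleRate',fs,...
    'SweepTime',1e-3,'SweepBandwidth',1e6);
xt = waveform();          % transmitted sweep (reference signal)
xr = xt;                  % placeholder for the received signal
xd = dechirp(xr,xt);      % step 1: dechirp against the transmitted signal
fb = rootmusic(xd,1,fs);  % step 2: extract the single beat frequency
slope = waveform.SweepBandwidth/waveform.SweepTime;
rng_est = beat2range(fb,slope,c);   % step 3: beat frequency to range
```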
See Also
beat2range | dechirp | findpeaks | periodogram | phased.FMCWWaveform |
phased.RangeDopplerResponse | pwelch | range2beat | range2bw | range2time |
rdcoupling | rootmusic | time2range
Related Examples
• “Automotive Adaptive Cruise Control Using FMCW Technology” on page 17-367
Range-Doppler Response
In this section...
“Benefits of Producing Range-Doppler Response” on page 8-29
“Support for Range-Doppler Processing” on page 8-29
“Range-Speed Response Pattern of Target” on page 8-30
• See how far away the targets are and how quickly they are approaching or receding.
• Distinguish among targets moving at various speeds at various ranges, in particular:
You can also use the range-Doppler response in nonvisual ways. For example, you can perform peak
detection in the range-Doppler domain and use the information to resolve the range-Doppler coupling
of an FMCW radar system.
This procedure is typically used to produce a range-Doppler response for a pulsed radar system. (In the special case of linear FM pulses, the procedure in “FMCW Radar Systems” on page 8-30 is an alternative option.)
1 Create a phased.RangeDopplerResponse object, setting the RangeMethod property to
'Matched Filter'.
2 Customize these characteristics, or accept default values for any of them:
4 Use plotResponse to plot the range-Doppler response or step to obtain data representing the
range-Doppler response. Include x and matched filter coefficients in your syntax when you call
plotResponse or step.
For examples, see the step reference page or “Range-Speed Response Pattern of Target” on page 8-30.
This procedure is typically used to produce a range-Doppler response for an FMCW radar system. You can also use this procedure for a system that uses linear FM pulsed signals. In the case of pulsed signals, you typically use stretch processing to dechirp the signal.
1 Create a phased.RangeDopplerResponse object, setting the RangeMethod property to
'Dechirp'.
2 Customize these characteristics, or accept default values for any of them:
In the case of an FMCW waveform with a triangle sweep, the sweeps alternate between positive
and negative slopes. However, phased.RangeDopplerResponse is designed to process
consecutive sweeps of the same slope. To apply phased.RangeDopplerResponse for a
triangle-sweep system, use one of the following approaches:
• Specify a positive SweepSlope property value, with x corresponding to upsweeps only. The
true values of Doppler or speed are half of what step returns or plotResponse plots.
• Specify a negative SweepSlope property value, with x corresponding to downsweeps only.
The true values of Doppler or speed are half of what step returns or plotResponse plots.
4 Use plotResponse to plot the range-Doppler response or step to obtain data representing the
range-Doppler response. Include x in the syntax when you call plotResponse or step. If your
data is not already dechirped, also include a reference signal in the syntax.
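As a sketch, already-dechirped, upsweep-only data x (fast-time samples by sweeps) might be processed as follows. The sweep slope, sample rate, and carrier frequency are assumed values, and x is hypothetical noise-only data:

```matlab
response = phased.RangeDopplerResponse('RangeMethod','Dechirp',...
    'PropagationSpeed',physconst('LightSpeed'),...
    'OperatingFrequency',77e9,'SampleRate',1e6,...
    'SweepSlope',1e9,'DopplerOutput','Speed');
x = randn(1024,32) + 1i*randn(1024,32);  % hypothetical dechirped data cube
plotResponse(response,x)
```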
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Place an isotropic antenna element at the global origin (0,0,0). Then, place a target with a
nonfluctuating RCS of 1 square meter at (5000,5000,10), which is approximately 7 km from the
transmitter. Set the operating (carrier) frequency to 10 GHz. To simulate a monostatic radar, set the
InUseOutputPort property on the transmitter to true. Calculate the range and angle from the
transmitter to the target.
antenna = phased.IsotropicAntennaElement(...
'FrequencyRange',[5e9 15e9]);
transmitter = phased.Transmitter('Gain',20,'InUseOutputPort',true);
fc = 10e9;
target = phased.RadarTarget('Model','Nonfluctuating',...
'MeanRCS',1,'OperatingFrequency',fc);
txloc = [0;0;0];
tgtloc = [5000;5000;10];
antennaplatform = phased.Platform('InitialPosition',txloc);
targetplatform = phased.Platform('InitialPosition',tgtloc);
[tgtrng,tgtang] = rangeangle(targetplatform.InitialPosition,...
antennaplatform.InitialPosition);
Create a rectangular pulse waveform 2 μs in duration with a PRF of 10 kHz. Determine the maximum
unambiguous range for the given PRF. Use the radareqpow function to determine the peak power
required to detect a target. This target has an RCS of 1 square meter at the maximum unambiguous
range for the transmitter operating frequency and gain. The SNR is based on a desired false-alarm
rate of 1e-6 for a noncoherent detector.
waveform = phased.RectangularWaveform('PulseWidth',2e-6,...
'OutputFormat','Pulses','PRF',1e4,'NumPulses',1);
c = physconst('LightSpeed');
maxrange = c/(2*waveform.PRF);
SNR = npwgnthresh(1e-6,1,'noncoherent');
Pt = radareqpow(c/fc,maxrange,SNR,...
waveform.PulseWidth,'RCS',target.MeanRCS,'Gain',transmitter.Gain);
Set the peak transmit power to the output value from radareqpow.
transmitter.PeakPower = Pt;
Create radiator and collector objects that operate at 10 GHz. Create a free space path for the
propagation of the pulse to and from the target. Then, create a receiver.
radiator = phased.Radiator(...
'PropagationSpeed',c,...
'OperatingFrequency',fc,'Sensor',antenna);
channel = phased.FreeSpace(...
'PropagationSpeed',c,...
'OperatingFrequency',fc,'TwoWayPropagation',false);
collector = phased.Collector(...
'PropagationSpeed',c,...
'OperatingFrequency',fc,'Sensor',antenna);
receiver = phased.ReceiverPreamp('NoiseFigure',0,...
'EnableInputPort',true,'SeedSource','Property','Seed',2e3);
Propagate 25 pulses to and from the target. Collect the echoes at the receiver, and store them in a 25-column matrix named rx_puls.
numPulses = 25;
rx_puls = zeros(100,numPulses);
% Simulation loop
for n = 1:numPulses
% Generate waveform
wf = waveform();
% Transmit waveform
[wf,txstatus] = transmitter(wf);
wf = radiator(wf,tgtang);
wf = channel(wf,txloc,tgtloc,[0;0;0],[0;0;0]);
wf = target(wf);
wf = channel(wf,tgtloc,txloc,[0;0;0],[0;0;0]);
wf = collector(wf,tgtang);
rx_puls(:,n) = receiver(wf,~txstatus);
end
Create a range-Doppler response object that uses the matched filter approach. Configure this object
to show radial speed rather than Doppler frequency. Use plotResponse to plot the range versus
speed.
rangedoppler = phased.RangeDopplerResponse(...
'RangeMethod','Matched Filter',...
'PropagationSpeed',c,...
'DopplerOutput','Speed','OperatingFrequency',fc);
plotResponse(rangedoppler,rx_puls,getMatchedFilter(waveform))
See Also
phased.RangeDopplerResponse
Related Examples
• “Automotive Adaptive Cruise Control Using FMCW Technology” on page 17-367
To motivate the need for an adaptive procedure, assume a simple binary hypothesis test where you
must decide between the signal-absent and signal-present hypotheses for a single sample. The signal
has amplitude 4, and the noise is zero-mean Gaussian with unit variance.
First, set the false-alarm rate to 0.001 and determine the threshold.
T = npwgnthresh(1e-3,1,'real');
threshold = sqrt(db2pow(T))
threshold =
3.0902
Check that this threshold yields the desired false-alarm rate probability and then compute the
probability of detection.
pfa = 0.5*erfc(threshold/sqrt(2))
pd = 0.5*erfc((threshold-4)/sqrt(2))
pfa =
1.0000e-03
pd =
0.8185
Constant False-Alarm Rate (CFAR) Detectors
Next, assume that the noise power increases by 3.01 dB, doubling the noise variance. If your detector
does not adapt to this increase in variance by determining a new threshold, your false-alarm rate
increases significantly.
pfa = 0.5*erfc(threshold/2)
pfa =
0.0144
The following figure shows the effect of increasing the noise variance on the false-alarm probability
for a fixed threshold.
noisevar = 1:0.1:10;
noisepower = 10*log10(noisevar);
pfa = 0.5*erfc(threshold./sqrt(2*noisevar));
semilogy(noisepower,pfa./1e-3)
grid on
title('Increase in P_{FA} due to Noise Variance')
ylabel('Increase in P_{FA} (Orders of Magnitude)')
xlabel('Noise Power Increase (dB)')
This assumption is key in justifying the use of the training cells to estimate the noise variance in the
CUT. Additionally, the cell-averaging CFAR detector assumes that the training cells do not contain any
signals from targets. Thus, the data in the training cells are assumed to consist of noise only.
• It is preferable to have some buffer, or guard cells, between the CUT and the training cells. The
buffer provided by the guard cells guards against signal leaking into the training cells and
adversely affecting the estimation of the noise variance.
• The training cells should not represent range cells too distant in range from the CUT, as the
following figure illustrates.
The optimum estimator for the noise variance depends on distributional assumptions and the type of
detector. Assume the following:
Note: If you denote this RV by Z = U + jV, the squared magnitude |Z|² follows an exponential distribution with mean σ².
If the samples in training cells are the squared magnitudes of such complex Gaussian RVs, you can
use the sample mean as an estimator of the noise variance.
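The following sketch illustrates this point with simulated training data; the sample count and noise variance are arbitrary choices:

```matlab
sigma2 = 2;                        % assumed true noise variance
Nsamples = 1e5;                    % number of simulated training samples
z = sqrt(sigma2/2)*(randn(Nsamples,1) + 1j*randn(Nsamples,1));
noiseEst = mean(abs(z).^2);        % sample mean of |Z|^2, approaches sigma2
```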
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Create a CFAR detector object with two guard cells, 20 training cells, and a false-alarm probability of
0.001. By default, this object assumes a square-law detector with no pulse integration.
detector = phased.CFARDetector('NumGuardCells',2,...
'NumTrainingCells',20,'ProbabilityFalseAlarm',1e-3);
There are 10 training cells and 1 guard cell on each side of the cell under test (CUT). Set the CUT
index to 12.
CUTidx = 12;
Seed the random number generator for a reproducible set of input data.
rng(1000);
Set the noise variance to 0.25. This value corresponds to an approximate −6 dB SNR. Generate a 23-by-10000 matrix of complex-valued, white Gaussian random variables with the specified variance. Each row of the matrix represents 10,000 Monte Carlo trials for a single cell.
Ntrials = 1e4;
variance = 0.25;
Ncells = 23;
inputdata = sqrt(variance/2)*(randn(Ncells,Ntrials)+1j*randn(Ncells,Ntrials));
Because the example implements a square-law detector, take the squared magnitudes of the elements
in the data matrix.
Z = abs(inputdata).^2;
Provide the output from the square-law operator and the index of the cell under test to the CFAR
detector.
Z_detect = detector(Z,CUTidx);
The output, Z_detect, is a logical vector with 10,000 elements. Sum the elements in Z_detect and
divide by the total number of trials to obtain the empirical false-alarm rate.
pfa = sum(Z_detect)/Ntrials
pfa = 0.0013
The empirical false-alarm rate is 0.0013, which corresponds closely to the desired false-alarm rate of
0.001.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Ntraining = 10;
Nguard = 2;
Pfa_goal = 0.01;
detector = phased.CFARDetector('Method','CA',...
'NumTrainingCells',Ntraining,'NumGuardCells',Nguard,...
'ProbabilityFalseAlarm',Pfa_goal);
The detector has 2 guard cells, 10 training cells, and a false-alarm probability of 0.01. This object
assumes a square-law detector with no pulse integration.
Generate a vector of input data based on a complex-valued white Gaussian random variable.
Ncells = 23;
Ntrials = 100000;
inputdata = 1/sqrt(2)*(randn(Ncells,Ntrials) + ...
1i*randn(Ncells,Ntrials));
In the input data, replace rows 8 and 12 to simulate two targets for the CFAR detector to detect.
inputdata(8,:) = 3*exp(1i*2*pi*rand);
inputdata(12,:) = 9*exp(1i*2*pi*rand);
Because the example implements a square-law detector, take the squared magnitudes of the elements
in the input data vector.
Z = abs(inputdata).^2;
Z_detect = detector(Z,8:12);
The Z_detect matrix has five rows. The first and last rows correspond to the simulated targets. The
three middle rows correspond to noise.
Compute the probability of detection of the two targets. Also, estimate the probability of false alarm
using the noise-only rows.
Pd_1 = sum(Z_detect(1,:))/Ntrials
Pd_1 = 0
Pd_2 = sum(Z_detect(end,:))/Ntrials
Pd_2 = 1
Pfa = max(sum(Z_detect(2:end-1,:),2)/Ntrials)
Pfa = 6.0000e-05
The 0 value of Pd_1 indicates that this detector does not detect the first target.
Change the CFAR detector so it uses the order statistic CFAR algorithm with a rank of 5.
release(detector);
detector.Method = 'OS';
detector.Rank = 5;
Z_detect = detector(Z,8:12);
Pd_1 = sum(Z_detect(1,:))/Ntrials
Pd_1 = 0.5820
Pd_2 = sum(Z_detect(end,:))/Ntrials
Pd_2 = 1
Pfa = max(sum(Z_detect(2:end-1,:),2)/Ntrials)
Pfa = 0.0066
Using the order statistic algorithm instead of the cell-averaging algorithm, the detector detects the
first target in about 58% of the trials.
Measure Intensity Levels Using the Intensity Scope
To examine the data, click the Cursor Measurement button in the toolbar. Two cursors appear, each represented by a pair of cross-hairs. To distinguish the cursors, one pair consists of solid lines and the other of dashed lines, and they are tagged 1 and 2, respectively.
Cursor 1 has solid cross-hairs and overlays the intersection of two signal lines. Cursor 2 has dashed
cross-hairs and overlays a signal-free region. The Cursor Measurements pane shows the
coordinates of the cursors in time and range (labelled X) and the intensities at these positions. Cursor
1 is located at a range of 2775 meters and a time of 3.6 seconds. The signal intensity at this point is
1.989e-6 watts. Cursor 2 is located at a range of 3725 meters and a time of 2.9 seconds. The signal
intensity at this point is 3.327e-7 watts. You can move the cursors to any positions of interest and
obtain the intensity values.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Set the probability of detection, probability of false alarm, maximum range, range resolution,
operating frequency, transmitter gain, and target radar cross-section.
pd = 0.9;
pfa = 1e-6;
max_range = 5000;
range_res = 50;
fc = 10e9;
tx_gain = 20;
tgt_rcs = 1;
Choose the signal propagation speed to be the speed of light, and compute the signal wavelength
corresponding to the operating frequency.
c = physconst('LightSpeed');
lambda = c/fc;
Compute the pulse bandwidth from the range resolution. Set the sampling rate, fs, to twice the pulse
bandwidth. The noise bandwidth is also set to the pulse bandwidth. The radar integrates a number of
pulses set by num_pulse_int. The duration of each pulse is the inverse of the pulse bandwidth.
pulse_bw = c/(2*range_res);
pulse_length = 1/pulse_bw;
fs = 2*pulse_bw;
noise_bw = pulse_bw;
num_pulse_int = 10;
Set the pulse repetition frequency to match the maximum range of the radar.
prf = c/(2*max_range);
Use the Albersheim equation to compute the SNR required to meet the desired probability of
detection and probability of false alarm. Then, use the radar equation to compute the power needed
to achieve the required SNR.
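The code that performs these two computations is not shown above. A sketch consistent with the variables already defined might use the albersheim and radareqpow functions; the variable names snr_min and peak_power are assumptions chosen to match the later code:

```matlab
snr_min = albersheim(pd,pfa,num_pulse_int);       % required SNR (dB)
peak_power = radareqpow(lambda,max_range,snr_min,pulse_length,...
    'RCS',tgt_rcs,'Gain',tx_gain);                % required peak power (W)
```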
waveform = phased.RectangularWaveform('PulseWidth',pulse_length,...
'PRF',prf,'SampleRate',fs);
amplifier = phased.ReceiverPreamp('Gain',20,'NoiseFigure',0,...
'SampleRate',fs,'EnableInputPort',true,'SeedSource','Property',...
'Seed',2007);
transmitter = phased.Transmitter('Gain',tx_gain,'PeakPower',peak_power,...
'InUseOutputPort',true);
targetplatforms = phased.Platform(...
'InitialPosition',[2000.66 3532.63 3845.04; 0 0 0; 0 0 0], ...
'Velocity',[150 -150 0; 0 0 0; 0 0 0]);
radiator = phased.Radiator('Sensor',antenna,'OperatingFrequency',fc);
collector = phased.Collector('Sensor',antenna,'OperatingFrequency',fc);
Set up the fast-time grid. Fast time is the sampling time of the echoed pulse relative to the pulse
transmission time. The range bins are the ranges corresponding to each bin of the fast time grid.
fast_time = unigrid(0,1/fs,1/prf,'[)');
range_bins = c*fast_time/2;
To compensate for range loss, create a time varying gain System object™.
gain = phased.TimeVaryingGain('RangeLoss',2*fspl(range_bins,lambda),...
'ReferenceLoss',2*fspl(max_range,lambda));
Set up Doppler bins. Doppler bins are determined by the pulse repetition frequency. Create an FFT
System object for Doppler processing.
DopplerFFTbins = 32;
DopplerRes = prf/DopplerFFTbins;
fft = dsp.FFT('FFTLengthSource','Property',...
'FFTLength',DopplerFFTbins);
Set up a reduced data cube. Normally, a data cube has fast-time and slow-time dimensions and the
number of sensors. Because the data cube has only one sensor, it is two-dimensional.
rx_pulses = zeros(numel(fast_time),num_pulse_int);
Create two IntensityScope System objects, one for Doppler-time-intensity and the other for range-time-intensity.
dtiscope = phased.IntensityScope('Name','Doppler-Time Display',...
'XLabel','Velocity (m/sec)', ...
'XResolution',dop2speed(DopplerRes,c/fc)/2, ...
'XOffset',dop2speed(-prf/2,c/fc)/2,...
'TimeResolution',0.05,'TimeSpan',5,'IntensityUnits','Mag');
rtiscope = phased.IntensityScope('Name','Range-Time Display',...
'XLabel','Range (m)', ...
'XResolution',c/(2*fs), ...
'TimeResolution',0.05,'TimeSpan',5,'IntensityUnits','Mag');
pri = 1/prf;
nsteps = 200;
for k = 1:nsteps
for m = 1:num_pulse_int
[ant_pos,ant_vel] = radarplatform(pri);
[tgt_pos,tgt_vel] = targetplatforms(pri);
sig = waveform();
[s,tx_status] = transmitter(sig);
[~,tgt_ang] = rangeangle(tgt_pos,ant_pos);
tsig = radiator(s,tgt_ang);
tsig = channels(tsig,ant_pos,tgt_pos,ant_vel,tgt_vel);
rsig = targets(tsig);
rsig = collector(rsig,tgt_ang);
rx_pulses(:,m) = amplifier(rsig,~(tx_status>0));
end
rx_pulses = filter(rx_pulses);
MFdelay = size(MFcoef,1) - 1;
rx_pulses = buffer(rx_pulses((MFdelay + 1):end), size(rx_pulses,1));
rx_pulses = gain(rx_pulses);
range = pulsint(rx_pulses,'noncoherent');
rtiscope(range);
dshift = fft(rx_pulses.');
dshift = fftshift(abs(dshift),1);
dtiscope(mean(dshift,2));
radarplatform(.05);
targetplatforms(.05);
end
All of the targets lie on the x-axis. Two targets are moving along the x-axis and one is stationary.
Because the radar is at the origin, you can read the target speed directly from the Doppler-Time
Display window. The values agree with the specified velocities of -150, 150, and 0 m/sec.
Consider this object as a point-to-point propagation channel. By setting object properties, you can
customize certain characteristics of the environment and the signals propagating through it,
including:
• Propagation speed and sampling rate of the signal you are propagating
• Signal carrier frequency
• Whether the object models one-way or two-way propagation
Each time you call step on a phased.FreeSpace object, you specify not only the signal to
propagate, but also the location and velocity of the signal origin and destination.
You can use fspl to determine the free space path loss, in decibels, for a given distance and
wavelength.
tgtrng = 2*tgtrng;
lambda = physconst('LightSpeed')/1e9;
L = fspl(tgtrng,lambda)
L = 104.7524
Free Space Path Loss
Loss = pow2db((4*pi*tgtrng/lambda)^2)
Loss = 104.7524
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
waveform = phased.LinearFMWaveform('SweepBandwidth',1e5,...
'PulseWidth',5e-5,'OutputFormat','Pulses',...
'NumPulses',1,'SampleRate',1e6,'PRF',1e4);
signal = waveform();
channel = phased.FreeSpace('SampleRate',1e6,...
'TwoWayPropagation',true,'OperatingFrequency',1e9);
y = channel(signal,[1000; 250; 10],[3000; 750; 20],[0;0;0],[0;0;0]);
Plot the magnitude of the transmitted and received pulse to show the amplitude loss and time delay.
t = unigrid(0,1/waveform.SampleRate,1/waveform.PRF,'[)');
subplot(2,1,1)
plot(t.*1e6,abs(signal))
title('Magnitude of Transmitted Pulse')
xlabel('Time (microseconds)')
ylabel('Magnitude')
subplot(2,1,2)
plot(t.*1e6,abs(y))
title('Magnitude of Received Pulse')
xlabel('Time (microseconds)')
ylabel('Magnitude')
9 Environment and Target Models
The delay in the received pulse is approximately 14 μs, the expected value for a distance of 4.123 km.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
The following code constructs the required System objects and calculates the range and angle from
the antenna to the target.
waveform = phased.LinearFMWaveform('SweepBandwidth',1e5,...
'PulseWidth',5e-5,'OutputFormat','Pulses',...
'NumPulses',1,'SampleRate',1e6);
antenna = phased.IsotropicAntennaElement('FrequencyRange',[500e6 1.5e9]);
transmitter = phased.Transmitter('PeakPower',1e3,'Gain',20);
radiator = phased.Radiator('Sensor',antenna,'OperatingFrequency',1e9);
channel = phased.FreeSpace('SampleRate',1e6,...
'TwoWayPropagation',true,'OperatingFrequency',1e9);
target = phased.RadarTarget('MeanRCS',1,'Model','Nonfluctuating');
collector = phased.Collector('Sensor',antenna,'OperatingFrequency',1e9);
sensorpos = [3000;750;20];
tgtpos = [1000;250;10];
[tgtrng,tgtang] = rangeangle(sensorpos,tgtpos);
Because the TwoWayPropagation property is set to true, you compute the total propagation only
once.
pulse = waveform();
pulse = transmitter(pulse);
pulse = radiator(pulse,tgtang);
pulse = channel(pulse,sensorpos,tgtpos,[0;0;0],[0;0;0]);
pulse = target(pulse);
sig = collector(pulse,tgtang);
Alternatively, you can break up the two-way propagation into two separate one-way propagation
paths. You do so by setting the TwoWayPropagation property to false.
channel1 = phased.FreeSpace('SampleRate',1e6,...
'TwoWayPropagation',false,'OperatingFrequency',1e9);
pulse = waveform();
pulse = transmitter(pulse);
pulse = radiator(pulse,tgtang);
pulse = channel1(pulse,sensorpos,tgtpos,[0;0;0],[0;0;0]);
pulse = target(pulse);
pulse = channel1(pulse,tgtpos,sensorpos,[0;0;0],[0;0;0]);
sig = collector(pulse,tgtang);
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Define the signal sample rate, propagation speed, and carrier frequency. Define the signal as a
sinusoid of frequency 150 Hz. Set the sample rate to 1 kHz and the carrier frequency to 300 kHz.
The propagation speed is the speed of light.
fs = 1.0e3;
c = physconst('Lightspeed');
fc = 300e3;
f = 150.0;
N = 1024;
t = (0:N-1)'/fs;
x = exp(1i*2*pi*f*t);
Assume the target is approaching the radar at 1000.0 m/s, and the radar is stationary. Find the
Doppler shift that corresponds to this relative speed.
v = 1000.0;
dop = speed2dop(v,c/fc)
dop = 1.0007
Create a phased.FreeSpace System object™, and use it to propagate the signal from the radar to
the target. Assume the radar is at (0, 0, 0) and the target is at (100, 0, 0).
channel = phased.FreeSpace('SampleRate',fs,...
'PropagationSpeed',c,'OperatingFrequency',fc);
origin_pos = [0;0;0];
dest_pos = [100;0;0];
origin_vel = [0;0;0];
dest_vel = [-v;0;0];
y = channel(x,origin_pos,dest_pos,origin_vel,dest_vel);
Plot the spectrum of the transmitted signal. The peak at 150 Hz reflects the frequency of the signal.
window = 64;
ovlp = 32;
[Pxx,F] = pwelch(x,window,ovlp,N,fs);
plot(F,10*log10(Pxx))
xlabel('Frequency (Hz)')
ylabel('Power/Frequency (dB/Hz)')
title('Transmitted Signal')
Plot the spectrum of the propagated signal. The peak again appears at approximately 150 Hz
because the Doppler shift of about 1 Hz is smaller than the frequency resolution of the plot.
window = 64;
ovlp = 32;
[Pyy,F] = pwelch(y,window,ovlp,N,fs);
plot(F,10*log10(Pyy))
grid
xlabel('Frequency (Hz)')
ylabel('Power/Frequency (dB/Hz)')
title('Propagated Signal')
The Doppler shift is too small to see at this scale. Overlay the two spectra in the region of 150 Hz.
figure
idx = find(F>=130 & F<=170);
plot(F(idx),10*log10(Pxx(idx)),'b')
grid
hold on
plot(F(idx),10*log10(Pyy(idx)),'r')
hold off
xlabel('Frequency (Hz)')
ylabel('Power/Frequency (dB/Hz)')
title('Transmitted and Propagated Signals')
legend('Transmitted','Propagated')
The figure illustrates two propagation paths. From the source position, xs, and the receiver position,
xr, you can compute the arrival angles of both paths, θ′los and θ′rp. The arrival angles are the elevation
and azimuth angles of the arriving radiation with respect to a local coordinate system. In this case,
the local coordinate system coincides with the global coordinate system. You can also compute the
transmitting angles, θlos and θrp. In the global coordinates, the angle of reflection at the boundary is
the same as the angles θrp and θ′rp. The reflection angle is important to know when you use angle-
dependent reflection-loss data. You can determine the reflection angle by using the rangeangle
function and setting the reference axes to the global coordinate system. The total path length for the
line-of-sight path is shown in the figure by Rlos, which is equal to the geometric distance between
source and receiver. The total path length for the reflected path is Rrp = R1 + R2. The quantity L is the
ground range between source and receiver.
Two-Ray Multipath Propagation
You can easily derive exact formulas for path lengths and angles in terms of the ground range and
object heights in the global coordinate system.
R = ‖xs − xr‖
Rlos = R = √((zr − zs)² + L²)
R1 = (zr/(zr + zs))·√((zr + zs)² + L²)
R2 = (zs/(zs + zr))·√((zr + zs)² + L²)
Rrp = R1 + R2 = √((zr + zs)² + L²)
tan θlos = (zs − zr)/L
tan θrp = −(zs + zr)/L
θ′los = −θlos
θ′rp = θrp
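The path-length identities above are easy to verify numerically for an assumed geometry:

```matlab
% Assumed heights and ground range
zs = 10; zr = 30; L = 1000;
R1 = (zr/(zr + zs))*sqrt((zr + zs)^2 + L^2);
R2 = (zs/(zs + zr))*sqrt((zr + zs)^2 + L^2);
Rrp = sqrt((zr + zs)^2 + L^2);
abs(R1 + R2 - Rrp)    % zero to machine precision
```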
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
c = 1500;
fc = 100e3;
fs = 100e3;
relfreqs = [-25000,0,25000];
Set up a stationary radar and moving target and compute the expected Doppler.
rpos = [0;0;0];
rvel = [0;0;0];
tpos = [30/fs*c; 0;0];
tvel = [45;0;0];
dop = -tvel(1)./(c./(relfreqs + fc));
t = (0:199)/fs;
x = sum(exp(1i*2*pi*t.'*relfreqs),2);
channel = phased.WidebandFreeSpace(...
'PropagationSpeed',c,...
'OperatingFrequency',fc,...
'SampleRate',fs);
y = channel(x,rpos,tpos,rvel,tvel);
Plot the spectra of the original signal and the Doppler-shifted signal.
periodogram([x y],rectwin(size(x,1)),1024,fs,'centered')
ylim([-150 0])
legend('original','propagated');
Free-Space Propagation of Wideband Signals
For this wideband signal, you can see that the magnitude of the Doppler shift increases with
frequency. In contrast, for narrowband signals, the Doppler shift is assumed constant over the band.
Radar Target
Radar Target Properties
The phased.RadarTarget System object models a reflected signal from a target. The target may
have a nonfluctuating or fluctuating radar cross section (RCS). This object has the following
modifiable properties:
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
sigma = 1.0;
target = phased.RadarTarget('Model','nonfluctuating','MeanRCS',sigma,...
'PropagationSpeed',physconst('LightSpeed'),'OperatingFrequency',1e9);
For a nonfluctuating target, the reflected waveform equals the incident waveform scaled by the gain

G = √(4πσ/λ²)

Here, σ represents the mean target RCS, and λ is the wavelength of the operating frequency.
Set the signal incident on the target to be a vector of ones to obtain the gain factor used by the
phased.RadarTarget System object™.
x = ones(10,1);
y = target(x)
y = 10×1
11.8245
11.8245
11.8245
11.8245
11.8245
11.8245
11.8245
11.8245
11.8245
11.8245
Compute the gain from the formula to verify that the output of the System object equals the
theoretical value.
lambda = target.PropagationSpeed/target.OperatingFrequency;
G = sqrt(4*pi*sigma/lambda^2)
G = 11.8245
• Several small randomly distributed reflectors with no dominant reflector — This target, at
close range or when the radar uses pulse-to-pulse frequency agility, can exhibit large magnitude
rapid (pulse-to-pulse) fluctuations in the RCS. That same complex reflector at long range with no
frequency agility can exhibit large magnitude fluctuations in the RCS over a longer time scale
(scan-to-scan).
• Dominant reflector along with several small reflectors — The reflectors in this target can
exhibit small magnitude fluctuations on pulse-to-pulse or scan-to-scan time scales, subject to:
To account for significant fluctuations in the RCS, you need to use statistical models. The four
Swerling models, described in the following table, are widely used to cover these kinds of fluctuating-
RCS cases.
You can simulate a Swerling target model by setting the Model property. Use the step method and
set the UPDATERCS input argument to true or false. Setting UPDATERCS to true updates the RCS
value according to the specified probability model each time you call step. If you set UPDATERCS to
false, the previous RCS value is used.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
antenna = phased.IsotropicAntennaElement('BackBaffled',true);
antennapos = phased.Platform('InitialPosition',[0;0;0]);
targetpos = phased.Platform('InitialPosition',[1000; 1000; 0]);
waveform = phased.LinearFMWaveform('PulseWidth',100e-6);
transmitter = phased.Transmitter('PeakPower',1e3,'Gain',40);
radiator = phased.Radiator('OperatingFrequency',1e9, ...
'Sensor',antenna);
channel = phased.FreeSpace('OperatingFrequency',1e9,...
'TwoWayPropagation',true);
target = phased.RadarTarget('MeanRCS',1,'OperatingFrequency',1e9);
collector = phased.Collector('OperatingFrequency',1e9,...
'Sensor',antenna);
wav = waveform();
txwav = transmitter(wav);
radwav = radiator(txwav,[0 0]');
propwav = channel(radwav,antennapos.InitialPosition,...
targetpos.InitialPosition,[0;0;0],[0;0;0]);
reflwav = target(propwav);
collwav = collector(reflwav,[45 0]');
Swerling 1 Target Models
For Swerling 1 and Swerling 2 target models, the total RCS arises from many independent small
scatterers of approximately equal individual RCS. The total RCS may vary with every pulse in a scan
(Swerling 2) or may be constant over a complete scan consisting of multiple pulses (Swerling 1). In
either case, the statistics obey a chi-squared probability density function with two degrees of
freedom.
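A chi-squared distribution with two degrees of freedom, scaled to mean σ̄, is simply an exponential distribution with mean σ̄. This statistical claim can be sketched outside MATLAB; a Python check, with the seed and sample count chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
mean_rcs = 1.0   # sigma-bar, the mean RCS in m^2

# Chi-squared with 2 DOF scaled to mean sigma-bar is exponential:
# p(sigma) = (1/mean) * exp(-sigma/mean)
samples = rng.exponential(mean_rcs, size=100_000)

print(abs(samples.mean() - mean_rcs) < 0.02)   # sample mean is close to sigma-bar
```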
For simplicity, start with a rotating radar having a rotation time of 5 seconds corresponding to a
rotation rate or scan rate of 72 degrees/sec.
Trot = 5.0;
rotrate = 360/Trot;
The radar has a main half-power beam width (HPBW) of 3.0 degrees. During the time that a target is
illuminated by the main beam, radar pulses strike the target and reflect back to the radar. The time
period during which the target is illuminated is called the dwell time. This time period is also called a
scan. The example will process 3 scans of the target.
HPBW = 3.0;
Tdwell = HPBW/rotrate;
Nscan = 3;
The number of pulses that arrive on target during the dwell time depends upon the pulse repetition
frequency (PRF). PRF is the inverse of the pulse repetition interval (PRI). Assume 5000 pulses are
transmitted per second.
prf = 5000.0;
pri = 1/prf;
Np = floor(Tdwell*prf);
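The number of pulses striking the target per dwell follows directly from these quantities as ⌊Tdwell · PRF⌋. A quick numeric check, sketched here in Python rather than MATLAB:

```python
import math

Trot = 5.0                 # rotation period (s)
scanrate = 360 / Trot      # scan rate: 72 deg/s
HPBW = 3.0                 # half-power beamwidth (deg)
Tdwell = HPBW / scanrate   # dwell time (s)
prf = 5000.0               # pulse repetition frequency (Hz)

Np = math.floor(Tdwell * prf)   # pulses on target per dwell
print(Np)                       # 208
```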
You create a Swerling 1 target by using the step method of the RadarTarget System object™. To model a Swerling 1 target, set the Model property of the phased.RadarTarget System object™ to either 'Swerling1' or 'Swerling2'. Both are equivalent. Then, at the first call to the
step method at the beginning of the scan, set the updatercs argument to true. Set updatercs to
false for the remaining calls to step during the scan. This means that the radar cross section is
only updated at the beginning of a scan and remains constant for the remainder of the scan.
Set up the radiating antenna. Assume the operating frequency of the antenna is 1 GHz.
fc = 1e9;
antenna = phased.IsotropicAntennaElement('BackBaffled',true);
radiator = phased.Radiator('OperatingFrequency',fc, ...
'Sensor',antenna);
radarplatform = phased.Platform('InitialPosition',[0;0;0]);
targetplatform = phased.Platform('InitialPosition',[2000;0;0]); % assumed target position
The transmitted signal is a linear FM waveform. Transmit one pulse per call to the step method.
waveform = phased.LinearFMWaveform('PulseWidth',50e-6,...
'OutputFormat','Pulses','NumPulses',1);
transmitter = phased.Transmitter('PeakPower',1000.0,'Gain',40);
channel = phased.FreeSpace('OperatingFrequency',fc,'TwoWayPropagation',true);
Specify the radar target to have a mean RCS of 1 m² and be of Swerling model type 1 or 2. You can use Swerling 1 or 2 interchangeably.

tgtmodel = 'Swerling1';
target = phased.RadarTarget('MeanRCS',1,'OperatingFrequency',fc,...
    'Model',tgtmodel);
collector = phased.Collector('OperatingFrequency',1e9,...
'Sensor',antenna);
wav = waveform();
filter = phased.MatchedFilter('Coefficients',getMatchedFilter(waveform));
z = zeros(Nscan,Np);
tp = zeros(Nscan,Np);
Enter the loop. Set updatercs to true only for the first pulse of the scan.
for m = 1:Nscan
t0 = (m-1)*Trot;
t = t0;
for k = 1:Np
if k == 1
updatercs = true;
else
updatercs = false;
end
t = t + pri;
txwav = transmitter(wav);
[xradar,vradar] = radarplatform(t);
[xtgt,vtgt] = targetplatform(t);
[~,ang] = rangeangle(xtgt,xradar);
radwav = radiator(txwav,ang);
propwav = channel(radwav,xradar,xtgt,vradar,vtgt);
reflwav = target(propwav,updatercs);
collwav = collector(reflwav,ang);
y = filter(collwav);
z(m,k) = max(abs(y));
tp(m,k) = t;
end
end
Plot the amplitudes of the pulses for the scan as a function of time.
plot(tp(:),z(:),'.')
xlabel('Time (sec)')
ylabel('Pulse Amplitudes')
Swerling 2 Target Models
In Swerling 1 and Swerling 2 target models, the total RCS arises from many independent small
scatterers of approximately equal individual RCS. The total RCS may vary with every pulse in a scan
(Swerling 2) or may be constant over a complete scan consisting of multiple pulses (Swerling 1). In
either case, the statistics obey a chi-squared probability density function with two degrees of
freedom.
For simplicity, start with a rotating radar having a rotation time of 5 seconds corresponding to a
rotation or scan rate of 72 degrees/sec.
Trot = 5.0;
scanrate = 360/Trot;
The radar has a main half-power beam width (HPBW) of 3.0 degrees. During the time that a target is
illuminated by the main beam, radar pulses strike the target and reflect back to the radar. The time
period during which the target is illuminated is called the dwell time. This time is also called a scan.
The radar will process 3 scans of the target.
HPBW = 3.0;
Tdwell = HPBW/scanrate;
Nscan = 3;
The number of pulses that arrive on target during the dwell time depends upon the pulse repetition
frequency (PRF). PRF is the inverse of the pulse repetition interval (PRI). Assume 5000 pulses are
transmitted per second.
prf = 5000.0;
pri = 1/prf;
Np = floor(Tdwell*prf);
You create a Swerling 2 target by using the step method of the RadarTarget System object™. To model a Swerling 2 target, set the Model property of the phased.RadarTarget System object™ to either 'Swerling1' or 'Swerling2'. Both are equivalent. Then, at every call to the step method, set the updatercs argument to true. This means that the radar cross section is updated at every pulse.
Set up the radiating antenna. Assume the operating frequency of the antenna is 1 GHz.
fc = 1e9;
antenna = phased.IsotropicAntennaElement('BackBaffled',true);
radiator = phased.Radiator('OperatingFrequency',fc,'Sensor',antenna);
The transmitted signal is a linear FM waveform. Transmit one pulse per call to the step method.
waveform = phased.LinearFMWaveform('PulseWidth',50e-6,...
'OutputFormat','Pulses','NumPulses',1);
Specify the radar target to have a mean RCS of 1 m² and be of Swerling model type 1 or 2. You can use Swerling 1 or 2 interchangeably. The remaining components mirror the Swerling 1 example; the target position here is assumed.

tgtmodel = 'Swerling2';
target = phased.RadarTarget('MeanRCS',1,'OperatingFrequency',fc,...
    'Model',tgtmodel);
transmitter = phased.Transmitter('PeakPower',1000.0,'Gain',40);
channel = phased.FreeSpace('OperatingFrequency',fc,'TwoWayPropagation',true);
collector = phased.Collector('OperatingFrequency',fc,'Sensor',antenna);
radarplatform = phased.Platform('InitialPosition',[0;0;0]);
targetplatform = phased.Platform('InitialPosition',[2000;0;0]); % assumed target position
wav = waveform();
filter = phased.MatchedFilter('Coefficients',getMatchedFilter(waveform));
z = zeros(Nscan,Np);
tp = zeros(Nscan,Np);
Enter the loop. For a Swerling 2 target, set updatercs to true for every pulse of the scan.
for m = 1:Nscan
t0 = (m-1)*Trot;
t = t0;
updatercs = true;
for k = 1:Np
t = t + pri;
txwav = transmitter(wav);
[xradar,vradar] = radarplatform(t);
[xtgt,vtgt] = targetplatform(t);
[~,ang] = rangeangle(xtgt,xradar);
radwav = radiator(txwav,ang);
propwav = channel(radwav,radarplatform.InitialPosition,...
targetplatform.InitialPosition,[0;0;0],[0;0;0]);
reflwav = target(propwav,updatercs);
collwav = collector(reflwav,ang);
y = filter(collwav);
z(m,k) = max(abs(y));
tp(m,k) = t;
end
end
Plot the amplitudes of the pulses for the scan as a function of time.
plot(tp(:),z(:),'.')
xlabel('Time (sec)')
ylabel('Pulse Amplitude')
figure;
hist(z(:),25)
xlabel('Pulse Amplitude')
ylabel('Count')
Swerling 3 Target Models
In Swerling 3 and Swerling 4 target models, the total RCS arises from a target consisting of one large
scattering surface with several other small scattering surfaces. The total RCS may vary with every
pulse in a scan (Swerling 4) or may be constant over a complete scan consisting of multiple pulses
(Swerling 3). In either case, the statistics obey a chi-squared probability density function with four
degrees of freedom.
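A chi-squared distribution with four degrees of freedom, scaled to mean σ̄, is a gamma distribution with shape 2 and scale σ̄/2, so Swerling 3/4 targets fluctuate less (relative to the mean) than Swerling 1/2 targets. A hedged Python sketch of this claim, with seed and sample count chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(1)
mean_rcs = 1.0   # sigma-bar, the mean RCS in m^2

# Chi-squared with 4 DOF scaled to mean sigma-bar is gamma(shape=2, scale=mean/2):
# p(sigma) = (4*sigma/mean^2) * exp(-2*sigma/mean)
samples = rng.gamma(shape=2.0, scale=mean_rcs / 2, size=100_000)

print(abs(samples.mean() - mean_rcs) < 0.02)   # mean matches sigma-bar
print(samples.var() < 1.0)   # variance ~0.5, below the 2-DOF (Swerling 1/2) value of 1
```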
For simplicity, start with a rotating radar having a rotation time of 5 seconds corresponding to a
rotation or scan rate of 72 degrees/sec.
Trot = 5.0;
scanrate = 360/Trot;
The radar has a main half-power beam width (HPBW) of 3.0 degrees. During the time that a target is
illuminated by the main beam, radar pulses strike the target and reflect back to the radar. The time
period during which the target is illuminated is called the dwell time. This time is also called a scan.
The radar will process 3 scans of the target.
HPBW = 3.0;
Tdwell = HPBW/scanrate;
Nscan = 3;
The number of pulses that arrive on target during the dwell time depends upon the pulse repetition
frequency (PRF). PRF is the inverse of the pulse repetition interval (PRI). Assume 5000 pulses are
transmitted per second.
prf = 5000.0;
pri = 1/prf;
Np = floor(Tdwell*prf);
You create a Swerling 3 target by using the step method of the RadarTarget System object™. To model a Swerling 3 target, set the Model property of the phased.RadarTarget System object™ to either 'Swerling3' or 'Swerling4'. Both are equivalent. Then, at the first call to the
step method at the beginning of the scan, set the updatercs argument to true. Set updatercs to
false for the remaining calls to step during the scan. This means that the radar cross section is
only updated at the beginning of a scan and remains constant for the remainder of the scan.
Set up the radiating antenna. Assume the operating frequency of the antenna is 1 GHz.
fc = 1e9;
antenna = phased.IsotropicAntennaElement('BackBaffled',true);
radiator = phased.Radiator('OperatingFrequency',fc,'Sensor',antenna);
radarplatform = phased.Platform('InitialPosition',[0;0;0]);
targetplatform = phased.Platform('InitialPosition',[2000;0;0]); % assumed target position
The transmitted signal is a linear FM waveform. Transmit one pulse per call to the step method.
waveform = phased.LinearFMWaveform('PulseWidth',50e-6,...
'OutputFormat','Pulses','NumPulses',1);
transmitter = phased.Transmitter('PeakPower',1000.0,'Gain',40);
channel = phased.FreeSpace('OperatingFrequency',fc,...
'TwoWayPropagation',true);
Specify the radar target to have a mean RCS of 1 m² and be of Swerling model type 3 or 4. You can use Swerling 3 or 4 interchangeably.

tgtmodel = 'Swerling3';
target = phased.RadarTarget('MeanRCS',1,'OperatingFrequency',fc,...
    'Model',tgtmodel);
collector = phased.Collector('OperatingFrequency',1e9,...
'Sensor',antenna);
wav = waveform();
filter = phased.MatchedFilter('Coefficients',getMatchedFilter(waveform));
z = zeros(Nscan,Np);
tp = zeros(Nscan,Np);
Enter the loop. Set updatercs to true only for the first pulse of the scan.
for m = 1:Nscan
t0 = (m-1)*Trot;
t = t0;
for k = 1:Np
if k == 1
updatercs = true;
else
updatercs = false;
end
t = t + pri;
txwav = transmitter(wav);
[xradar,vradar] = radarplatform(pri);
[xtgt,vtgt] = targetplatform(pri);
[~,ang] = rangeangle(xtgt,xradar);
radwav = radiator(txwav,ang);
propwav = channel(radwav,xradar,xtgt,vradar,vtgt);
reflwav = target(propwav,updatercs);
collwav = collector(reflwav,ang);
y = filter(collwav);
z(m,k) = max(abs(y));
tp(m,k) = t;
end
end
Plot the amplitudes of the pulses for the scan as a function of time.
plot(tp(:),z(:),'.')
xlabel('Time (sec)')
ylabel('Pulse Amplitude')
Swerling 4 Target Models
In Swerling 3 and Swerling 4 target models, the total RCS arises from a target consisting of one large
scattering surface with several other small scattering surfaces. The total RCS may vary with every
pulse in a scan (Swerling 4) or may be constant over a complete scan consisting of multiple pulses
(Swerling 3). In either case, the statistics obey a chi-squared probability density function with four
degrees of freedom.
For simplicity, start with a rotating radar having a rotation time of 5 seconds corresponding to a
rotation or scan rate of 72 degrees/sec.
Trot = 5.0;
scanrate = 360/Trot;
The radar has a main half-power beam width (HPBW) of 3.0 degrees. During the time that a target is
illuminated by the main beam, radar pulses strike the target and reflect back to the radar. The time
period during which the target is illuminated is called the dwell time. This time is also called a scan.
The radar will process 3 scans of the target.
HPBW = 3.0;
Tdwell = HPBW/scanrate;
Nscan = 3;
The number of pulses that arrive on target during the dwell time depends upon the pulse repetition
frequency (PRF). PRF is the inverse of the pulse repetition interval (PRI). Assume 5000 pulses are
transmitted per second.
prf = 5000.0;
pri = 1/prf;
Np = floor(Tdwell*prf);
You create a Swerling 4 target by using the step method of the RadarTarget System object™. To model a Swerling 4 target, set the Model property of the phased.RadarTarget System object™ to either 'Swerling3' or 'Swerling4'. Both are equivalent. Then, at every call to the step method, set the updatercs argument to true. This means that the radar cross section is updated for every pulse in the scan.
tgtmodel = 'Swerling4';
Set up the radiating antenna. Assume the operating frequency of the antenna is 1 GHz.
fc = 1e9;
antenna = phased.IsotropicAntennaElement('BackBaffled',true);
radiator = phased.Radiator('OperatingFrequency',fc, ...
'Sensor',antenna);
The transmitted signal is a linear FM waveform. Transmit one pulse per call to the step method.
waveform = phased.LinearFMWaveform('PulseWidth',50e-6, ...
'OutputFormat','Pulses','NumPulses',1);
Specify the radar target to have a mean RCS of 1 m² and be of Swerling model type 3 or 4. You can use Swerling 3 or 4 interchangeably. The remaining components mirror the earlier Swerling examples; the target position here is assumed.

target = phased.RadarTarget('MeanRCS',1,'OperatingFrequency',fc, ...
    'Model',tgtmodel);
transmitter = phased.Transmitter('PeakPower',1000.0,'Gain',40);
channel = phased.FreeSpace('OperatingFrequency',fc,'TwoWayPropagation',true);
collector = phased.Collector('OperatingFrequency',fc,'Sensor',antenna);
radarplatform = phased.Platform('InitialPosition',[0;0;0]);
targetplatform = phased.Platform('InitialPosition',[2000;0;0]); % assumed target position
wav = waveform();
filter = phased.MatchedFilter('Coefficients',getMatchedFilter(waveform));
z = zeros(Nscan,Np);
tp = zeros(Nscan,Np);
Enter the loop. Set updatercs to true for all pulses of the scan.
for m = 1:Nscan
t0 = (m-1)*Trot;
t = t0;
updatercs = true;
for k = 1:Np
t = t + pri;
txwav = transmitter(wav);
[xradar,vradar] = radarplatform(pri);
[xtgt,vtgt] = targetplatform(pri);
[~,ang] = rangeangle(xtgt,xradar);
radwav = radiator(txwav,ang);
propwav = channel(radwav,xradar,xtgt,vradar,vtgt);
reflwav = target(propwav,updatercs);
collwav = collector(reflwav,ang);
y = filter(collwav);
z(m,k) = max(abs(y));
tp(m,k) = t;
end
end
Plot the amplitudes of the pulses for the scan as a function of time.
plot(tp(:),z(:),'.')
xlabel('Time (sec)')
ylabel('Pulse Amplitude')
Plot a histogram of the pulse amplitudes.

figure
hist(z(:),25)
xlabel('Pulse Amplitude')
ylabel('Count')
Clutter Modeling
In this section...
“Surface Clutter Overview” on page 9-35
“Approaches for Clutter Simulation or Analysis” on page 9-35
“Considerations for Setting Up a Constant Gamma Clutter Simulation” on page 9-35
“Related Examples” on page 9-37
If you are simulating a radar system, you might want to incorporate surface clutter into the
simulation to ensure the system can overcome the effects of surface clutter. If you are analyzing the
statistical performance of a radar system, you might want to incorporate clutter return distributions
into the analysis.
Phased Array System Toolbox software provides several utility functions for clutter analysis:

• billingsleyicm
• depressionang
• effearthradius
• grazingang
• horizonrange
• surfclutterrcs
• surfacegamma
The ConstantGammaClutter object has properties that correspond to physical aspects of the
situation you are modeling. These properties include:
• Propagation speed, sample rate, and pulse repetition frequency of the signal
• Operating frequency of the system
• Altitude, speed, and direction of the radar platform
• Depression angle of the broadside of the radar antenna array
Clutter-Related Properties
The object has properties that correspond to the clutter characteristics, location, and modeling
fidelity. These properties include:
• Gamma parameter that depends on the terrain type and system’s operating frequency.
• Azimuth coverage and maximum range for the clutter simulation.
• Azimuth span of each clutter patch. The software internally divides the clutter ring into a series of
adjacent, nonoverlapping clutter patches.
• Clutter coherence time. This value indicates how frequently the software changes the set of
random numbers in the clutter simulation.
In the simulation, you can use identical random numbers over a time interval or uncorrelated
random numbers. Simulation behavior slightly differs from reality, where a moving platform
produces clutter returns that are correlated with each other over small time intervals.
The ConstantGammaClutter object has properties that let you obtain results in a convenient
format. Using the OutputFormat property, you can choose to have the step method produce a
signal that represents:
• A fixed number of pulses. You indicate the number of pulses using the NumPulses property of the
object.
• A fixed number of samples. You indicate the number of samples using the NumSamples property of
the object. Typically, you use the number of samples in one pulse. In staggered PRF applications,
you might find this option more convenient because the step output always has the same matrix
size.
Assumptions
Related Examples
• “Ground Clutter Mitigation with Moving Target Indication (MTI) Radar” on page 17-461
• “Introduction to Space-Time Adaptive Processing” on page 17-231
• “DPCA Pulse Canceller to Reject Clutter” on page 7-7
• “Adaptive DPCA Pulse Canceller To Reject Clutter and Interference” on page 7-12
• “Sample Matrix Inversion Beamformer” on page 7-18
Barrage Jammer
In this section...
“Support for Modeling Barrage Jammer” on page 9-38
“Model Barrage Jammer Output” on page 9-38
“Model Effect of Barrage Jammer on Target Echo” on page 9-40
The real and imaginary parts of the complex white Gaussian noise sequence each have variance equal
to 1/2 the effective radiated power in watts. Denote the effective radiated power in watts by P. The
barrage jammer output is:
w[n] = √(P/2) x[n] + j√(P/2) y[n]
In this equation, x[n] and y[n] are uncorrelated sequences of zero-mean Gaussian random variables
with unit variance.
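The equation above implies that the jammer output has average power equal to the ERP, with each quadrature component contributing half. A numeric sketch in Python (the sample count and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
P = 5000.0    # effective radiated power (W)
n = 100_000

# w[n] = sqrt(P/2) x[n] + j sqrt(P/2) y[n], x and y unit-variance Gaussians
w = np.sqrt(P / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

print(abs(np.mean(np.abs(w)**2) - P) / P < 0.02)   # average power near the ERP
print(abs(np.std(w.real) - np.sqrt(P / 2)) < 1.5)  # per-component std near 50
```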
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
rng default
jammer = phased.BarrageJammer('ERP',5000,...
'SamplesPerFrame',500);
y = jammer();
subplot(2,1,1)
histogram(real(y))
title('Histogram of Real Part')
subplot(2,1,2)
histogram(imag(y))
title('Histogram of Imaginary Part')
mean(real(y))
ans = -1.0961
mean(imag(y))
ans = -2.1671
These mean values are effectively zero. The standard deviations of the real and imaginary parts are
std(real(y))
ans = 50.1950
std(imag(y))
ans = 49.7448
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
antenna = phased.ULA(4);
Fs = 1e6;
fc = 1e9;
rng('default')
waveform = phased.RectangularWaveform('PulseWidth',100e-6,...
'PRF',1e3,'NumPulses',5,'SampleRate',Fs);
transmitter = phased.Transmitter('PeakPower',1e4,'Gain',20,...
'InUseOutputPort',true);
radiator = phased.Radiator('Sensor',antenna,'OperatingFrequency',fc);
jammer = phased.BarrageJammer('ERP',1000,...
'SamplesPerFrame',waveform.NumPulses*waveform.SampleRate/waveform.PRF);
target = phased.RadarTarget('Model','Nonfluctuating',...
'MeanRCS',1,'OperatingFrequency',fc);
targetchannel = phased.FreeSpace('TwoWayPropagation',true,...
'SampleRate',Fs,'OperatingFrequency', fc);
jammerchannel = phased.FreeSpace('TwoWayPropagation',false,...
'SampleRate',Fs,'OperatingFrequency', fc);
collector = phased.Collector('Sensor',antenna,...
'OperatingFrequency',fc);
amplifier = phased.ReceiverPreamp('EnableInputPort',true);
Assume that the array, target, and jammer are stationary. The array is located at the global origin,
(0,0,0). The target is located at (1000,500,0), and the jammer is located at (2000,2000,100).
Determine the directions from the array to the target and jammer.
targetloc = [1000;500;0];
jammerloc = [2000;2000;100];
[~,tgtang] = rangeangle(targetloc);
[~,jamang] = rangeangle(jammerloc);
Finally, transmit the rectangular pulse waveform to the target, reflect it off the target, and collect the
echo at the array. Simultaneously, the jammer transmits a jamming signal toward the array. The
jamming signal and echo are mixed at the receiver. Generate waveform
wav = waveform();
% Transmit waveform
[wav,txstatus] = transmitter(wav);
% Radiate pulse toward the target
wav = radiator(wav,tgtang);
% Propagate pulse toward the target
wav = targetchannel(wav,[0;0;0],targetloc,[0;0;0],[0;0;0]);
% Reflect it off the target
wav = target(wav);
% Collect the echo
wav = collector(wav,tgtang);
jamsig = jammer();
% Propagate the jamming signal to the array
jamsig = jammerchannel(jamsig,jammerloc,[0;0;0],[0;0;0],[0;0;0]);
% Collect the jamming signal
jamsig = collector(jamsig,jamang);
% Receive the target echo alone and the target echo plus the jamming signal
pulsewave = amplifier(wav,~txstatus);
pulsewave_jamsig = amplifier(wav + jamsig,~txstatus);
Plot the results, comparing the received waveform with and without jamming.
subplot(2,1,1)
t = unigrid(0,1/Fs,size(pulsewave,1)*1/Fs,'[)');
plot(t*1000,abs(pulsewave(:,1)))
title('Magnitudes of Pulse Waveform Without Jamming--Element 1')
ylabel('Magnitude')
subplot(2,1,2)
plot(t*1000,abs(pulsewave_jamsig(:,1)))
title('Magnitudes of Pulse Waveform with Jamming--Element 1')
xlabel('millisec')
ylabel('Magnitude')
Rectangular Coordinates
In this section...
“Definitions of Coordinates” on page 10-2
“Notation for Vectors and Points” on page 10-3
“Orthogonal Basis and Euclidean Norm” on page 10-3
“Orientation of Coordinate Axes” on page 10-3
“Rotations and Rotation Matrices” on page 10-4
Definitions of Coordinates
Construct a rectangular, or Cartesian, coordinate system for three-dimensional space by specifying
three mutually orthogonal coordinate axes. The following figure shows one possible specification of
the coordinate axes.
You can view the 3-tuple as a point in space, or equivalently as a vector in three-dimensional
Euclidean space. Viewed as a vector space, the coordinate axes are basis vectors and the vector gives
the direction to a point in space from the origin. Every vector in space is uniquely determined by a
linear combination of the basis vectors. The most common set of basis vectors for three-dimensional
Euclidean space are the standard unit basis vectors:
[1 0 0], [0 1 0], [0 0 1]
Note In this software, all coordinate vectors are column vectors. For convenience, the
documentation represents column vectors in the format [x y z] without transpose notation.
Both the vector notation [x y z] and point notation (x,y,z) are used interchangeably. The interpretation
of the column vector as a vector or point depends on the context. If the column vector specifies the
axes of a coordinate system or direction, it is a vector. If the column vector specifies coordinates, it is
a point.
The standard distance measure in space is the l2 norm, or Euclidean norm. The Euclidean norm of a
vector [x y z] is defined by:
√(x² + y² + z²)
The Euclidean norm gives the length of the vector measured from the origin as the hypotenuse of a
right triangle. The distance between two vectors [x0 y0 z0] and [x1 y1 z1] is:
√((x0 − x1)² + (y0 − y1)² + (z0 − z1)²)
If you take your right hand and point it along the positive x-axis with your palm facing the positive y-axis and extend your thumb, your thumb indicates the positive direction of the z-axis.
The rotation matrices that rotate a vector counterclockwise around the x-, y-, and z-axes are:

        | 1     0       0    |
Rx(α) = | 0   cos α  −sin α  |
        | 0   sin α   cos α  |

        | cos β   0   sin β |
Ry(β) = |   0     1     0   |
        | −sin β  0   cos β |

        | cos γ  −sin γ  0 |
Rz(γ) = | sin γ   cos γ  0 |
        |   0       0    1 |
The following three figures show what positive rotations look like for each rotation axis:
For any rotation, there is an inverse rotation satisfying A⁻¹A = I. For example, the inverse of the x-axis rotation matrix is obtained by changing the sign of the angle:
                   | 1      0       0   |
Rx⁻¹(α) = Rx(−α) = | 0    cos α   sin α | = Rx′(α)
                   | 0   −sin α   cos α |
This example illustrates a basic property: the inverse rotation matrix is the transpose of the original. Rotation matrices satisfy A′A = I, and consequently det(A) = 1. Rotations preserve vector lengths as well as the angles between vectors.
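These properties (orthogonality, unit determinant, length preservation) can be verified numerically. A hedged sketch in Python, using an arbitrary angle and test vector:

```python
import numpy as np

def rot_x(alpha_deg):
    """Counterclockwise rotation about the x-axis."""
    a = np.deg2rad(alpha_deg)
    return np.array([[1, 0, 0],
                     [0, np.cos(a), -np.sin(a)],
                     [0, np.sin(a),  np.cos(a)]])

R = rot_x(30.0)
print(np.allclose(R.T @ R, np.eye(3)))     # A'A = I: the transpose is the inverse
print(np.isclose(np.linalg.det(R), 1.0))   # det(A) = 1

v = np.array([1.0, 2.0, 2.0])
print(np.isclose(np.linalg.norm(R @ v), np.linalg.norm(v)))  # lengths preserved
```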
We can think of rotations in another way. Consider the original set of basis vectors, i, j, k, and rotate them all using the rotation matrix A. This produces a new set of basis vectors i′, j′, k′ related to the original by:
i′ = Ai
j′ = Aj
k′ = Ak
Using the transpose, you can write the new basis vectors as linear combinations of the old basis vectors:
| i′ |        | i |
| j′ | = A′ | j |
| k′ |        | k |
Any vector can be written as a linear combination of either set of basis vectors.
Using algebraic manipulation, you can derive the transformation of components for a fixed vector
when the basis (or coordinate system) rotates. This transformation uses the transpose of the rotation
matrix.
| v′x |         | vx |        | vx |
| v′y | = A⁻¹ | vy | = A′ | vy |
| v′z |         | vz |        | vz |
The next figure illustrates how a vector is transformed as the coordinate system rotates around the x-
axis. The figure after shows how this transformation can be interpreted as a rotation of the vector in
the opposite direction.
See Also
More About
• “Global and Local Coordinate Systems” on page 10-17
Spherical Coordinates
In this section...
“Support for Spherical Coordinates” on page 10-10
“Azimuth and Elevation Angles” on page 10-10
“Phi and Theta Angles” on page 10-11
“U and V Coordinates” on page 10-12
“Conversion between Rectangular and Spherical Coordinates” on page 10-13
“Broadside Angles” on page 10-14
“Convert Between Broadside Angles and Azimuth and Elevation” on page 10-16
Phased Array System Toolbox software natively supports the azimuth/elevation representation. The
software also provides functions for converting between the azimuth/elevation representation and the
other representations. See “Phi and Theta Angles” on page 10-11 and “U and V Coordinates” on
page 10-12.
• Use the azimuth angle, az, and the elevation angle, el, to define the location of a point on the unit
sphere.
• Specify all angles in degrees.
• List coordinates in the sequence (az,el,R).
The azimuth angle of a vector is the angle between the x-axis and the orthogonal projection of the
vector onto the xy plane. The angle is positive when going from the x-axis toward the y-axis. Azimuth
angles lie between –180 and 180 degrees. The elevation angle is the angle between the vector and its
orthogonal projection onto the xy-plane. The angle is positive when going toward the positive z-axis
from the xy plane. By default, the boresight direction of an element or array is aligned with the
positive x-axis. The boresight direction is the direction of the main lobe of an element or array.
Note The elevation angle is sometimes defined in the literature as the angle a vector makes with the
positive z-axis. The MATLAB® and Phased Array System Toolbox products do not use this definition.
This figure illustrates the azimuth angle and elevation angle for a vector shown as a green solid line.
The phi angle (φ) is the angle from the positive y-axis to the vector’s orthogonal projection onto the yz
plane. The angle is positive toward the positive z-axis. The phi angle is between 0 and 360 degrees.
The theta angle (θ) is the angle from the x-axis to the vector itself. The angle is positive toward the yz
plane. The theta angle is between 0 and 180 degrees.
The figure illustrates phi and theta for a vector that appears as a green solid line.
The coordinate transformations between φ/θ and az/el are described by the following equations:

sin el = sin φ sin θ
tan az = cos φ tan θ

cos θ = cos el cos az
tan φ = tan el/sin az
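These identities can be confirmed numerically by building a unit direction vector from az/el and recomputing φ and θ from its Cartesian components. A Python sketch, with an arbitrary direction chosen for illustration:

```python
import numpy as np

# A direction on the unit sphere, built from azimuth and elevation
az, el = np.deg2rad(40.0), np.deg2rad(25.0)
x = np.cos(el) * np.cos(az)
y = np.cos(el) * np.sin(az)
z = np.sin(el)

# phi/theta for the same direction: theta from the x-axis,
# phi from the +y-axis toward +z in the yz plane
theta = np.arccos(x)
phi = np.arctan2(z, y)

print(np.isclose(np.sin(el), np.sin(phi) * np.sin(theta)))   # sin el = sin phi sin theta
print(np.isclose(np.tan(az), np.cos(phi) * np.tan(theta)))   # tan az = cos phi tan theta
print(np.isclose(np.cos(theta), np.cos(el) * np.cos(az)))    # cos theta = cos el cos az
```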
U and V Coordinates
In radar applications, it is often useful to parameterize the hemisphere x ≥ 0 using coordinates
denoted by u and v.
• To convert the φ/θ representation to and from the corresponding u/v representation, use
coordinate conversion functions phitheta2uv and uv2phitheta.
• To convert the azimuth/elevation representation to and from the corresponding u/v representation,
use coordinate conversion functions azel2uv and uv2azel.
The u and v coordinates are derived from the phi and theta angles:

u = sin θ cos φ
v = sin θ sin φ

In these expressions, φ and θ are the phi and theta angles, respectively. In terms of azimuth and elevation, the u and v coordinates are:

u = cos el sin az
v = sin el

The values of u and v always satisfy:

−1 ≤ u ≤ 1
−1 ≤ v ≤ 1
u² + v² ≤ 1
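Round-tripping through u/v is a useful sanity check on these relations. A Python sketch, with an arbitrary direction in the x ≥ 0 hemisphere:

```python
import numpy as np

az, el = np.deg2rad(30.0), np.deg2rad(20.0)

# Forward: u/v from azimuth and elevation
u = np.cos(el) * np.sin(az)
v = np.sin(el)
print(u**2 + v**2 <= 1)   # always inside the unit circle

# Inverse: recover azimuth and elevation from u and v
el2 = np.arcsin(v)
az2 = np.arctan2(u, np.sqrt(1 - u**2 - v**2))
print(np.isclose(az2, az) and np.isclose(el2, el))
```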
Conversely, the phi and theta angles can be written in terms of u and v using:

tan φ = v/u
sin θ = √(u² + v²)
The azimuth and elevation angles can also be written in terms of u and v:

sin el = v
tan az = u/√(1 − u² − v²)
Conversion Between Rectangular and Spherical Coordinates

R = √(x² + y² + z²)
az = tan⁻¹(y/x)
el = tan⁻¹(z/√(x² + y²))

x = R cos(el) cos(az)
y = R cos(el) sin(az)
z = R sin(el)
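A round trip through these formulas should recover the original (az, el, R). A Python sketch, with arbitrary values chosen for illustration:

```python
import numpy as np

# Spherical (az, el, R) -> rectangular
az, el, R = np.deg2rad(60.0), np.deg2rad(-15.0), 1200.0
x = R * np.cos(el) * np.cos(az)
y = R * np.cos(el) * np.sin(az)
z = R * np.sin(el)

# Rectangular -> spherical, using the formulas above
R2 = np.sqrt(x**2 + y**2 + z**2)
az2 = np.arctan2(y, x)
el2 = np.arctan2(z, np.sqrt(x**2 + y**2))

print(np.allclose([az2, el2, R2], [az, el, R]))
```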
When specifying a target’s location with respect to a phased array, it is common to refer to its
distance and direction from the array. The distance from the array corresponds to R in spherical
coordinates. The direction corresponds to the azimuth and elevation angles.
Tip To convert between rectangular coordinates and (az,el,R), use the MATLAB functions cart2sph
and sph2cart. These functions specify angles in radians. To convert between degrees and radians,
use deg2rad and rad2deg.
Broadside Angles
Broadside angles are useful when describing the response of a uniform linear array (ULA). The array
response depends directly on the broadside angle and not on the azimuth and elevation angles. Start
with a ULA and draw a plane orthogonal to the ULA axis as shown in blue in the figure. The broadside
angle, β, is the angle between the plane and the signal direction. To compute the broadside angle,
construct a line from any point on the signal path to the plane, orthogonal to the plane. The angle
between these two lines is the broadside angle and lies in the interval [–90°,90°]. The broadside angle
is positive when measured toward the positive direction of the array axis. Zero degrees indicates a
signal path orthogonal to the array axis. ±90° indicates paths along the array axis. All signal paths
having the same broadside angle form a cone around the ULA axis.
The conversion from azimuth angle, az, and elevation angle, el, to broadside angle, β, is
β = sin⁻¹(sin(az) cos(el))
• For an elevation angle of zero, the broadside angle equals the azimuth angle.
• Elevation angles equally above and below the xy plane result in identical broadside angles.
You can convert from broadside angle to azimuth angle, but you must specify the elevation angle:

az = sin⁻¹(sin β/cos(el))
Because the signal paths for a given broadside angle, β, form a cone around the array axis, you cannot specify the elevation angle arbitrarily. The elevation angle and broadside angle must satisfy

|el| + |β| ≤ 90°
The following figure depicts a ULA with elements spaced d meters apart along the y-axis. The ULA is
irradiated by a plane wave emitted from a point source in the far field. For convenience, the elevation
angle is zero degrees. In this case, the signal direction lies in the xy-plane. Then, the broadside angle
reduces to the azimuth angle.
Because of the angle of arrival, the array elements are not simultaneously illuminated by the plane
wave. The additional distance the incident wave travels between array elements is d sinβ where d is
the distance between array elements. The constant time delay, τ, between array elements is
τ = d sin β / c
For broadside angles of ±90°, the signal is incident on the array parallel to the array axis and the
time delay between sensors equals ±d/c. For a broadside angle of zero, the plane wave illuminates all
elements of the ULA simultaneously and the time delay between elements is zero.
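As a quick numeric check of the delay formula, the following Python sketch (the spacing value is illustrative, not from the text) evaluates τ = d sin β / c at the limiting broadside angles:

```python
import math

def element_delay(beta_deg, d, c=299792458.0):
    """Time delay between adjacent ULA elements for broadside angle beta:
    tau = d*sin(beta)/c, as in the text. d is the element spacing in meters."""
    return d * math.sin(math.radians(beta_deg)) / c

d = 0.5                                # illustrative element spacing in meters
tau_broadside = element_delay(0.0, d)  # 0: all elements illuminated simultaneously
tau_endfire = element_delay(90.0, d)   # +d/c: signal travels along the array axis
```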
Phased Array System Toolbox software provides functions az2broadside and broadside2az for
converting between azimuth and broadside angles.
10 Coordinate Systems and Motion Modeling
A target is located at an azimuth angle of 45° and at an elevation angle of 60° relative to a ULA.
Determine the corresponding broadside angle.
bsang = az2broadside(45,60)
bsang = 20.7048
Calculate the azimuth for an incident signal arriving at a broadside angle of 45° and an elevation of
20°.
az = broadside2az(45,20)
az = 48.8063
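As a cross-check, the same conversions can be evaluated in any language from the formulas above. The Python functions below are hypothetical stand-ins for the MATLAB az2broadside and broadside2az functions:

```python
import math

def az2broadside_deg(az, el):
    """beta = asin(sin(az)*cos(el)), all angles in degrees."""
    return math.degrees(math.asin(math.sin(math.radians(az)) *
                                  math.cos(math.radians(el))))

def broadside2az_deg(beta, el):
    """az = asin(sin(beta)/cos(el)), all angles in degrees."""
    return math.degrees(math.asin(math.sin(math.radians(beta)) /
                                  math.cos(math.radians(el))))

bsang = az2broadside_deg(45, 60)  # approximately 20.7048, as above
az = broadside2az_deg(45, 20)     # approximately 48.8063, as above
```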
Global and Local Coordinate Systems
In this section...
“Global Coordinate System” on page 10-17
“Local Coordinate Systems” on page 10-17
“Converting Between Global and Local Coordinate Systems” on page 10-29
“Convert Local Spherical Coordinates to Global Rectangular Coordinates” on page 10-29
“Convert Global Rectangular Coordinates to Local Spherical Coordinates” on page 10-30
You can model the motion of all objects using the phased.Platform System object. This System
object computes the position and velocity of objects using constant-velocity or constant-acceleration
models.
You can model the signals that propagate between objects in your scenario. The ray paths that
connect transmitters, targets, and receivers are specified in global coordinates. You can propagate
signals using these System objects: phased.FreeSpace, phased.WidebandFreeSpace,
phased.LOSChannel, or phased.WidebandLOSChannel. If you model two-ray multipath
propagation using the phased.TwoRayChannel System object, the boundary plane is set at z = 0 in
the global coordinate system.
Because signals propagate in the global coordinate system, you need to be able to convert local
coordinates to the global coordinates. You do this by constructing a 3-by-3 orthonormal matrix of
coordinate axes. The matrix columns represent the three orthogonal direction vectors of the local
coordinates expressed in the global coordinate system. The coordinate axes of a local coordinate
system must be orthonormal, but they need not be parallel to the global coordinate axes.
When you need to compute the range and arrival angles of a signal, you can use the rangeangle
function. When you call this function with the source and receiver position expressed in global
coordinates, the function returns the range and arrival angles, azimuth and elevation, with respect to
the axes of the global system. However, when you pass the orientation matrix as an additional
argument, the azimuth and elevation are now defined with respect to the local coordinate system.
Local coordinate systems arise naturally in several contexts:
• the location and orientation of antenna or microphone elements of an array. The beam pattern of
an antenna array depends upon the angle of arrival or emission of radiation with respect to the
array local coordinates.
• the reflected energy from a target is a function of the incident and reflection angles with respect
to the target local coordinate axes.
• an airplane may have a local coordinate system with the x-axis aligned along the fuselage axis of
the body and the y-axis pointing along the port wing. Choose the z-axis to form a right-handed
coordinate system.
• a vehicle-mounted planar phased array may have a local coordinate system adapted to the array.
The x-axis of the coordinate system may point along the array normal vector.
The following figure illustrates the relationship of local and global coordinate systems in a bistatic
radar scenario. The thick solid lines represent the coordinate axes of the global coordinate system.
There are two phased arrays: a 5-by-5 transmitting uniform rectangular array (URA) and 5-by-5
receiving URA. Each of the phased arrays carries its own local coordinate system. The target,
indicated by the red arrow, also carries a local coordinate system.
The next few sections review the local coordinate systems used by arrays.
The positions of the elements of any Phased Array System Toolbox array are always defined in a local
coordinate system. When you use any of the System objects that create uniform arrays, the array
element positions are defined automatically with respect to a predefined local coordinate system. The
arrays for which this property holds are the phased.ULA, phased.URA, phased.UCA,
phased.HeterogeneousULA, and phased.HeterogeneousURA System objects. For these System
objects, the arrays are described using a few parameters such as element spacing and number of
elements. The positions of the elements are then defined with respect to the array origin located at
(0,0,0), which is the geometric center of the array. The geometric center is a good approximation to
the array phase center. The phase center of an array is the point from which the radiating waves
appear to emanate when observed in the far field. For example, for a ULA with an odd number of
elements, the elements are located at distances (-2d,-d,0,d,2d) along the array axis.
There are array System objects for which you must explicitly specify the element coordinates. You can
use these objects for creating arbitrary array shapes. These objects are the
phased.ConformalArray and phased.HeterogeneousConformalArray System objects. For
these arrays, the phase center of the array need not coincide with the array origin or geometric
center.
Element Boresight Directions
In addition to element positions, you need to specify the element orientations, that is, the directions
in which the elements point. Some elements are highly directional — most of their radiated energy
flows in one direction, called the main response axis (MRA). Others are omnidirectional. Element
orientation is the pointing direction of the MRA. You specify element orientation using azimuth and
elevation in the array local coordinate system. The direction that an antenna or microphone MRA
faces when transmitting or receiving a signal is also called the boresight or look direction. For the
uniform arrays, all boresight directions of all elements are determined by array parameters. For
conformal arrays, you specify the boresight direction of each element independently.
A uniform linear array (ULA) is an array of antenna or microphone elements that are equidistantly
spaced along a straight line. In the Phased Array System Toolbox, the phased.ULA System object
creates a ULA array. The geometry of the ULA and the orientation of its elements are determined by three
parameters: the number of elements, the distance between elements, and the ArrayAxis property.
For the ULA, the local coordinate system is adapted to the array — the elements are automatically
assigned positions in the local coordinate system.
The positions of elements in the array are determined by the ArrayAxis property which can take the
values 'x', 'y' or 'z'. The array axis property determines the axis on which all elements are
defined. For example, when the ArrayAxis property is set to 'x', the array elements lie along the x-
axis. The elements are positioned symmetrically with respect to the origin. Therefore, the geometric
center of the array lies at the origin of the coordinate system.
This figure shows a four-element ULA with directional elements in a local right-handed coordinate
system. The elements lie on the y-axis with their boresight axes pointing in the x-direction. In this
case, the ArrayAxis property is set to 'y'.
In a ULA, the boresight direction of every element points in the same direction, orthogonal to the
array axis. This direction depends upon the choice of the ArrayAxis property.
Construct two examples of a uniform linear array and display the coordinates of the elements with
respect to the local coordinate systems defined by the arrays.
sULA = phased.ULA('NumElements',4,'ElementSpacing',0.5);
ElementLocs = getElementPosition(sULA)
ElementLocs = 3×4
0 0 0 0
-0.7500 -0.2500 0.2500 0.7500
0 0 0 0
viewArray(sULA)
The origin of the array-centric local coordinate system is set to the phase center of the array. The
phase center is the average value of the array element positions.
disp(mean(ElementLocs'))
0 0 0
Because the array has an even number of elements, no element of the array actually lies at the phase
center (0,0,0).
sULA1 = phased.ULA('NumElements',5,'ElementSpacing',0.3);
ElementLocs = getElementPosition(sULA1)
ElementLocs = 3×5
0 0 0 0 0
-0.6000 -0.3000 0 0.3000 0.6000
0 0 0 0 0
viewArray(sULA1)
Because the array has an odd number of elements in each row and column, the center element of the
array lies at the phase center.
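The symmetric element placement behind both examples is easy to reproduce outside MATLAB. This Python sketch mirrors the two ULAs above and confirms that the mean element position, the phase center, is at the origin:

```python
def ula_positions(n, spacing):
    """Element coordinates along the array axis, symmetric about the origin."""
    return [(i - (n - 1) / 2) * spacing for i in range(n)]

pos4 = ula_positions(4, 0.5)     # [-0.75, -0.25, 0.25, 0.75], as in the first example
pos5 = ula_positions(5, 0.3)     # [-0.6, -0.3, 0.0, 0.3, 0.6], as in the second
center4 = sum(pos4) / len(pos4)  # phase center is 0 even though no element sits there
```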
In a URA, as in a ULA, the boresight direction of every element points in the same direction. You
control this direction using the ArrayNormal property. For the URA shown in the preceding figure,
the ArrayNormal property is set to 'x'. Then, element boresights point along the x-axis.
Construct two examples of uniform rectangular arrays and display the coordinates of the elements
with respect to the local coordinate systems defined by the arrays.
sURA = phased.URA('Size',[2 4],'ElementSpacing',0.5);
ElementLocs = getElementPosition(sURA)
ElementLocs = 3×8
0 0 0 0 0 0 0 0
-0.7500 -0.7500 -0.2500 -0.2500 0.2500 0.2500 0.7500 0.7500
0.2500 -0.2500 0.2500 -0.2500 0.2500 -0.2500 0.2500 -0.2500
viewArray(sURA)
The phase center of the array is the mean value of the array element positions. The origin of the array
local coordinate system is set to the phase center of the array.
disp(mean(ElementLocs'))
0 0 0
Because the array has an even number of elements in each row and column, no element of the array
actually lies at the phase center (0,0,0).
sURA1 = phased.URA('Size',[5 3],'ElementSpacing',0.3);
ElementLocs = getElementPosition(sURA1)
ElementLocs = 3×15
  Columns 1 through 10
         0         0         0         0         0         0         0         0         0         0
   -0.3000   -0.3000   -0.3000   -0.3000   -0.3000         0         0         0         0         0
    0.6000    0.3000         0   -0.3000   -0.6000    0.6000    0.3000         0   -0.3000   -0.6000
  Columns 11 through 15
         0         0         0         0         0
    0.3000    0.3000    0.3000    0.3000    0.3000
    0.6000    0.3000         0   -0.3000   -0.6000
viewArray(sURA1)
Because the array has an odd number of elements in each row and column, the center element of the
array lies at the phase center.
A signal arrives at the array from a point 1000 meters away along the +x-axis of the global coordinate
system. The local URA array is rotated 30 degrees clockwise around the y-axis. Compute the angle of
arrival of the signal in the local array axes.
laxes = roty(30);
[rng,ang] = rangeangle([1000,0,0]',[0,0,0]',laxes)
rng = 1.0000e+03
ang = 2×1
0
30.0000
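You can reproduce these numbers by hand: express the direction vector in the local frame by multiplying with the transpose of the orientation matrix, then read off the spherical angles. In this Python sketch, roty_deg is a hypothetical stand-in for the MATLAB roty function:

```python
import math

def roty_deg(a):
    """3x3 rotation about the y-axis (right-hand rule), matching MATLAB's roty."""
    c, s = math.cos(math.radians(a)), math.sin(math.radians(a))
    return [[c, 0.0, s],
            [0.0, 1.0, 0.0],
            [-s, 0.0, c]]

def global2local(g, axes):
    """Express global vector g in the local frame whose axes are the
    columns of 'axes' (multiply by the matrix transpose)."""
    return [sum(axes[r][i] * g[r] for r in range(3)) for i in range(3)]

laxes = roty_deg(30)
loc = global2local([1000.0, 0.0, 0.0], laxes)
rng = math.hypot(math.hypot(loc[0], loc[1]), loc[2])  # 1000 m
az = math.degrees(math.atan2(loc[1], loc[0]))         # 0 degrees
el = math.degrees(math.asin(loc[2] / rng))            # 30 degrees
```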
A uniform circular array (UCA) is an array of antenna or microphone elements spaced at equal angles
around a circle. The phased.UCA System object creates a special case of a UCA. In this case,
element boresight directions point away from the array origin like spokes of a wheel. The origin of
the local coordinate system is the geometric center of the array. The geometry of the UCA and the
location and orientation of its elements is determined by three parameters: the radius of the array,
the number of elements, and the ArrayNormal property. The elements are automatically assigned
positions in the local coordinate system. The positions are determined by the ArrayNormal property
which can take the values 'x', 'y' or 'z'. All elements lie in a plane passing through the origin and
orthogonal to the axis specified in this property. The phase center of the array coincides with the
geometric center. For example, when the ArrayNormal property is set to 'x', the array elements lie
in the yz-plane as shown in the figure. You can create a more general UCA with arbitrary boresight
directions using the phased.ConformalArray System object.
This figure shows an 8-element UCA with elements lying in the yz plane.
In a UCA defined by a phased.UCA System object, element boresight directions point radially
outward from the array origin. In the UCA shown in the preceding figure, because the ArrayNormal
property is set to 'x', the element boresight directions point radially outward in the yz-plane.
You can use phased.ConformalArray to create arrays of arbitrary shape. Unlike the case of
uniform arrays, you must specify the element positions explicitly. An N-element array requires the
specification of N 3-D coordinates in the array local coordinate system. The origin of a conformal
array can be located at any arbitrary point. The boresight directions of the elements of a conformal
array need not be parallel. The azimuth and elevation angles defining the boresight directions are
with respect to the local coordinate system. The phase center of the array does not need to coincide
with the geometric center. The same properties apply to the
phased.HeterogeneousConformalArray array.
This illustration shows the positions and orientations of a 4-element conformal array.
Construct a 4-element array using the phased.ConformalArray System object. Assume the operating
frequency is 900 MHz. Display the array geometry and normal vectors.
fc = 900e6;
c = physconst('LightSpeed');
lam = c/fc;
x = [1.0,-.5,0,.8]*lam/2;
y = [-.4,-1,.5,1.5]*lam/2;
z = [-.3,.3,0.4,0]*lam/2;
sIso = phased.CosineAntennaElement(...
'FrequencyRange',[0,1e9]);
nv = [-140,-140,90,90;80,80,80,80];
sConformArray = phased.ConformalArray('Element',sIso,...
'ElementPosition',[x;y;z],...
'ElementNormal',nv);
pos = getElementPosition(sConformArray)
pos = 3×4
    0.1666   -0.0833         0    0.1332
   -0.0666   -0.1666    0.0833    0.2498
   -0.0500    0.0500    0.0666         0
normvec = getElementNormal(sConformArray)
normvec = 2×4
-140 -140 90 90
80 80 80 80
viewArray(sConformArray,'ShowIndex','All','ShowNormal',true)
Convert the coordinates of the point to global rectangular coordinates. To convert from local
spherical coordinates to global rectangular coordinates, use the 'sr' option in the call to the
local2globalcoord function.
gCoord = 3×1
103 ×
1.6124
0.8536
0.8071
Convert the coordinates of the target to local spherical coordinates. To convert from
global rectangular coordinates to local spherical coordinates, use the 'rs' option in the call to the
global2localcoord function.
lCoord = 3×1
103 ×
0.0580
0.0006
4.7173
The output has the form (az,el,rng). The target is located in local spherical coordinates at 58°
azimuth, 0.6° elevation, and a range of 4717 m.
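The 'sr' and 'rs' options apply the standard spherical/rectangular conversions. This Python sketch of the underlying math (not the toolbox functions themselves) round-trips the target location reported above:

```python
import math

def sph2rect(az, el, rng):
    """(az, el, range) in degrees/meters -> rectangular (x, y, z)."""
    caz, saz = math.cos(math.radians(az)), math.sin(math.radians(az))
    cel, sel = math.cos(math.radians(el)), math.sin(math.radians(el))
    return (rng * cel * caz, rng * cel * saz, rng * sel)

def rect2sph(x, y, z):
    """Rectangular (x, y, z) -> (az, el, range) in degrees/meters."""
    rng = math.sqrt(x * x + y * y + z * z)
    az = math.degrees(math.atan2(y, x))
    el = math.degrees(math.asin(z / rng))
    return (az, el, rng)

# Round trip using the target location reported above
x, y, z = sph2rect(58.0, 0.6, 4717.0)
az, el, rng = rect2sph(x, y, z)
```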
Global and Local Coordinate Systems Radar Example
fc = 1e9;
c = physconst('LightSpeed');
lam = c/fc;
First, set up the transmitting radar array. The transmitting array is a 5-by-5 uniform rectangular
array (URA) composed of isotropic antenna elements. The array is stationary and is located at the
position (50,50,50) meters in the global coordinate system. Although you position arrays in the global
system, array element positions are always defined in the array local coordinate system. The
transmitted signal strength in any direction is a function of the transmitting angle in the local array
coordinate system. Specify the orientation of the array. Without any orientation, local array axes are
aligned with the global coordinate system. Choose an array orientation so that the array normal
vector points approximately in the direction of the target. Do this by rotating the array 90° around
the z-axis. Then, rotate the array slightly by 2° around the y-axis and 1° around the z-axis again.
antenna = phased.IsotropicAntennaElement('BackBaffled',false);
txarray = phased.URA('Element',antenna,'Size',[5,5],'ElementSpacing',0.4*lam*[1,1]);
txradarAx = rotz(1)*roty(2)*rotz(90);
txplatform = phased.Platform('InitialPosition',[50;50;50],...
'Velocity',[0;0;0],'InitialOrientationAxes',txradarAx,...
'OrientationAxesOutputPort',true);
radiator = phased.Radiator('Sensor',txarray,'PropagationSpeed',c,...
'WeightsInputPort',true,'OperatingFrequency',fc);
steervec = phased.SteeringVector('SensorArray',txarray,'PropagationSpeed',c,...
'IncludeElementResponse',true);
Next, position a target approximately 10 km from the transmitter along the global coordinate system y-
axis and moving in the x-direction. Typically, you specify radar cross-section values as functions of the
incident and reflected ray angles with respect to the local target axes. Choose any target orientation
with respect to the global coordinate system.
Simulate a non-fluctuating target, but allow the RCS to change at each call to the target method.
Set up a simple anonymous function, rcsval, to compute fictitious but reasonable values for RCS at
different ray angles.
tgtAx = rotz(10)*roty(15)*rotx(20);
tgtplatform = phased.Platform('InitialPosition',[100; 10000; 100],...
'MotionModel','Acceleration','InitialVelocity',[-50;0;0],'Acceleration',[.015;.015;0],...
'InitialOrientationAxes',tgtAx,'OrientationAxesOutputPort',true);
target = phased.RadarTarget('OperatingFrequency',fc,...
'Model','Nonfluctuating','MeanRCSSource','Input port');
rcsval = @(az1,el1,az2,el2) 2*abs(cosd((az1+az2)/2 - 90)*cosd((el1+el2)/2));
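To see how this fictitious RCS model behaves, here is the same function transcribed to Python: the value peaks at 2 when the bisector of the incident and reflected rays is at 90° azimuth and 0° elevation, and falls to zero along the target x-axis.

```python
import math

def rcsval(az1, el1, az2, el2):
    """Python transcription of the anonymous RCS function above (degrees)."""
    return 2.0 * abs(math.cos(math.radians((az1 + az2) / 2 - 90)) *
                     math.cos(math.radians((el1 + el2) / 2)))

peak = rcsval(90, 0, 90, 0)  # bisector at 90 degrees azimuth: maximum value of 2
edge = rcsval(0, 0, 0, 0)    # bisector along the target x-axis: essentially zero
```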
Finally, set up the receiving radar array. The receiving array is also a 5-by-5 URA composed of
isotropic antenna elements. The array is stationary and is located 150 meters in the z-direction from
the transmitting array. The received signal strength in any direction is a function of the incident angle
of the signal in the local array coordinate system. Specify an orientation of the array. Choose an
orientation so that this array also points approximately in the y-direction towards the target but not
quite aligned with the first array. Do this by rotating the array 92° around the z-axis and then 5°
around the x-axis.
rxradarAx = rotx(5)*rotz(92);
rxplatform = phased.Platform('InitialPosition',[50;50;200],...
'Velocity',[0;0;0],'InitialOrientationAxes',rxradarAx,...
'OrientationAxesOutputPort',true);
rxarray = phased.URA('Element',antenna,'Size',[5,5],'ElementSpacing',0.4*lam*[1,1]);
In summary, four different coordinate systems are needed to describe the radar scenario: the global coordinate system and the local coordinate systems of the transmitting array, the receiving array, and the target.
The figure here illustrates the four coordinate systems. It is not to scale and does not accurately
represent the scenario in the example code.
Use a linear FM waveform as the transmitted signal. Assume a sampling frequency of 1 MHz, a pulse
repetition frequency of 5 kHz, and a pulse length of 100 microseconds. Set the transmitter peak
output power to 1000 W and the gain to 40 dB.
tau = 100e-6;
prf = 5000;
fs = 1e6;
waveform = phased.LinearFMWaveform('PulseWidth',tau,...
'OutputFormat','Pulses','NumPulses',1,'PRF',prf,'SampleRate',fs);
transmitter = phased.Transmitter('PeakPower',1000.0,'Gain',40);
filter = phased.MatchedFilter('Coefficients',getMatchedFilter(waveform));
Use free-space models for the propagation of the signal from the transmitting radar to the target and
back to the receiving radar.
channel1 = phased.FreeSpace('OperatingFrequency',fc,...
'TwoWayPropagation',false);
channel2 = phased.FreeSpace('OperatingFrequency',fc,...
'TwoWayPropagation',false);
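The dominant effect such a channel applies is the free-space path loss. As a rough sketch of that loss term only (the channel also models propagation delay and Doppler), in Python:

```python
import math

def fspl_db(rng_m, fc_hz, c=299792458.0):
    """One-way free-space path loss in dB: 20*log10(4*pi*R/lambda)."""
    lam = c / fc_hz
    return 20.0 * math.log10(4.0 * math.pi * rng_m / lam)

loss = fspl_db(1e3, 1e9)  # roughly 92.4 dB at 1 km range and 1 GHz
```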
Create a phase-shift beamformer. Point the mainlobe of the beamformer in a specific direction with
respect to the local receiver coordinate system. This direction is chosen to be one through which the
target passes at some time in its motion. This choice lets us demonstrate how the beamformer
response changes as the target passes through the mainlobe.
rxangsteer = [22.2244;-5.0615];
beamformer = phased.PhaseShiftBeamformer('SensorArray',rxarray,...
'DirectionSource','Property','Direction',rxangsteer,...
'PropagationSpeed',c,'OperatingFrequency',fc);
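A phase-shift beamformer weights the elements with the conjugate steering vector so that a wave from the steered direction sums coherently. A minimal one-dimensional sketch in Python (the 5 elements and 0.4λ spacing mirror this example, and element patterns are ignored):

```python
import cmath
import math

def steering_vector(n, d_over_lambda, theta_deg):
    """ULA steering vector for arrival angle theta (degrees from broadside)."""
    phase = 2.0 * math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    return [cmath.exp(1j * k * phase) for k in range(n)]

def response(steer_deg, arrive_deg, n=5, d_over_lambda=0.4):
    """|w^H a|: magnitude of the conventional phase-shift beamformer output."""
    w = steering_vector(n, d_over_lambda, steer_deg)
    a = steering_vector(n, d_over_lambda, arrive_deg)
    return abs(sum(wk.conjugate() * ak for wk, ak in zip(w, a)))

on_target = response(20.0, 20.0)    # equals n when the wave arrives from the steered direction
off_target = response(20.0, -30.0)  # strictly smaller away from the steered direction
```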
Simulation loop
Transmit 100 pulses of the waveform, one pulse per second.
t = 0;
Npulse = 100;
dt = 1;
azes1 = zeros(Npulse,1);
elevs1 = zeros(Npulse,1);
azes2 = zeros(Npulse,1);
elevs2 = zeros(Npulse,1);
rxsig = zeros(Npulse,1);
for k = 1:Npulse
t = t + dt;
wav = waveform();
Update the positions of the radars and targets. All positions and velocities are defined with respect to
the global coordinate system. Because the OrientationAxesOutputPort property of the target
System object™ is set to true, you can obtain the instantaneous local target axes, tgtAx1, from the
target method. These axes are needed to compute the target RCS. The array local axes are fixed so
you do not need to update them.
[txradarPos,txradarVel] = txplatform(dt);
[rxradarPos,rxradarVel] = rxplatform(dt);
[tgtPos,tgtVel,tgtAx1] = tgtplatform(dt);
Compute the instantaneous range and direction of the target from the transmitting radar. The
strength of the transmitted wave depends upon the array gain pattern. This pattern is a function of
direction angles with respect to the local radar axes. You can compute the direction of the target with
respect to the transmitter local axes using the rangeangle function with an optional argument that
specifies the local radar axes, txradarAx. (Without this additional argument, rangeangle returns
the azimuth and elevation angles with respect to the global coordinate system).
[~,tgtang_tlcs] = rangeangle(tgtPos,txradarPos,txradarAx);
azes1(k) = tgtang_tlcs(1);
elevs1(k) = tgtang_tlcs(2);
An alternative way to compute the direction angles is to first compute them in the global coordinate
system and then convert them using the global2localcoord function.
Create the transmitted waveform. The transmitted waveform is an amplified version of the generated
waveform.
txwaveform = transmitter(wav);
Radiate the signal in the instantaneous target direction. Recall that the radiator is not steered in this
direction but in an angle defined by the steering vector, txangsteer. The steering angle is chosen
because the target passes through this direction during its motion. A plot will let us see the
improvement in the response as the target moves into the main lobe of the radar.
txangsteer = [23.1203;-0.5357];
sv1 = steervec(fc,txangsteer);
wavrad = radiator(txwaveform,tgtang_tlcs,conj(sv1));
Propagate the signal from the transmitting radar to the target. Propagation coordinates are in the
global coordinate system.
wavprop1 = channel1(wavrad,txradarPos,tgtPos,txradarVel,tgtVel);
Reflect the waveform from target back to the receiving radar array. Use the simple angle-dependent
RCS model defined previously. Inputs to the RCS model are the azimuth and elevation of the incoming and
reflected rays with respect to the local target coordinate system.
[~,txang_tgtlcs] = rangeangle(txradarPos,tgtPos,tgtAx1);
[~,rxang_tgtlcs] = rangeangle(rxradarPos,tgtPos,tgtAx1);
rcs = rcsval(txang_tgtlcs(1),txang_tgtlcs(2),rxang_tgtlcs(1),rxang_tgtlcs(2));
wavreflect = target(wavprop1,rcs);
ns = size(wavreflect,1);
tm = [0:ns-1]/fs*1e6;
Propagate the signal from the target to the receiving radar. As before, all coordinates for signal
propagation are expressed in the global coordinate system.
wavprop2 = channel2(wavreflect,tgtPos,rxradarPos,tgtVel,rxradarVel);
Compute the response of the receiving antenna array in the direction from which the radiation is
coming. First, use the rangeangle function to compute the direction of the target with respect to
the receiving array local axes, by specifying the receiver local coordinate system, rxradarAx.
[tgtrange_rlcs,tgtang_rlcs] = rangeangle(tgtPos,rxradarPos,rxradarAx);
azes2(k) = tgtang_rlcs(1);
elevs2(k) = tgtang_rlcs(2);
Simulate an incoming plane wave at each element from the current direction of the target calculated
in the receiver local coordinate system.
wavcoll = collectPlaneWave(rxarray,wavprop2,tgtang_rlcs,fc);
Beamform the arriving wave. In this scenario, the receiver beamformer points in the direction,
rxangsteer, specified by the Direction property of the phased.PhaseShiftBeamformer
System object. When the target actually lies in that direction, the response of the array is maximized.
wavbf = beamformer(wavcoll);
Perform matched filtering of the beamformed received wave and then find and store the maximum value
of each pulse for display. This value will be plotted after the simulation loop ends.
y = filter(wavbf);
rxsig(k) = max(abs(y));
end
Plot the target track in azimuth and elevation with respect to the transmitter local coordinates. The
red circle denotes the direction toward which the transmitter array points.
plot(azes1,elevs1,'.b')
grid
xlabel('Azimuth (degrees)')
ylabel('Elevation (degrees)')
title('Target Track in Transmitter Local Coordinates')
hold on
plot(txangsteer(1),txangsteer(2),'or')
hold off
Plot the target track in azimuth and elevation with respect to the receiver local coordinates. The red
circle denotes the direction toward which the beamformer points.
plot(azes2,elevs2,'.b')
axis([-5.0,25.0,-5.0,5])
grid
xlabel('Azimuth (degrees)')
ylabel('Elevation (degrees)')
title('Target Track in Receiver Local Coordinates')
hold on
plot(rxangsteer(1),rxangsteer(2),'or')
hold off
Plot the returned signal amplitude vs azimuth in the receiver local coordinates. The value of the
amplitude depends on several factors.
plot(azes2,rxsig,'.')
grid
xlabel('Azimuth (degrees)')
ylabel('Amplitude')
title('Amplitude vs Azimuth in Receiver Local Coordinates')
Motion Modeling in Phased Array Systems
Extended bodies can undergo both translational and rotational motion in space. Phased Array System
Toolbox software supports modeling of translational motion.
Modeling translational platform motion requires the specification of a position and velocity vector.
Specification of a position vector implies a coordinate system. In the Phased Array System Toolbox,
platform position and velocity are specified in a “Global Coordinate System” on page 10-17. You can
think of the platform position as the displacement vector from the global origin or as the coordinates
of a point with respect to the global origin.
Let r0 denote the position vector at time 0 and v denote the velocity vector. The position vector of a
platform as a function of time, r(t), is:
r(t) = r0 + vt
When the platform represents a sensor element or array, it is important to know the orientation of the
element or array local coordinate axes. For example, the orientation of the local coordinate axes is
necessary to extract angle information from incident waveforms. See “Global and Local Coordinate
Systems” on page 10-17 for a description of global and local coordinate systems in the software.
Finally, for platforms with varying velocity, you must be able to update the velocity vector over time.
You can model platform position, velocity, and local axes orientation with the phased.Platform
object.
First, specify the update interval and the number of simulation steps.
PRF = 1e3;
Tstep = 1/PRF;
Nsteps = 10;
Next, construct a platform object specifying the platform initial position and velocity. Assume that the
initial position of the platform is 100 meters from the origin at (60,80,0). Assume the speed is
approximately 30 meters per second (m/s) with the constant velocity vector given by (15,25.98,0).
platform = phased.Platform('InitialPosition',[60;80;0],'Velocity',[15;25.98;0]);
The orientation of the local coordinate axes of the platform is the value of the
InitialOrientationAxes property. You can view the value of this property by entering
platform.InitialOrientationAxes at the MATLAB™ command prompt. Because the
InitialOrientationAxes property is not specified in the construction of the phased.Platform
System object™, the property is assigned its default value of [1 0 0;0 1 0;0 0 1]. Use the step
method to simulate the translational motion of the platform.
initialPos = platform.InitialPosition;
for k = 1:Nsteps
pos = platform(Tstep);
end
finalPos = pos + platform.Velocity*Tstep;
distTravel = norm(finalPos - initialPos)
distTravel = 0.3000
The step method returns the current position of the platform and then updates the platform position
based on the time step and velocity. Equivalently, the first time you invoke the step method, the
output is the position of the platform at t = 0.
Recall that the platform is moving with a constant velocity of approximately 30 m/s. The total time
elapsed is 0.01 seconds. Invoking the step method returns the current position of the platform and
then updates that position. Accordingly, you expect the final position to differ from the initial position
by 0.30 meters. Confirm this difference by examining the value of distTravel.
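The 0.30 m figure is just speed multiplied by elapsed time, which you can confirm independently:

```python
import math

v = (15.0, 25.98, 0.0)                    # platform velocity, m/s
elapsed = 10 * 1e-3                       # 10 steps of 1 ms each
speed = math.sqrt(sum(c * c for c in v))  # approximately 30 m/s
dist = speed * elapsed                    # approximately 0.30 m
```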
Sometimes you need to change the platform velocity over time. You can do so with the
phased.Platform System object™ because the Velocity property is tunable.
This example models a target initially at rest. The initial velocity vector is (0,0,0). Assume the time
step is 1 millisecond. After 500 milliseconds, the platform begins to move with a speed of
approximately 10 m/s. The velocity vector is (7.07,7.07,0). The platform continues at this velocity for
an additional 500 milliseconds.
Tstep = 1e-3;
Nsteps = 1/Tstep;
platform = phased.Platform('InitialPosition',[100;100;0]);
for k = 1:Nsteps/2
[tgtpos,tgtvel] = platform(Tstep);
end
platform.Velocity = [7.07; 7.07; 0];
for k = Nsteps/2+1:Nsteps
[tgtpos,tgtvel] = platform(Tstep);
end
Next, compute how the range from a stationary radar to a constant-velocity target changes over ten pulses.
PRF = 1e3;
Tstep = 1/PRF;
radar = phased.Platform('InitialPosition',[1000;1000;0]);
target = phased.Platform('InitialPosition',[5000;8000;0],...
'Velocity',[-30;-45;0]);
v = radialspeed(target.InitialPosition,target.Velocity,...
radar.InitialPosition);
[initRng,initAng] = rangeangle(target.InitialPosition,...
radar.InitialPosition);
Npulses = 10;
for num = 1:Npulses
tgtpos = target(Tstep);
end
[finalRng,finalAng] = rangeangle(tgtpos,radar.InitialPosition);
deltaRng = finalRng - initRng
deltaRng = -0.5396
The constant velocity of the target is approximately 54 m/s. The total time elapsed is 0.01 seconds.
The range between the target and the radar should decrease by approximately 54 centimeters.
Compare the initial range of the target, initRng, to the final range, finalRng, to confirm that this
decrease occurs.
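The 54 cm figure follows from projecting the target velocity onto the radar-to-target line, as this Python check shows:

```python
import math

radar = (1000.0, 1000.0, 0.0)  # radar position, m
tgt = (5000.0, 8000.0, 0.0)    # initial target position, m
vel = (-30.0, -45.0, 0.0)      # target velocity, m/s

rel = [t - r for t, r in zip(tgt, radar)]
init_rng = math.sqrt(sum(c * c for c in rel))             # approximately 8062.3 m
radial = sum(v * c for v, c in zip(vel, rel)) / init_rng  # approximately -53.96 m/s
delta_rng = radial * 0.01                                 # approximately -0.54 m over 10 ms
```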
See Also
Related Examples
• “Introduction to Space-Time Adaptive Processing” on page 17-231
Model Motion of Circling Airplane
Specify the initial position and velocity of the airplane. The airplane has a ground range of 10 km and
an altitude of 20 km.
range = 10000;
alt = 20000;
initPos = [cosd(60)*range;sind(60)*range;alt];
originPos = [1000,1000,0]';
originVel = [0,0,0]';
vs = 150.0;
phi = atan2d(initPos(2)-originPos(2),initPos(1)-originPos(1));
phi1 = phi + 90;
vx = vs*cosd(phi1);
vy = vs*sind(phi1);
initVel = [vx,vy,-20]';
platform = phased.Platform('MotionModel','Acceleration',...
'AccelerationSource','Input port','InitialPosition',initPos,...
'InitialVelocity',initVel,'OrientationAxesOutputPort',true,...
'InitialOrientationAxes',eye(3));
relPos = initPos - originPos;
relVel = initVel - originVel;
rel2Pos = [relPos(1),relPos(2),0]';
rel2Vel = [relVel(1),relVel(2),0]';
r = sqrt(rel2Pos'*rel2Pos);
accelmag = vs^2/r;
unitvec = rel2Pos/r;
accel = -accelmag*unitvec;
T = 0.5;
N = 1000;
Specify the acceleration of an object moving in a circle in the x-y plane. The acceleration is v^2/r
towards the origin.
posmat = zeros(3,N);
r1 = zeros(N,1);
v = zeros(N,1);
for n = 1:N
[pos,vel,oax] = platform(T,accel);
posmat(:,n) = pos;
vel2 = vel(1)^2 + vel(2)^2;
v(n) = sqrt(vel2);
relPos = pos - originPos;
rel2Pos = [relPos(1),relPos(2),0]';
r = sqrt(rel2Pos'*rel2Pos);
r1(n) = r;
accelmag = vel2/r;
unitvec = rel2Pos/r;
accel = -accelmag*unitvec;
end
disp(oax)
posmat = posmat/1000;
figure(1)
plot3(posmat(1,:),posmat(2,:),posmat(3,:),'b.')
hold on
plot3(originPos(1)/1000,originPos(2)/1000,originPos(3)/1000,'ro')
xlabel('X (km)')
ylabel('Y (km)')
zlabel('Z (km)')
grid
hold off
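The loop above keeps re-aiming a centripetal acceleration of magnitude v²/r at the origin. A fixed-step integration of the same rule, sketched in Python with illustrative values, stays on a near-circular path:

```python
import math

# Semi-implicit Euler integration of a = -(v^2/r) * rhat, the same
# centripetal-acceleration rule used above (illustrative radius and speed).
r0, speed, dt, nsteps = 1000.0, 50.0, 0.01, 1000
pos = [r0, 0.0]
vel = [0.0, speed]
for _ in range(nsteps):
    r = math.hypot(pos[0], pos[1])
    amag = speed * speed / r
    acc = [-amag * pos[0] / r, -amag * pos[1] / r]  # points at the origin
    vel = [vel[0] + acc[0] * dt, vel[1] + acc[1] * dt]
    pos = [pos[0] + vel[0] * dt, pos[1] + vel[1] * dt]

final_r = math.hypot(pos[0], pos[1])  # stays close to the initial radius
```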
Visualize Multiplatform Scenario
Specify the scenario refresh rate at 0.5 Hz. For 150 steps, the time duration of the scenario is 300 s.
updateRate = 0.5;
N = 150;
Set up the turning airplane using the Acceleration model of the phased.Platform System
object™. Specify the initial position of the airplane by range and azimuth from the ground-based
radar and its elevation. The airplane is 10 km from the radar at 60° azimuth and has an altitude of 6
km. The airplane is accelerating at 10 m/s² in the negative x-direction.
airplane1range = 10.0e3;
airplane1Azimuth = 60.0;
airplane1alt = 6.0e3;
airplane1Pos0 = [cosd(airplane1Azimuth)*airplane1range;...
sind(airplane1Azimuth)*airplane1range;airplane1alt];
airplane1Vel0 = [400.0;-100.0;-20];
airplane1Accel = [-10.0;0.0;0.0];
airplane1platform = phased.Platform('MotionModel','Acceleration',...
'AccelerationSource','Input port','InitialPosition',airplane1Pos0,...
'InitialVelocity',airplane1Vel0,'OrientationAxesOutputPort',true,...
'InitialOrientationAxes',eye(3));
Set up the stationary ground radar at the origin of the global coordinate system. To simulate a
rotating radar, change the ground radar beam steering angle in the processing loop.
groundRadarPos = [0,0,0]';
groundRadarVel = [0,0,0]';
groundradarplatform = phased.Platform('MotionModel','Velocity',...
'InitialPosition',groundRadarPos,'Velocity',groundRadarVel,...
'InitialOrientationAxes',eye(3));
groundVehiclePos = [5e3,2e3,0]';
groundVehicleVel = [50,50,0]';
groundvehicleplatform = phased.Platform('MotionModel','Velocity',...
'InitialPosition',groundVehiclePos,'Velocity',groundVehicleVel,...
'InitialOrientationAxes',eye(3));
airplane2Pos = [8.5e3,1e3,6000]';
airplane2Vel = [-300,100,20]';
airplane2platform = phased.Platform('MotionModel','Velocity',...
'InitialPosition',airplane2Pos,'Velocity',airplane2Vel,...
'InitialOrientationAxes',eye(3));
Set up the scenario viewer. Specify the radar as having a beam range of 8 km, a vertical beam width
of 30°, and a horizontal beam width of 2°. Annotate the tracks with position, speed, altitude, and
range.
BeamSteering = [0;50];
viewer = phased.ScenarioViewer('BeamRange',8.0e3,'BeamWidth',[2;30],'UpdateRate',updateRate,...
'PlatformNames',{'Ground Radar','Turning Airplane','Vehicle','Airplane 2'},'ShowPosition',true,...
'ShowSpeed',true,'ShowAltitude',true,'ShowLegend',true,'ShowRange',true,...
'Title','Multiplatform Scenario','BeamSteering',BeamSteering);
Step through the display processing loop, updating radar and target positions. Rotate the ground-
based radar steering angle by four degrees at each step.
for n = 1:N
[groundRadarPos,groundRadarVel] = groundradarplatform(updateRate);
[airplane1Pos,airplane1Vel,airplane1Axes] = airplane1platform(updateRate,airplane1Accel);
[vehiclePos,vehicleVel] = groundvehicleplatform(updateRate);
[airplane2Pos,airplane2Vel] = airplane2platform(updateRate);
viewer(groundRadarPos,groundRadarVel,[airplane1Pos,vehiclePos,airplane2Pos],...
[airplane1Vel,vehicleVel,airplane2Vel]);
BeamSteering = viewer.BeamSteering(1);
BeamSteering = mod(BeamSteering + 4,360.0);
if BeamSteering > 180.0
BeamSteering = BeamSteering - 360.0;
end
viewer.BeamSteering(1) = BeamSteering;
pause(0.2);
end
Doppler Shift and Pulse-Doppler Processing
For a narrowband signal propagating at the speed of light, the one-way Doppler shift in hertz is
Δf = ± v/λ
where v is the relative radial speed of the target with respect to the transmitter. For a target
approaching the receiver, the Doppler shift is positive. For a target receding from the transmitter, the
Doppler shift is negative.
You can use speed2dop to convert the relative radial speed to the Doppler shift in hertz. You can use
dop2speed to determine the radial speed of a target relative to a receiver based on the observed
Doppler shift.
freq = 1e9;
v = 23.0;
lambda = physconst('LightSpeed')/freq;
dopplershift = speed2dop(v,lambda)
dopplershift = 76.7197
The one-way Doppler shift is approximately 76.72 Hz. Because the target approaches the receiver, the
Doppler shift is positive.
freq = 9e9;
df = 400.0;
lambda = physconst('LightSpeed')/freq;
speed = dop2speed(df,lambda)
speed = 13.3241
The slow-time data are sampled at the pulse repetition frequency (PRF) and therefore the DFT of the
slow-time data for a given range bin yields an estimate of the Doppler spectrum from [-PRF/2, PRF/2]
Hz. Because the slow-time data are complex-valued, the DFT magnitudes are not necessarily an even
function of the Doppler frequency. This removes the ambiguity between a Doppler shift corresponding
to an approaching target (positive Doppler shift) and one corresponding to a receding target (negative Doppler shift). The resolution
in the Doppler domain is PRF/N where N is the number of slow-time samples. You can pad the
spectral estimate of the slow-time data with zeros to interpolate the DFT frequency grid and improve
peak detection, but this does not improve the Doppler resolution.
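As a quick numerical sketch of this point, compare the Doppler resolution to the DFT grid spacing after zero-padding. The PRF, pulse count, and DFT length here are illustrative values, not ones fixed by the text above.

```matlab
% Doppler resolution vs. zero-padded DFT grid spacing (illustrative values)
prf = 1e4;              % pulse repetition frequency (Hz)
N = 10;                 % number of slow-time samples (pulses)
dopres = prf/N;         % Doppler resolution: 1000 Hz
nfft = 256;             % zero-padded DFT length
gridspacing = prf/nfft; % ~39 Hz grid spacing
% Zero-padding refines the frequency grid, which helps interpolate peak
% locations, but the ability to separate two nearby Doppler frequencies
% is still set by dopres = prf/N.
```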
Pulse-Doppler processing of the slow-time data consists of two steps:
• Detecting a target in the range dimension (fast-time samples). This step gives the range bin to analyze
in the slow-time dimension.
• Computing the DFT of the slow-time samples corresponding to the specified range bin, identifying
significant peaks in the magnitude spectrum, and converting the corresponding Doppler frequencies
to speeds.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
Define the System objects needed for this example and set their properties. Seed the random number
generator for the phased.ReceiverPreamp System object™ to produce repeatable results.
waveform = phased.RectangularWaveform('SampleRate',5e6,...
'PulseWidth',6e-7,'OutputFormat','Pulses',...
'NumPulses',1,'PRF',1e4);
target = phased.RadarTarget('Model','Nonfluctuating',...
'MeanRCS',1,'OperatingFrequency',1e9);
targetpos = phased.Platform('InitialPosition',[1000; 1000; 0],...
'Velocity',[-100; -100; 0]);
antenna = phased.IsotropicAntennaElement(...
'FrequencyRange',[5e8 5e9]);
transmitter = phased.Transmitter('PeakPower',5e3,'Gain',20,...
'InUseOutputPort',true);
transpos = phased.Platform('InitialPosition',[0;0;0],...
'Velocity',[0;0;0]);
radiator = phased.Radiator('OperatingFrequency',1e9,'Sensor',antenna);
collector = phased.Collector('OperatingFrequency',1e9,'Sensor',antenna);
channel = phased.FreeSpace('SampleRate',waveform.SampleRate,...
'OperatingFrequency',1e9,'TwoWayPropagation',false);
receiver = phased.ReceiverPreamp('Gain',0,'LossFactor',0,...
'SampleRate',5e6,'NoiseFigure',5,...
'EnableInputPort',true,'SeedSource','Property','Seed',1e3);
This loop transmits ten successive rectangular pulses toward the target, reflects the pulses off the
target, collects the reflected pulses at the receiver, and updates the target position with the specified
constant velocity.
NumPulses = 10;
sig = waveform(); % get waveform
transpos = transpos.InitialPosition; % get transmitter position
rxsig = zeros(length(sig),NumPulses);
% transmit and receive ten pulses
for n = 1:NumPulses
% update target position
[tgtpos,tgtvel] = targetpos(1/waveform.PRF);
[tgtrng,tgtang] = rangeangle(tgtpos,transpos);
tpos(n) = tgtrng;
[txsig,txstatus] = transmitter(sig); % transmit waveform
txsig = radiator(txsig,tgtang); % radiate waveform toward target
txsig = channel(txsig,transpos,tgtpos,[0;0;0],tgtvel); % propagate waveform to target
txsig = target(txsig); % reflect the signal
% propagate waveform from the target to the transmitter
txsig = channel(txsig,tgtpos,transpos,tgtvel,[0;0;0]);
txsig = collector(txsig,tgtang); % collect signal
rxsig(:,n) = receiver(txsig,~txstatus); % receive the signal
end
The matrix rxsig contains the echo data in a 500-by-10 matrix where the row dimension contains the
fast-time samples and the column dimension contains the slow-time samples. In other words, each
row in the matrix contains the slow-time samples from a specific range bin.
Construct a linearly-spaced grid corresponding to the range bins from the fast-time samples. The
range bins extend from 0 meters to the maximum unambiguous range.
prf = waveform.PRF;
fs = waveform.SampleRate;
fasttime = unigrid(0,1/fs,1/prf,'[)');
rangebins = (physconst('LightSpeed')*fasttime)/2;
Next, detect the range bins which contain targets. In this simple scenario, no matched filtering or
time-varying gain compensation is utilized.
In this example, set the false-alarm probability to 10⁻⁹. Use noncoherent integration of the ten
rectangular pulses and determine the corresponding threshold for detection in white Gaussian noise.
Because this scenario contains only one target, take the largest peak above the threshold. Display the
estimated target range.
probfa = 1e-9;
NoiseBandwidth = 5e6/2;
npower = noisepow(NoiseBandwidth,...
receiver.NoiseFigure,receiver.ReferenceTemperature);
thresh = npwgnthresh(probfa,NumPulses,'noncoherent');
thresh = sqrt(npower*db2pow(thresh));
[pks,range_detect] = findpeaks(pulsint(rxsig,'noncoherent'),...
'MinPeakHeight',thresh,'SortStr','descend');
range_estimate = rangebins(range_detect(1));
Extract the slow-time samples corresponding to the range bin containing the detected target.
Compute the power spectral density estimate of the slow-time samples using periodogram function
and find the peak frequency. Convert the peak Doppler frequency to speed using the dop2speed
function. A positive Doppler shift indicates that the target is approaching the transmitter. A negative
Doppler shift indicates that the target is moving away from the transmitter.
ts = rxsig(range_detect(1),:).';
[Pxx,F] = periodogram(ts,[],256,prf,'centered');
plot(F,10*log10(Pxx))
grid
xlabel('Frequency (Hz)')
ylabel('Power (dB)')
title('Periodogram Spectrum Estimate')
[Y,I] = max(Pxx);
lambda = physconst('LightSpeed')/1e9;
tgtspeed = dop2speed(F(I)/2,lambda);
fprintf('Estimated range of the target is %4.2f meters.\n',...
range_estimate)
if F(I)>0
fprintf('The target is approaching the radar.\n')
else
fprintf('The target is moving away from the radar.\n')
end
The true radial speed of the target is detected within the Doppler resolution and the range of the
target is detected within the range resolution of the radar.
See Also
Related Examples
• “Doppler Estimation” on page 17-295
• “Scan Radar Using a Uniform Rectangular Array” on page 17-468
11
Using Polarization
Polarized Fields
In this section...
“Introduction to Polarization” on page 11-2
“Linear and Circular Polarization” on page 11-3
“Elliptic Polarization” on page 11-6
“Linear and Circular Polarization Bases” on page 11-9
“Sources of Polarized Fields” on page 11-12
“Scattering Cross-Section Matrix” on page 11-18
“Polarization Loss Due to Field and Receiver Mismatch” on page 11-22
“Model Radar Transmitting Polarized Radiation” on page 11-24
Introduction to Polarization
You can use the Phased Array System Toolbox software to simulate radar systems that transmit,
propagate, reflect, and receive polarized electromagnetic fields. By including this capability, the
toolbox can realistically model the interaction of radar waves with targets and the environment.
It is a basic property of plane waves in free-space that the directions of the electric and magnetic
field vectors are orthogonal to their direction of propagation. The direction of propagation of an
electromagnetic wave is determined by the Poynting vector
S=E×H
In this equation, E represents the electric field and H represents the magnetic field. The quantity, S,
represents the magnitude and direction of the wave’s energy flux. Maxwell’s equations, when applied
to plane waves, produce the result that the electric and magnetic fields are related by
E = −Z s × H
H = (1/Z) s × E
The vector s, the unit vector in the S direction, represents the direction of propagation of the wave.
The quantity Z is the wave impedance and is a function of the electric permittivity and the magnetic
permeability of the medium in which the wave travels.
After manipulating the two equations, you can see that the electric and magnetic fields are
orthogonal to the direction of propagation
E · s = H · s = 0.
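A short numeric check of these relations can be done with cross products. The specific vectors and the rounded impedance value below are illustrative assumptions, not values from the text.

```matlab
% Verify that E, H, and the propagation direction s are mutually orthogonal
s = [1;0;0];          % unit propagation vector (illustrative)
E = [0;2;1];          % electric field chosen orthogonal to s
Z = 377;              % approximate free-space wave impedance (ohms)
H = cross(s,E)/Z;     % H = (1/Z) s x E
S = cross(E,H);       % Poynting vector; points along s
dot(E,s)              % 0: E is orthogonal to s
dot(H,s)              % 0: H is orthogonal to s
```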
This last result proves that there are really only two independent components of the electric field,
labeled Ex and Ey. Similarly, the magnetic field can be expressed in terms of two independent
components. Because of the orthogonality of the fields, the electric field can be represented in terms
of two unit vectors orthogonal to the direction of propagation.
E = Ex ex + Ey ey
The unit vectors, together with the unit vector in the direction of propagation,
(ex, ey, s),
form a right-handed orthonormal triad. Later, these vectors and the coordinates they define will be
related to the coordinates of a specific radar system. In radar systems, it is common to use the
subscripts, H and V, denoting the horizontal and vertical components, instead of x and y. Because the
electric and magnetic fields are determined by each other, only the properties of the electric field
need be considered.
For a radar system, the electric and magnetic field are actually spherical waves, rather than plane
waves. However, in practice, these fields are usually measured in the far field region or radiation
zone of the radar source and are approximately plane waves. In the far field, the waves are called
quasi-plane waves. A point lies in the far field if its distance, R, from the source satisfies R ≫ D²/λ
where D is a typical dimension of the source, whether it is a single antenna or an array of antennas.
Polarization applies to purely sinusoidal signals. The most general expression for a sinusoidal plane-
wave has the form
E = Ex0 cos(ωt − k·x + ϕx) ex + Ey0 cos(ωt − k·x + ϕy) ey = Ex ex + Ey ey
The quantities Ex0 and Ey0 are the real-valued, non-negative amplitudes of the components of the
electric field, and ϕx and ϕy are the phases of the components. This expression is the most general one used for a
polarized wave. An electromagnetic wave is polarized if the ratio of the amplitudes of its components
and the phase difference between its components do not change with time. The definition of polarization
can be broadened to include narrowband signals, for which the bandwidth is small compared to the
center or carrier frequency of the signal. The amplitude ratio and phase difference vary slowly with
time when compared to the period of the wave and may be thought of as constant over many
oscillations.
Linear and Circular Polarization
You can usually suppress the spatial dependence of the field and write the electric field vector as
E = Ex0 cos(ωt + ϕx) ex + Ey0 cos(ωt + ϕy) ey = Ex ex + Ey ey
First, suppose that the phases are equal, ϕx = ϕy. Then, eliminating the time dependence between the two components shows that the tip of the electric field vector follows the line
Ey = (Ey0/Ex0) Ex
This equation represents a straight line through the origin with positive slope. Conversely, suppose ϕx
= ϕy + π. Then, the tip of the electric field vector follows a straight line through the origin with
negative slope
Ey = −(Ey0/Ex0) Ex
These two polarization cases are called linearly polarized because the field always oscillates along a
straight line in the orthogonal plane. If Ex0 = 0, the field is vertically polarized, and if Ey0 = 0, the field
is horizontally polarized.
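A minimal numeric sketch of the linearly polarized case follows. The amplitudes and common phase are arbitrary choices for illustration.

```matlab
% With equal phases (phix = phiy), the tip of the field vector traces a line
Ex0 = 2; Ey0 = 1; phi = 0.3;       % illustrative amplitudes and common phase
t = linspace(0,2*pi,201);
Ex = Ex0*cos(t + phi);
Ey = Ey0*cos(t + phi);
max(abs(Ey - (Ey0/Ex0)*Ex))        % ~0: the tip stays on the line Ey = (Ey0/Ex0)*Ex
```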
A different case occurs when the amplitudes are the same, Ex0 = Ey0, but the phases differ by ±π/2:
Ex = E0 cos(ωt + ϕ)
Ey = E0 cos(ωt + ϕ ± π/2) = ∓E0 sin(ωt + ϕ)
By squaring both sides, you can show that the tip of the electric field vector obeys the equation of a
circle:
Ex² + Ey² = E0²
While this equation gives the path the vector takes, it does not tell you in what direction the electric
field vector travels around the circle. Does it rotate clockwise or counterclockwise? The rotation
direction depends upon the sign of π/2 in the phase. You can see this dependency by examining the
motion of the tip of the vector field. Assume the common phase angle, ϕ = 0. This assumption is
permissible because the common phase only determines starting position of the vector and does not
change the shape of its path. First, look at the +π/2 case for a wave travelling along the s-direction
(out of the page). At t=0, the vector points along the x-axis. One quarter period later, the vector
points along the negative y-axis. After another quarter period, it points along the negative x-axis.
MATLAB uses the IEEE convention to assign the names right-handed or left-handed polarization to
the direction of rotation of the electric vector, rather than clockwise or counterclockwise. When using
this convention, left or right handedness is determined by pointing your left or right thumb along the
direction of propagation of the wave. Then, align the curve of your fingers to the direction of rotation
of the field at a given point in space. If the rotation follows the curve of your left hand, then the wave
is left-handed polarized. If the rotation follows the curve of your right hand, then the wave is right-
handed polarized. In the preceding scenario, the field is left-handed circularly polarized (LHCP). The
phase difference –π/2 corresponds to right-handed circularly polarized wave (RHCP). The following
figure provides a three-dimensional view of what a LHCP electromagnetic wave looks like as it moves
in the s-direction.
When the terms clockwise or counterclockwise are used they depend upon how you look at the wave.
If you look along the direction of propagation, then the clockwise direction corresponds to right-
handed polarization and counterclockwise corresponds to left-handed polarization. If you look toward
where the wave is coming from, then clockwise corresponds to left-handed polarization and
counterclockwise corresponds to right-handed polarization.
The figure below shows the appearance of linear and circularly polarized fields as they move towards
you along the s-direction.
Elliptic Polarization
Besides the linear and circular states of polarization, a third type of polarization is elliptic
polarization. Elliptic polarization includes linear and circular polarization as special cases.
As with linear or circular polarization, you can remove the time dependence to obtain the locus of
points that the tip of the electric field vector travels
(Ex/Ex0)² + (Ey/Ey0)² − 2 (Ex/Ex0)(Ey/Ey0) cos ϕ = sin²ϕ
In this case, φ = φy – φx. This equation represents a tilted two-dimensional ellipse. Its size and shape
are determined by the component amplitudes and phase difference. The presence of the cross term
indicates that the ellipse is tilted. The equation does not, just as in the circularly polarized case,
provide any information about the rotation direction. For example, the following figure shows the
instantaneous state of the electric field but does not indicate the direction in which the field is
rotating.
The size and shape of a two-dimensional ellipse can be defined by three parameters. These
parameters are the lengths of its two axes, the semi-major axis, a, and semi-minor axis, b, and a tilt
angle, τ. The following figure illustrates the three parameters of a tilted ellipse. You can derive them
from the two electric field amplitudes and phase difference.
Polarization Ellipse
Polarization can best be understood in terms of complex signals. The complex representation of a
polarized wave has the form
E = Ex0 e^(iϕx) e^(iωt) ex + Ey0 e^(iϕy) e^(iωt) ey = (Ex0 e^(iϕx) ex + Ey0 e^(iϕy) ey) e^(iωt)
Define the complex polarization ratio as the ratio of the complex amplitudes
ρ = (Ey0/Ex0) e^(i(ϕy − ϕx)) = (Ey0/Ex0) e^(iϕ)
where ϕ = ϕy – ϕx.
It is useful to introduce the polarization vector. For the complex polarized electric field above, the
polarization vector, P, is obtained by normalizing the electric field
The overall size of the polarization ellipse is not important because that can vary as the wave travels
through space, especially through geometric attenuation. What is important is the shape of the
ellipse. Thus, the significant ellipse parameters are the ratio of its axis dimensions, a/b, called the
axial ratio, and the tilt angle, τ. Both of these quantities can be determined from the ratio of the
component amplitudes and the phase difference, or, equivalently, from the polarization ratio. Another
quantity, equivalent to the axial ratio, is the ellipticity angle, ε.
In the Phased Array System Toolbox software, you can use the polratio function to convert the
complex amplitudes fv=[Ey;Ex] to the polarization ratio.
p = polratio(fv)
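For a concrete field, the polarization ratio can also be computed directly from its definition. The amplitudes and phases below are made up for illustration.

```matlab
% Complex polarization ratio from illustrative amplitudes and phases
Ex0 = 1.0; phix = 0;
Ey0 = 2.0; phiy = pi/3;
fv = [Ey0*exp(1i*phiy); Ex0*exp(1i*phix)];   % fv = [Ey;Ex], as above
rho = fv(1)/fv(2)                            % (Ey0/Ex0)*exp(1i*(phiy-phix))
```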
Tilt Angle
The tilt angle is defined as the positive (counterclockwise) rotation angle from the x-axis to the semi-
major axis of the ellipse. Because of the symmetry properties of the ellipse, the tilt angle, τ, need only
be defined in the range –π/2 ≤ τ ≤ π/2. You can find the tilt angle by determining the rotated
coordinate system in which the semi-major and semi-minor axes align with the rotated coordinate
axes. Then, the ellipse equation has no cross-terms. The solution takes the form
tan 2τ = (2 Ex0 Ey0 cos ϕ)/(Ex0² − Ey0²)
where φ = φy – φx. Notice that you can rewrite this equation strictly in terms of the amplitude ratio
and the phase difference.
After solving for the tilt angle, you can determine the semi-major and semi-minor axis lengths.
Conceptually, you rotate the ellipse clockwise by the tilt angle and measure the lengths of the
intersections of the ellipse with the x- and y-axes. The point of intersection with the larger value is
the semi-major axis, a, and the one with the smaller value is the semi-minor axis, b.
The axial ratio is defined as AR = a/b and, by construction, is always greater than or equal to one. The
ellipticity angle is defined by
tan ε = ∓ b/a
You can express the ellipticity angle in terms of the amplitude ratio using an auxiliary angle, α, defined by
tan α = Ey0/Ex0
together with the phase difference, ϕ:
sin 2ε = sin 2α sin ϕ
Both the axial ratio and ellipticity angle are defined from the amplitude ratio and phase difference
and are independent of the overall magnitude of the field.
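The tilt and ellipticity formulas can be evaluated numerically as a sketch. The amplitudes and phase difference are illustrative, and the sign conventions simply follow the formulas above.

```matlab
% Tilt and ellipticity angles from amplitudes and phase difference
Ex0 = 2.0; Ey0 = 1.0; phi = pi/4;                 % phi = phiy - phix (illustrative)
tau = 0.5*atan2(2*Ex0*Ey0*cos(phi), Ex0^2-Ey0^2); % tilt angle (rad)
alpha = atan2(Ey0,Ex0);                           % auxiliary amplitude angle
epsilon = 0.5*asin(sin(2*alpha)*sin(phi));        % ellipticity angle (rad)
```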
Rotation Sense
For elliptic polarization, just as with circular polarization, you need another parameter to completely
describe the ellipse. This parameter must provide the rotation sense or the direction that the tip of
the electric (or magnetic) field vector moves in time. The rate of change of the angle that the field vector
makes with the x-axis is proportional to –sin ϕ, where ϕ is the phase difference. If sin ϕ is positive, the
rate of change is negative, indicating that the field has left-handed polarization. If sin ϕ is negative,
the rate of change is positive, indicating right-handed polarization.
The function polellip lets you find the values of the parameters of the polarization ellipse from
either the field component vector fv=[Ey;Ex] or the polarization ratio, p.
fv=[Ey;Ex];
[tau,epsilon,ar,rs] = polellip(fv);
p = polratio(fv);
[tau,epsilon,ar,rs] = polellip(p);
The variables tau, epsilon, ar and rs represent the tilt angle, ellipticity angle, axial ratio and
rotation sense, respectively. Both syntaxes give the same result.
This table summarizes several common polarization states and the values of the amplitudes,
phases, and polarization ratio that produce them:
In this equation, the positive sign is for the LHCP field and the negative sign is for the RHCP field.
These two special combinations can be given a new name. Define a new basis vector set, called the
circular basis set
er = (1/√2)(ex − i ey)
el = (1/√2)(ex + i ey)
You can express any polarized field in terms of the circular basis set instead of the linear basis set.
Conversely, you can also write the linear polarization basis in terms of the circular polarization basis
ex = (1/√2)(er + el)
ey = (i/√2)(er − el)
Any general elliptic field can be written as a combination of circular basis vectors
E = El el + Er er
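As a sketch consistent with the basis equations above, you can project a linear-basis field onto the circular basis directly. The field values here correspond to a right-circular Jones vector; the projection formulas are derived from the basis definitions, not quoted from the text.

```matlab
% Decompose a right-circular field [Ex;Ey] into circular components
Ex = 1/sqrt(2); Ey = -1i/sqrt(2);   % RHCP Jones vector in the linear basis
El = (Ex - 1i*Ey)/sqrt(2);          % left-circular component:  0
Er = (Ex + 1i*Ey)/sqrt(2);          % right-circular component: 1
```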
Jones Vector
The polarized field is orthogonal to the wave’s direction of propagation. Thus, the field can be
completely specified by the two complex components of the electric field vector in the plane of
polarization. The formulation of a polarized wave in terms of two-component vectors is called the
Jones vector formulation. The Jones vector formulation can be expressed in either a linear basis or a
circular basis or any basis. This table shows the representation of common polarizations in a linear
basis and circular basis.
Common Polarizations Jones Vector in Linear Basis Jones Vector in Circular Basis
Vertical [0;1] 1/sqrt(2)*[-1;1]
Horizontal [1;0] 1/sqrt(2)*[1;1]
45° Linear 1/sqrt(2)*[1;1] 1/sqrt(2)*[1-1i;1+1i]
135° Linear 1/sqrt(2)*[1;-1] 1/sqrt(2)*[1+1i;1-1i]
Right Circular 1/sqrt(2)*[1;-1i] [0;1]
Left Circular 1/sqrt(2)*[1;1i] [1;0]
The measurable intensities are the Stokes parameters, S0, S1, S2, and S3. The first Stokes parameter,
S0, describes the total intensity of the field. The second parameter, S1, describes the preponderance
of linear horizontally polarized intensity over linear vertically polarized intensity. The third parameter,
S2, describes the preponderance of linearly +45° polarized intensity over linearly 135° polarized
intensity. Finally, S3 describes the preponderance of right circularly polarized intensity over left
circularly polarized intensity. The Stokes parameters are defined as
S0 = Ex0² + Ey0²
S1 = Ex0² − Ey0²
S2 = 2 Ex0 Ey0 cos ϕ
S3 = 2 Ex0 Ey0 sin ϕ
For completely polarized fields, you can show by time averaging the polarization ellipse equation that
S0² = S1² + S2² + S3²
For partially polarized fields, in contrast, the Stokes parameters satisfy the inequality
S0² > S1² + S2² + S3²
The Stokes parameters are related to the tilt and ellipticity angles, τ and ε
S1 = S0 cos 2τ cos 2ε
S2 = S0 sin 2τ cos 2ε
S3 = S0 sin 2ε
and inversely by
tan 2τ = S2/S1
sin 2ε = S3/S0
After you measure the Stokes parameters, the shape of the ellipse is completely determined by the
preceding equations.
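The Stokes definitions and the fully polarized identity can be checked numerically. The amplitudes and phase difference below are illustrative values.

```matlab
% Stokes parameters from illustrative amplitudes and phase difference
Ex0 = 2.0; Ey0 = 1.0; phi = pi/3;
S0 = Ex0^2 + Ey0^2;
S1 = Ex0^2 - Ey0^2;
S2 = 2*Ex0*Ey0*cos(phi);
S3 = 2*Ex0*Ey0*sin(phi);
S0^2 - (S1^2 + S2^2 + S3^2)   % ~0 for a completely polarized field
```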
The Poincaré sphere can help you visualize the state of a polarized wave. Any point
on or in the sphere represents a state of polarization determined by the four Stokes parameters, S0,
S1, S2, and S3. On the Poincaré sphere, the angle from the S1-S2 plane to a point on the sphere is
twice the ellipticity angle, ε. The angle from the S1- axis to the projection of the point into the S1-S2
plane is twice the tilt angle, τ.
As an example, solve for the Stokes parameters of a RHCP field, fv=[1,-i], using the stokes
function.
S = stokes(fv)
S =
2
0
0
-2
For transmitting antennas, the shape of the antenna is chosen to enhance the power projected into a
given direction. For receiving antennas, you choose the shape of the antenna to enhance the power
received from a particular direction. Often, many transmitting antennas or receiving antennas are
formed into an array. Arrays increase the transmitted power for a transmitting system or the
sensitivity for a receiving system. They improve directivity over a single antenna.
Each antenna or array has an associated local Cartesian coordinate system (x,y,z) as shown in the
following figure. See “Global and Local Coordinate Systems” on page 10-17 for more information. The
local coordinate system can also be represented by a spherical coordinate system using azimuth,
elevation and range coordinates, az, el, r, or alternately written, (φ,θ,r), as shown. At each point in
the far field, you can create a set of unit spherical basis vectors, (eH, eV, r). The basis vectors are
aligned with the (φ,θ,r) directions, respectively. In the far field, the electric field is orthogonal to the
unit vector r. The components of a polarized field with respect to this basis, (EH,EV), are called the
horizontal and vertical components of the polarized field. In radar, it is common to use (H,V) instead
of (x,y) to denote the components of a polarized field. In the far field, the polarized electric field takes
the form
E = (e^(ikr)/r) F(ϕ,θ) = (e^(ikr)/r) [FH(ϕ,θ) eH + FV(ϕ,θ) eV]
In this equation, the quantity F(φ,θ) is called the vector radiation pattern of the source and contains
the angular dependence of the field in the far-field region.
The simplest polarized antenna is the dipole antenna, which consists of a split length of wire coupled at
the middle to a coaxial cable. The simplest dipole, from a mathematical perspective, is the Hertzian
dipole, in which the length of wire is much shorter than a wavelength. A diagram of the short dipole
antenna of length L appears in the next figure. This antenna is fed by a coaxial feed which splits into
two equal length wires of length L/2. The current, I, moves along the z-axis and is assumed to be the
same at all points in the wire.
Er = 0
EH = 0
EV = −(i Z0 I L)/(2λ) (e^(−ikr)/r) cos el
The next example computes the vertical and horizontal polarization components of the field. The
vertical component is a function of elevation angle and is axially symmetric. The horizontal
component vanishes everywhere.
The toolbox lets you model a short dipole antenna using the
phased.ShortDipoleAntennaElement System object.
Short-Dipole Polarization Components
Compute the vertical and horizontal polarization components of the field created by a short-dipole
antenna pointed along the z-direction. Plot the components as a function of elevation angle from 0° to
360°.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
antenna = phased.ShortDipoleAntennaElement(...
'FrequencyRange',[1,2]*1e9,'AxisDirection','Z');
Compute the antenna response. Because the elevation angle argument to antenna is restricted to
±90°, compute the responses for 0° azimuth and then for 180° azimuth. Combine the two responses
in the plot. The operating frequency of the antenna is 1.5 GHz.
el = [-90:90];
az = zeros(size(el));
fc = 1.5e9;
resp = antenna(fc,[az;el]);
az = 180.0*ones(size(el));
resp1 = antenna(fc,[az;el]);
figure(1)
subplot(121)
polar(el*pi/180.0,abs(resp.V.'),'b')
hold on
polar((el+180)*pi/180.0,abs(resp1.V.'),'b')
str = sprintf('%s\n%s','Vertical Polarization','vs Elevation Angle');
title(str)
hold off
subplot(122)
polar(el*pi/180.0,abs(resp.H.'),'b')
hold on
polar((el+180)*pi/180.0,abs(resp1.H.'),'b')
str = sprintf('%s\n%s','Horizontal Polarization','vs Elevation Angle');
title(str)
hold off
You can use a cross-dipole antenna to generate circularly-polarized radiation. The crossed-dipole
antenna consists of two identical but orthogonal short-dipole antennas that are phased 90° apart. A
diagram of the crossed dipole antenna appears in the following figure. The electric field created by a
crossed-dipole antenna constructed from a y-directed short dipole and a z-directed short dipole has
the form
Er = 0
EH = −(i Z0 I L)/(2λ) (e^(−ikr)/r) cos az
EV = (i Z0 I L)/(2λ) (e^(−ikr)/r) (sin el sin az + i cos el)
The polarization ratio EV/EH, when evaluated along the x-axis, is just –i which means that the
polarization is exactly RHCP along the x-axis. It is predominantly RHCP when the observation point is
close to the x-axis. Moving away from the x-axis, the field becomes a mixture of LHCP and RHCP
polarizations. Along the –x-axis, the field is LHCP polarized. The figure illustrates, for a point near the
x-axis, that the field is primarily RHCP.
This example plots the right-hand and left-hand circular polarization components of fields generated
by a crossed-dipole antenna at 1.5 GHz. You can see how the circular polarization changes from pure
RHCP at 0 degrees azimuth angle to pure LHCP at 180 degrees azimuth angle, both at 0 degrees
elevation angle.
Note: This example runs only in R2016b or later. If you are using an earlier release, replace each call
to the function with the equivalent step syntax. For example, replace myObject(x) with
step(myObject,x).
fc = 1.5e9;
antenna = phased.CrossedDipoleAntennaElement('FrequencyRange',[1,2]*1e9);
Compute the left-handed and right-handed circular polarization components from the antenna
response.
az = [-180:180];
el = zeros(size(az));
resp = antenna(fc,[az;el]);
cfv = pol2circpol([resp.H.';resp.V.']);
clhp = cfv(1,:);
crhp = cfv(2,:);
polar(az*pi/180.0,abs(clhp))
hold on
polar(az*pi/180.0,abs(crhp))
title('LHCP and RHCP vs Azimuth Angle')
legend('LHCP','RHCP')
hold off
You can create polarized fields from arrays by using polarized antenna elements as a value of the
Elements property of an array System object. All Phased Array System Toolbox arrays support
polarization.
When a polarized field is scattered by an object, a change in the polarization of the field can be
observed. The exact way that the polarization changes depends upon the properties of the scattering
object. The quantity describing the response of an object to the incident field is called the radar
scattering cross-section matrix (RSCM), S. You can measure the scattering matrix as follows. When a
unit amplitude horizontally polarized wave is scattered, both a horizontal and a vertical scattered
component are produced. Call these two components SHH and SVH. These components are complex
numbers containing the amplitude and phase changes from the incident wave. Similarly, when a unit
amplitude vertically polarized wave is scattered, the horizontal and vertical scattered components
produced are SHV and SVV. Because any incident field can be decomposed into horizontal and vertical
components, you can arrange these quantities into a matrix and write the scattered field in terms of
the incident field
[E_H^(scat); E_V^(scat)] = (4π/λ^2) [S_HH  S_VH; S_HV  S_VV] [E_H^(inc); E_V^(inc)] = (4π/λ^2) S [E_H^(inc); E_V^(inc)]
In general, the scattering cross-section matrix depends upon the angles that the incident and
scattered fields make with the object. When the incident field is scattered back to the transmitting
antenna, or backscattered, the scattering matrix is symmetric.
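The scattered-field relation above can be sketched in NumPy; the wavelength and scattering matrix below are illustrative placeholders, not values from the text:

```python
import numpy as np

lam = 0.2                              # wavelength in meters (placeholder value)
S = np.array([[2j, 0.5],
              [0.5, -1j]])             # example backscatter matrix [SHH SVH; SHV SVV]

E_inc = np.array([1.0, 0.0])           # unit-amplitude horizontally polarized field
E_scat = (4 * np.pi / lam**2) * S @ E_inc
print(E_scat)                          # scattered [EH; EV] components

# Backscattered fields have a symmetric scattering matrix
print(np.allclose(S, S.T))             # True
```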
Polarization Signature
To understand how the scattered wave depends upon the polarization of the incident wave, you need
to examine all possible scattered field polarizations for each incident polarization. Because this
amount of data is difficult to visualize, consider two cases:
• For the copolarization case, the scattered polarization has the same polarization as the incident
field.
• For the cross-polarization case, the scattered polarization has an orthogonal polarization to the
incident field.
You can represent the incident polarizations in terms of the tilt angle-ellipticity angle pair (τ, ε). Every
unit incident polarization vector can be expressed as
[E_H^(inc); E_V^(inc)] = [cos τ  -sin τ; sin τ  cos τ] [cos ε; j sin ε]
When you have an RSCM matrix, S, form the copolarization signature by computing
P^(co) = [E_H^(inc)  E_V^(inc)]* S [E_H^(inc); E_V^(inc)]
where []* denotes complex conjugation. To obtain the cross-polarization signature, compute
P^(cross) = [E_H^(inc)⊥  E_V^(inc)⊥]* S [E_H^(inc); E_V^(inc)]
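A NumPy sketch of the copolarization signature formula, using the example scattering matrix that appears later in this section (the toolbox polsignature function performs an equivalent computation):

```python
import numpy as np

S = np.array([[2j, 0.5],
              [0.5, -1j]])                 # example scattering matrix

ellip = np.deg2rad(np.arange(-45, 46))     # ellipticity angles, epsilon
tilt = np.deg2rad(np.arange(-90, 91))      # tilt angles, tau

P = np.zeros((ellip.size, tilt.size))
for i, eps in enumerate(ellip):
    for j, tau in enumerate(tilt):
        rot = np.array([[np.cos(tau), -np.sin(tau)],
                        [np.sin(tau),  np.cos(tau)]])
        E = rot @ np.array([np.cos(eps), 1j * np.sin(eps)])  # unit incident polarization
        P[i, j] = abs(np.conj(E) @ S @ E)                    # copolarized response

P = P / P.max()   # normalize to the maximum, as polsignature does
print(P.shape)    # (91, 181)
```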
You can compute both the copolarization and cross polarization signatures using the polsignature
function. This function returns the absolute value of the scattered power (normalized by its maximum
value). The next example shows how to plot the polarization signatures for the RSCM matrix

S = [2i  0.5; 0.5  -i]

for all possible incident polarizations. The range of values of the ellipticity angle and tilt span the
entire possible range of polarizations.
Specify the scattering matrix, and specify the range of ellipticity angles and orientation (tilt) angles
that define the polarization states. These angles cover all possible incident polarization states.
rscmat = [1i*2,0.5;0.5,-1i];
el = [-45:45];
tilt = [-90:90];
polsignature(rscmat,'c',el,tilt)
polsignature(rscmat,'x',el,tilt)
• The polarization loss is computed from the projection (or dot product) of the transmitted field’s
electric field vector onto the receiver polarization vector.
• Loss occurs when there is a mismatch in direction of the two vectors, not in their magnitudes.
• The polarization loss factor describes the fraction of incident power that has the correct
polarization for reception.
Using the transmitter's spherical basis at the receiver's position, you can represent the incident
electric field, (E_iH, E_iV), by

E_i = E_iH e_H + E_iV e_V
You can represent the receiver’s polarization vector, (PH, PV), in the receiver’s local spherical basis by:
P = PH e′H + PV e′V
The next figure shows the construction of the transmitter and receiver spherical basis vectors.
The polarization loss factor is

ρ = |E_i · P|^2 / (|E_i|^2 |P|^2)
and varies between 0 and 1. Because the vectors are defined with respect to different coordinate
systems, they must be converted to the global coordinate system to form the projection. The toolbox
function polloss computes the polarization mismatch between an incident field and a polarized
antenna.
To achieve maximum output power from a receiving antenna, the matched antenna polarization
vector must be the complex conjugate of the incoming field’s polarization vector. As an example, if the
incoming field is RHCP, with polarization vector given by e_r = (1/√2)(e_x - ie_y), the optimum receiver
antenna polarization is LHCP. The introduction of the complex conjugate is needed because field
polarizations are described with respect to their direction of propagation, whereas the polarization of a
receive antenna is usually specified in terms of the direction of propagation towards the antenna. The
complex conjugate corrects for the opposite sense of polarization when receiving.
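The following NumPy sketch evaluates the polarization loss factor ρ for an incoming RHCP field against several receiver polarizations. Note that the dot product here is not conjugated, which is why the matched receiver polarization is the complex conjugate (LHCP) of the incident field:

```python
import numpy as np

def polloss_factor(Ei, P):
    """rho = |Ei . P|^2 / (|Ei|^2 |P|^2), using an ordinary (unconjugated)
    dot product, so the matched receiver polarization is conj(Ei)."""
    num = abs(np.sum(Ei * P))**2
    den = np.sum(abs(Ei)**2) * np.sum(abs(P)**2)
    return num / den

rhcp = np.array([1, -1j]) / np.sqrt(2)   # incoming RHCP field
lhcp = np.array([1,  1j]) / np.sqrt(2)   # LHCP receiver (the conjugate match)
lin = np.array([1, 0])                   # linearly polarized receiver

print(polloss_factor(rhcp, lhcp))  # 1.0 -> no loss
print(polloss_factor(rhcp, rhcp))  # 0.0 -> complete mismatch
print(polloss_factor(rhcp, lin))   # 0.5 -> 3 dB loss
```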
As an example, if the transmitting antenna transmits an RHCP field, the polarization loss factors for
various received antenna polarizations are
Radar Definition
Set up the radar operating parameters. The existing radar design meets the following specifications.
Set the pulse repetition interval, PRI, and pulse repetition frequency, PRF, based on the maximum
unambiguous range.
PRI = 2*max_range/c;
PRF = 1/PRI;
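With illustrative numbers (assumed values, not from the original example), the PRI/PRF computation looks like:

```python
c = 3e8             # propagation speed, m/s
max_range = 5e3     # maximum unambiguous range, m (assumed value)

PRI = 2 * max_range / c    # round-trip time to the maximum unambiguous range
PRF = 1 / PRI
print(PRI, PRF)            # ~3.33e-05 s, ~30 kHz
```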
Transmitted Signal
antenna = phased.ShortDipoleAntennaElement(...
'FrequencyRange',[5e9,10e9],'AxisDirection','Z');
Define a 31-by-31 Taylor tapered uniform rectangular array using the phased.URA System object.
Set the size of the array using the number of rows, numRows, and the number of columns, numCols.
The distance between elements, d, is slightly smaller than one-half the wavelength, lambda. Compute
the array taper, tw, using separate Taylor windows for the row and column directions. Obtain the
Taylor weights using the taylorwin function. Plot the 3-D array response using the array pattern
method.
numCols = 31;
numRows = 31;
lambda = c/fc;
d = 0.9*lambda/2; % Nominal spacing
wc = taylorwin(numCols);
wr = taylorwin(numRows);
tw = wr*wc';
array = phased.URA('Element',antenna,'Size',[numCols,numRows],...
'ElementSpacing',[d,d],'Taper',tw);
pattern(array,fc,[-180:180],[-90:90],'CoordinateSystem','polar','Type','powerdb',...
'Polarization','V');
Next, set the position and motion of the radar platform in the phased.Platform System object. The
radar is assumed to be stationary and positioned at the origin. Set the Velocity property to
[0,0,0] and the InitialPosition property to [0,0,0]. Set the InitialOrientationAxes
property to the identity matrix to align the radar platform coordinate axes with the global coordinate
system.
In radar, the signal propagates in the form of an electromagnetic wave. The signal is radiated and
collected by the antennas used in the radar system. Associate the array with a radiator System object,
phased.Radiator, and two collector System objects, phased.Collector. Set the
WeightsInputPort property of the radiator to true to enable dynamic steering of the transmitted
signal at each execution of the radiator. Creating the two collectors allows for collection of both
horizontal and vertical polarization components.
radiator = phased.Radiator('Sensor',array,'OperatingFrequency',fc,...
'PropagationSpeed',c,'CombineRadiatedSignals',true,...
'Polarization','Combined','WeightsInputPort',true);
collector1 = phased.Collector('Sensor',array,'OperatingFrequency',fc,...
'PropagationSpeed',c,'Wavefront','Plane','Polarization','Combined',...
'WeightsInputPort',false);
collector2 = phased.Collector('Sensor',array,'OperatingFrequency',fc,...
'PropagationSpeed',c,'Wavefront','Plane','Polarization','Combined',...
'WeightsInputPort',false);
Estimate the peak power needed in the phased.Transmitter System object to calculate the
desired radiated power levels. The transmitted peak power is the power required to achieve a
minimum-detection SNR, snr_min. You can determine the minimum SNR from the probability of
detection, pd, and the probability of false alarm, pfa, using the albersheim function. Then,
compute the peak power from the radar equation using the radareqpow function. Among the inputs
to this function are the overall signal gain, which is the sum of the transmitting element gain,
TransmitterGain and the array gain, AG. Another input is the maximum detection range,
rangegate. Finally, you need to supply a target cross-section value, tgt_rcs. A scalar radar cross
section is used in this code section as an approximation even though the full polarization computation
later uses a 2-by-2 radar cross section scattering matrix.
Estimate the total transmitted power to achieve a required detection SNR using all the pulses.
The SNR has contributions from the transmitting element gain as well as the array gain. Compute
first an estimate of the array gain, then add the array gain to the transmitter gain to get the peak
power which achieves the desired SNR.
• Use an approximate target cross section of 1.0 for the radar equation even though the analysis
calls for the full scattering matrix.
• Set the maximum range to be equal to the value of 'rangegate' since targets outside that range are
of no interest.
• Compute the array gain as 10*log10(number of elements).
• Assume each element has a gain of 20 dB.
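A hedged sketch of this power estimate, using the standard monostatic radar range equation in place of the toolbox albersheim and radareqpow functions; every numeric parameter below is an assumption for illustration only:

```python
import numpy as np

k_B = 1.380649e-23     # Boltzmann constant, J/K
T0 = 290.0             # reference noise temperature, K

# All numeric parameters below are illustrative assumptions
fc = 6.0e9             # operating frequency, Hz
lam = 3e8 / fc
rangegate = 5e3        # maximum range of interest, m
tgt_rcs = 1.0          # approximate scalar radar cross section, m^2
B = 1e6                # receiver noise bandwidth, Hz
snr_min_db = 13.0      # required detection SNR, dB (stand-in for albersheim)

elem_gain_db = 20.0                      # assumed element gain
array_gain_db = 10 * np.log10(31 * 31)   # array gain as 10*log10(number of elements)
G = 10**((elem_gain_db + array_gain_db) / 10)
snr = 10**(snr_min_db / 10)

# Monostatic radar range equation solved for peak transmit power
Pt = ((4 * np.pi)**3 * rangegate**4 * k_B * T0 * B * snr) / (G**2 * lam**2 * tgt_rcs)
print(Pt)              # required peak power, W
```

With roughly 50 dB of combined element and array gain, the required peak power at this short range is small, which is why phased arrays can trade transmitter power for aperture.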
Define Target
We want to simulate the pulse returns from a target that is rotating so that the scattering cross-
section matrix changes from pulse to pulse. Create a rotating target object and a moving target
platform. The rotating target is represented later as an angle-dependent scattering matrix. Rotation
is in degrees per second.
targetSpeed = 1000;
targetVec = [-1;1;0]/sqrt(2);
target = phased.RadarTarget('EnablePolarization',true,...
'Mode','Monostatic','ScatteringMatrixSource','Input port',...
'OperatingFrequency',fc);
targetPlatformAxes = [1 0 0;0 1 0;0 0 1];
targetRotRate = 45;
targetplatform = phased.Platform('InitialPosition',[3500.0; 0; 0],...
'Velocity', targetSpeed*targetVec);
Signal Propagation
Because the reflected signals are received by an array, use a beamformer pointing to the steering
direction to obtain the combined signal.
steeringvector = phased.SteeringVector('SensorArray',array,'PropagationSpeed',c,...
'IncludeElementResponse',false);
beamformer = phased.PhaseShiftBeamformer('SensorArray',array,...
'OperatingFrequency',fc,'PropagationSpeed',c,...
'DirectionSource','Input port');
channel = phased.FreeSpace('SampleRate',fs,...
'TwoWayPropagation',true,'OperatingFrequency',fc);
% Define a receiver with receiver noise
amplifier = phased.ReceiverPreamp('Gain',20,'LossFactor',0,'NoiseFigure',1,...
'ReferenceTemperature',290,'SampleRate',fs,'EnableInputPort',true,...
'PhaseNoiseInputPort',false,'SeedSource','Auto');
For such a large PRI and sampling rate, there will be too many samples per element. This will cause
problems with the collector which has 961 channels. To keep the number of samples manageable, set
a maximum range of 5 km. We know that the target is within this range.
This set of axes specifies the direction of the local coordinate axes with respect to the global
coordinate system. This is the orientation of the target.
Processing Loop
sig_max_V = zeros(1,numpulses);
sig_max_H = zeros(1,numpulses);
tm_V = zeros(1,numpulses);
tm_H = zeros(1,numpulses);
After all the System objects are created, loop over the number of pulses to create the reflected
signals.
maxsamp = ceil(tmax*fs);
fast_time_grid = [0:(maxsamp-1)]/fs;
rotangle = 0.0;
for m = 1:numpulses
x = waveform(); % Generate pulse
    % Capture only samples within the range gate
x = x(1:maxsamp);
[s, tx_status] = transmitter(x); % Create transmitted pulse
% Move the radar platform and target platform.
[radarPos,radarVel] = radarplatform(1/PRF);
[targetPos,targetVel] = targetplatform(1/PRF);
% Compute the known target angle
[targetRng,targetAng] = rangeangle(targetPos,...
radarPos,...
radarPlatformAxes);
% Compute the radar angle with respect to the target axes.
[radarRng,radarAng] = rangeangle(radarPos,...
targetPos,...
targetPlatformAxes);
% Calculate the steering vector designed to track the target
sv = steeringvector(fc,targetAng);
    % Radiate the polarized signal toward the target
tsig1 = radiator(s,targetAng,radarPlatformAxes,conj(sv));
% Compute the two-way propagation loss (4*pi*R/lambda)^2
tsig2 = channel(tsig1,radarPos,targetPos,radarVel,targetVel);
% Create a very simple model of a changing scattering matrix
scatteringMatrix = [cosd(rotangle),0.5*sind(rotangle);...
0.5*sind(rotangle),cosd(rotangle)];
rsig1 = target(tsig2,radarAng,targetPlatformAxes,scatteringMatrix); % Reflect off target
% Collect the vertical component of the radiation.
rsig3V = collector1(rsig1,targetAng,radarPlatformAxes);
% Collect the horizontal component of the radiation. This
% second collector is rotated around the x-axis to be more
% sensitive to horizontal polarization
rsig3H = collector2(rsig1,targetAng,rotx(90)*radarPlatformAxes);
% Add receiver noise to both sets of signals
rsig4V = amplifier(rsig3V,~(tx_status>0)); % Receive signal
rsig4H = amplifier(rsig3H,~(tx_status>0)); % Receive signal
% Beamform the signal
rsigV = beamformer(rsig4V,targetAng); % Beamforming
rsigH = beamformer(rsig4H,targetAng); % Beamforming
% Find the maximum returns for each pulse and store them in
% a vector. Store the pulse received time as well.
[sigmaxV,imaxV] = max(abs(rsigV));
[sigmaxH,imaxH] = max(abs(rsigH));
sig_max_V(m) = sigmaxV;
sig_max_H(m) = sigmaxH;
tm_V(m) = fast_time_grid(imaxV) + (m-1)*PRI;
tm_H(m) = fast_time_grid(imaxH) + (m-1)*PRI;
In radar and sonar applications, the interactions between fields and targets take place in the far-field
region, often called the Fraunhofer region. The far-field region is defined as the region for which
r ≫ L^2/λ
where L represents the largest dimension of the source. In the far-field region, the fields take a
special form: they can be written as the product of a function of direction (such as azimuth and
elevation angles) and a geometric fall-off function, 1/r. It is the angular function that is called the
radiation pattern, response pattern, or simply pattern.
Radiation patterns can be viewed as field patterns or as power patterns. The terms “field” or “power”
are often added to be more specific: contrast element field pattern versus element power pattern. The
radiation power pattern describes the radiant intensity of a field, U, as a function of direction.
Radiant intensity units are watts/steradian. Sometimes, radiant intensity is confused with power
density. Power density, I, is the energy passing through a unit area in a unit time. Units for power
density are watts/square meter. Unfortunately, in some disciplines, power density is sometimes called
intensity. This document always uses radiant intensity instead of intensity to avoid confusion. For a
point source, the radiant intensity is the power density multiplied by the square of the distance from
the source, U = r^2 I.
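A one-line check of the U = r^2·I relation for an isotropic point source, for which U = P/(4π) independent of distance:

```python
import numpy as np

P = 100.0   # total power radiated by an isotropic point source, W (assumed)
r = 10.0    # distance from the source, m

I = P / (4 * np.pi * r**2)   # power density, W/m^2
U = r**2 * I                 # radiant intensity, W/sr
print(U, P / (4 * np.pi))    # both ~7.96 W/sr: U is independent of r
```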
The element field response or element field pattern represents the angular distribution of the
electromagnetic field created by an antenna, E(θ,ϕ), or the scalar acoustic field, p(θ,ϕ), generated by
an acoustic transducer such as a speaker or hydrophone. Because the far-field electromagnetic field
consists of orthogonal horizontal and vertical components, (E_H(θ,ϕ), E_V(θ,ϕ)), there can be different
patterns for each component. Acoustic fields are scalar fields so there is only one pattern. The
general form of any field or field component is
A f(θ,ϕ) e^{-ikr}/r
Element and Array Radiation and Response Patterns
where A is a nominal field amplitude and f(θ,φ) is the normalized field pattern (normalized to unity).
Because the field patterns are evaluated at some reference distance from the source, the fields
returned by the element step method are represented simply as A f(θ,φ). You can display the nominal
element field pattern by invoking the element pattern method and then choosing the 'Type'
parameter value as 'efield' and setting the 'Normalize' parameter to false.
pattern(elem,'Normalize',false,'Type','efield');
You can view the normalized field pattern by setting the 'Normalize' parameter value to true. For
example, if EH(θ,φ) is the horizontal component of the complex electromagnetic field, the normalized
field pattern has the form |EH(θ,φ)/EH,max|.
pattern(elem,'Polarization','H','Normalize',true,'Type','efield');
The element power response (or element power radiation pattern) is defined as the angular
distribution of the radiant intensity in the far field, Urad(θ,φ). When the elements are used for
reception, the patterns are interpreted as the sensitivity of the element to radiation arriving from
direction (θ,φ) and the power pattern represents the output voltage power of the element as a
function of wave arrival direction.
Physically, the radiant intensity for the electromagnetic field produced by an antenna element is
U_rad(θ,ϕ) = (r^2/(2Z_0)) (|E_H|^2 + |E_V|^2)
where Z0 is the characteristic impedance of free space. The radiant intensity of an acoustic field is
U_rad(θ,ϕ) = (r^2/(2Z)) |p|^2
where Z is the characteristic impedance of the acoustic medium. For the fields produced by the
Phased Array System Toolbox element System objects, the radial dependence, the impedances, and
the field magnitudes are all collected in the nominal field amplitudes defined above. Then the radiant
intensity can generally be written
U_rad(θ,ϕ) = |A f(θ,ϕ)|^2
The radiant intensity pattern is the quantity returned by the element pattern method when the
'Normalize' parameter is set to false and the 'Type' parameter is set to 'power' (or
'powerdb' for decibels).
pattern(elem,'Normalize',false,'Type','power');
The normalized power pattern is defined as the radiant intensity divided by its maximum value
U_norm(θ,ϕ) = U_rad(θ,ϕ)/U_rad,max = |f(θ,ϕ)|^2
The pattern method returns a normalized power pattern when the 'Normalize' parameter is set
to true and the 'Type' parameter is set to 'power' (or 'powerdb' for decibels).
pattern(elem,'Normalize',true,'Type','power');
12 Antenna and Array Definitions
Element Directivity
Element directivity measures the capability of an antenna or acoustic transducer to radiate or receive
power preferentially in a particular direction. Sometimes it is referred to as directive gain. Directivity
is measured by comparing the transmitted radiant intensity in a given direction to the transmitted
radiant intensity of an isotropic radiator having the same total transmitted power. An isotropic
radiator radiates equal power in all directions. The radiant intensity of an isotropic radiator is just the
total transmitted power divided by the solid angle of a sphere, 4π,
U_rad^iso(θ,ϕ) = P_total/(4π)

The directivity is then

D(θ,ϕ) = U_rad(θ,ϕ)/U_rad^iso = 4π U_rad(θ,ϕ)/P_total
By this definition, the integral of the directivity over a sphere surrounding the element is exactly 4π.
Directivity is related to the effective beamwidth of an element. Start with an ideal antenna that has a
uniform radiation field over a small solid angle (its beamwidth), ΔΩ, in a particular direction, and zero
outside that angle. The directivity is
D(θ,ϕ) = 4π U_rad(θ,ϕ)/P_total = 4π/ΔΩ
The radiant intensity can be expressed in terms of the directivity and the total power
U_rad(θ,ϕ) = (1/(4π)) D(θ,ϕ) P_total
As an example, the directivity of the electric field of a z-oriented short-dipole antenna element is
D(θ,ϕ) = (3/2) cos^2 θ
with a peak value of 1.5. Often, the largest value of D(θ,φ) is specified as an antenna operating
parameter. The direction in which D(θ,φ) is largest is the direction of maximum power radiation. This
direction is often called the boresight direction. In some of the literature, the maximum value itself is
called the directivity, reserving the phrase directive gain for what is called here directivity. For the
short-dipole antenna, the maximum value of directivity occurs at θ = 0, independent of φ, and attains
a value of 3/2. The concept of directivity applies to receiving antennas as well. It describes the output
power as a function of the arrival direction of a plane wave impinging upon the antenna. By
reciprocity, the directivity of a receiving antenna is the same as the directivity when used as a
transmitting antenna. A quantity closely related to directivity is element gain. The definition of
directivity assumes that all the power fed to the element is radiated to space. In reality, system losses
reduce the radiant intensity by some factor, the element efficiency, η. The term Ptotal becomes the
power supplied to the antenna and Prad becomes the power radiated into space. Then, Prad = ηPtotal.
The element gain is
G(θ,ϕ) = 4π U_rad(θ,ϕ)/P_total = 4πη U_rad(θ,ϕ)/P_rad = ηD(θ,ϕ)
and represents the power radiated away from the element compared to the total power supplied to
the element.
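The statement that directivity integrates to exactly 4π over the sphere can be verified numerically for the short-dipole pattern D(θ,ϕ) = 1.5 cos^2 θ, with θ measured as elevation:

```python
import numpy as np

n = 720
d_el = np.pi / n
el = -np.pi / 2 + (np.arange(n) + 0.5) * d_el   # midpoint elevation grid

D = 1.5 * np.cos(el)**2                 # short-dipole directivity (azimuth-independent)
# Solid-angle element is cos(el) d(el) d(az); the azimuth integral gives 2*pi
total = np.sum(D * np.cos(el)) * d_el * 2 * np.pi
print(total, 4 * np.pi)                 # both ~12.566
```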
Using the element pattern method, you can plot the directivity of an element by setting the 'Type'
parameter to 'directivity',
pattern(elem,'Type','directivity');
When individual antenna elements are aggregated into arrays of elements, new response/radiation
patterns are created which depend upon both the element patterns and the geometry of the array.
These patterns are called beampatterns to reflect the fact that the pattern can be constructed to have
a narrow angular distribution, that is, a beam. This term is used for an array in transmitting or
receiving modes. Most often, but not always, the array consists of identical antennas. The identical
antenna case is interesting because it lets us partition the radiation pattern into two components: one
component describes the element radiation pattern and the second describes the array radiation
pattern.
Just as an array of transmitting elements has a radiation pattern, an array of receiving elements has a
response pattern which describes how the output voltage of the array changes with the direction of
arrival of a plane incident wave. By reciprocity, the response pattern is identical to the radiation
pattern.
For transmitting arrays, the voltage driving the elements can be phase-adjusted to allow the
maximum radiant intensity to be transmitted in a particular direction. For receiving arrays, the
arriving signals can be phase-adjusted to maximize the sensitivity in a particular direction.
Start with a simple model of the radiation field produced by a single antenna which is given by
y(θ,ϕ,r) = A f(θ,ϕ) e^{-ikr}/r
where A is the field amplitude and f(θ,ϕ) is the normalized element field pattern. This field can
represent any of the components of the electric field, a scalar field, or an acoustic field. For an array
of identical elements, the output of the array is the weighted sum of the individual elements, using
the complex weights, wm
z(θ,ϕ,r) = A Σ_{m=0}^{M-1} w_m* f(θ,ϕ) e^{-ikr_m}/r_m
where rm is the distance from the mth element source point to the field point. In the far-field region,
this equation takes the form
z(θ,ϕ,r) = A f(θ,ϕ) (e^{-ikr}/r) Σ_{m=0}^{M-1} w_m* e^{ik u·x_m}
where x_m are the vector positions of the array elements with respect to the array origin, and u is the
unit vector from the array origin to the field point. This equation can be written compactly in the form
z(θ,ϕ,r) = A f(θ,ϕ) (e^{-ikr}/r) w^H s
The term wHs is called the array factor, Farray(θ,φ). The vector s is the steering vector (or array
manifold vector) for directions of propagation for transmit arrays or directions of arrival for receiving
arrays
s(θ,ϕ) = [..., e^{ik u·x_m}, ...]^T
The total array pattern consists of an amplitude term, an element pattern, f(θ,ϕ), and an array factor,
F_array(θ,ϕ). The total angular behavior of the array pattern, B(θ,ϕ), is called the beampattern of the
array

B(θ,ϕ) = f(θ,ϕ) F_array(θ,ϕ)

When evaluated at the reference distance, the array field pattern has the form

A f(θ,ϕ) F_array(θ,ϕ)
The pattern method, when the 'Normalize' parameter is set to false and the 'Type' parameter
is set to 'efield', returns the magnitude of the array field pattern at the reference distance.
pattern(array,'Normalize',false,'Type','efield');
When the 'Normalize' parameter is set to true, the pattern method returns a pattern normalized
to unity.
pattern(array,'Normalize',true,'Type','efield');
The pattern method, when the 'Normalize' parameter is set to false and the 'Type' parameter
is set to 'power' or 'powerdb', returns the array power pattern at the reference distance.
pattern(array,'Normalize',false,'Type','power');
When the 'Normalize' parameter is set to true, the pattern method returns the power pattern
normalized to unity.
pattern(array,'Normalize',true,'Type','power');
For the conventional beamformer, the weights are chosen to maximize the power transmitted towards
a particular direction, or in the case of receiving arrays, to maximize the response of the array for a
particular arrival direction. If u0 is the desired pointing direction, then the weights which maximize
the power and response in this direction have the general form
w_m = |w_m| e^{ik u_0·x_m}
Array Directivity
Array directivity is defined the same way as element directivity: the radiant intensity in a specific
direction divided by the isotropic radiant intensity. The isotropic radiant intensity is the array total
radiated power divided by 4π. In terms of the array weights and steering vectors, the directivity can
be written as
D(θ,ϕ) = 4π |A f(θ,ϕ) w^H s|^2 / P_total
where Ptotal is the total radiated power from the array. In a discrete implementation, the total radiated
power can be computed by summing radiant intensity values over a uniform grid of angles that
covers the full sphere surrounding the array
P_total = (2π^2/(MN)) Σ_{m=0}^{M-1} Σ_{n=0}^{N-1} |A f(θ_m,ϕ_n) w^H s(θ_m,ϕ_n)|^2 cos θ_m
where M is the number of elevation grid points and N is the number of azimuth grid points.
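The discrete sum above can be sketched in NumPy for a 10-element half-wavelength ULA of isotropic elements (A = 1, f = 1) with uniform weights; for half-wavelength spacing the peak directivity of such an array is N:

```python
import numpy as np

N = 10                         # number of elements
lam = 1.0
d = lam / 2                    # half-wavelength spacing
k = 2 * np.pi / lam
xm = np.arange(N) * d          # element positions along the x-axis
w = np.ones(N)                 # uniform weights (broadside conventional beamformer)

M, Ng = 180, 360               # elevation-by-azimuth grid
d_el, d_az = np.pi / M, 2 * np.pi / Ng
el = -np.pi / 2 + (np.arange(M) + 0.5) * d_el
az = -np.pi + (np.arange(Ng) + 0.5) * d_az
EL, AZ = np.meshgrid(el, az, indexing='ij')

# |w^H s|^2 over the grid; u . x_m = x_m*cos(el)*cos(az) for a line array on x
phase = k * (np.cos(EL) * np.cos(AZ))[..., None] * xm
AF = np.abs(np.exp(1j * phase) @ np.conj(w))**2

Ptotal = np.sum(AF * np.cos(EL)) * d_el * d_az                 # discrete total-power sum
D0 = 4 * np.pi * np.abs(np.conj(w) @ np.ones(N))**2 / Ptotal   # broadside directivity
print(D0)    # ~10, i.e. ~10 dB for a 10-element half-wavelength ULA
```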
Because the radiant intensity is proportional to the beampattern, B(θ,φ), the directivity can also be
written in terms of the beampattern
D(θ,ϕ) = 4π |B(θ,ϕ)|^2 / ∫ |B(θ,ϕ)|^2 cos θ dθ dϕ
You can plot the directivity of an array by setting the 'Type' parameter of the pattern methods to
'directivity',
pattern(array,'Type','directivity');
Array Gain
In the Phased Array System Toolbox, array gain is defined to be the array SNR gain. Array gain
measures the improvement in SNR of a receiving array over the SNR for a single element. Because
an array is a spatial filter, the array SNR depends upon the spatial properties of the noise field. When
the noise is spatially isotropic, the array gain takes a simple form
G = SNR_array/SNR_element = |w^H s|^2 / (w^H w)
In addition, for an array with uniform weights, the array gain for an N-element array has a maximum
value at boresight of N (or 10 log10 N in dB).
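A quick check of the array gain formula for a 16-element ULA with uniform conventional weights, where the gain at the look direction equals the number of elements:

```python
import numpy as np

N = 16                         # number of elements
kd = np.pi                     # k*d for half-wavelength spacing
m = np.arange(N)

u0 = 0.0                       # broadside look direction (direction cosine)
s = np.exp(1j * kd * m * u0)   # steering vector toward the look direction
w = s.copy()                   # conventional weights match the steering vector

G = np.abs(np.conj(w) @ s)**2 / np.real(np.conj(w) @ w)
print(G, 10 * np.log10(G))     # 16.0, ~12.04 dB
```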
Assume the operating frequency of the array is 10 kHz. All elements are omnidirectional microphone
elements. Steer the array in the direction 20 degrees in azimuth and 30 degrees in elevation. The
speed of sound in air is 344.21 m/s at 21 deg C.
cair = 344.21;
f = 10.0e3;
lambda = cair/f;
microphone = phased.OmnidirectionalMicrophoneElement(...
'FrequencyRange',[20 20000]);
array = phased.URA('Element',microphone,'Size',[11,9],...
'ElementSpacing',0.5*lambda*[1,1]);
plotGratingLobeDiagram(array,f,[20;30],cair);
Plot the grating lobes. The main lobe of the array is indicated by a filled black circle. The grating
lobes in visible and nonvisible regions are indicated by unfilled black circles. The visible region is the
region in u-v coordinates for which u^2 + v^2 ≤ 1. The visible region is shown as a unit circle centered
at the origin. Because the array spacing is less than one-half wavelength, there are no grating lobes
in the visible region of space. There are an infinite number of grating lobes in the nonvisible regions,
but only those in the range [-3,3] are shown.
Grating Lobe Diagram for Microphone URA
The grating-lobe free region, shown in green, is the range of directions of the main lobe for which
there are no grating lobes in the visible region. In this case, it coincides with the visible region.
The white areas of the diagram indicate a region where no grating lobes are possible.
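For a one-dimensional cut, grating lobes fall at u = u_0 + mλ/d in direction-cosine space (a standard relation consistent with the diagram). With the half-wavelength spacing of this example, only the main lobe lands in the visible region |u| ≤ 1:

```python
import numpy as np

cair = 344.21                      # speed of sound, m/s
f = 10.0e3                         # operating frequency, Hz
lam = cair / f
d = 0.5 * lam                      # element spacing from the example

u0 = np.sin(np.deg2rad(20))        # steered main-lobe position (azimuth cut)
orders = np.arange(-3, 4)          # grating-lobe orders shown in the diagram
ug = u0 + orders * lam / d         # lobe positions in u-space

visible = ug[np.abs(ug) <= 1]
print(ug)
print(visible)                     # only the m = 0 main lobe is visible
```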
Sonar Equation
The sonar equation is used in underwater signal processing to relate received signal power to
transmitted signal power for one-way or two-way sound propagation. The equation computes the
received signal-to-noise ratio (SNR) from the transmitted signal level, taking into account
transmission loss, noise level, sensor directivity, and target strength. The sonar equation serves the
same purpose in sonar as the radar equation does in radar. The sonar equation has different forms for
passive sonar and active sonar.
The source level (SL) is the ratio of the transmitted intensity from the source to a reference intensity,
converted to dB:
SL = 10 log10 (I_s/I_ref)
where Is is the intensity of the transmitted signal measured at 1 m distance from the source. The
reference intensity, Iref, is the intensity of a sound wave having a root mean square (rms) pressure of
1 μPa. Source level is sometimes written in dB// 1 μPa, but actually is referenced to the intensity of a
1 μPa signal. The relation between intensity and pressure is
I = p_rms^2/(ρc)
where ρ is the density of seawater (approximately 1000 kg/m^3) and c is the speed of sound
(approximately 1500 m/s). 1 μPa is equivalent to an intensity of I_ref = 6.667 × 10^-19 W/m^2.
Sometimes, it is useful to compute the source level from the transmitted power, P. Assuming a
nondirectional (isotropic) source, the intensity at one meter from the source is
I = P/(4π)

so that

SL = 10 log10 (I/I_ref) = 10 log10 (P/(4πI_ref)) = 10 log10 P - 10 log10 (4πI_ref) = 10 log10 P + 170.8
When source level is defined at one yard instead of one meter, the final constant in this equation is
171.5.
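The constants quoted above follow directly from the definitions; this sketch recomputes I_ref and the 170.8 dB constant:

```python
import numpy as np

rho = 1000.0     # seawater density, kg/m^3 (approximate)
c = 1500.0       # sound speed, m/s (approximate)
p_ref = 1e-6     # 1 uPa reference rms pressure, Pa

I_ref = p_ref**2 / (rho * c)
print(I_ref)     # ~6.667e-19 W/m^2

const = -10 * np.log10(4 * np.pi * I_ref)
print(const)     # ~170.8, the constant in SL = 10*log10(P) + 170.8
```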
For a directional source, the source level becomes

SL = 10 log10 (I/I_ref) = 10 log10 P + 170.8 + DI_src
where DIsrc is the directivity of the source. Source directivity is not explicitly included in the sonar
equation.
The sonar equation includes the directivity index of the receiver (DI). Directivity is the ratio of the
total noise power at the array to the noise received by the array along its main response axis.
Directivity improves the signal-to-noise ratio by reducing the total noise. See “Element and Array
Radiation and Response Patterns” on page 12-2 for discussions of directivity.
Transmission loss is the attenuation of sound intensity as the sound propagates through the
underwater channel. Transmission loss (TL) is defined as the ratio of sound intensity at 1 m from a
source to the sound intensity at distance R.
TL = 10 log10 (I_s/I(R))
There are two major contributions to transmission loss. The larger contribution is geometrical
spreading of the sound wavefront. The second contribution is absorption of the sound as it
propagates. There are several absorption mechanisms.
In an infinite medium, the wavefront expands spherically with distance, and attenuation follows a
1/R^2 law, where R is the propagation distance. However, the ocean channel has a surface and a
bottom. Because of this, the wavefronts expand cylindrically when they are far from the source and
follow a 1/R law. Near the source, the wavefronts still expand spherically. There must be a transition
region where the spreading changes from spherical to cylindrical. In Phased Array System Toolbox
sonar models, the transition region as a single range and ensures that the transmission loss is
continuous at that range. Authors define the transition range differently. Here, the transition range,
Rtrans, is one-half the depth, D, of the channel. The geometric transmission loss for ranges less than
the transition range is
TLgeom = 20log10R
For ranges greater than the transition depth, the geometric transmission loss is
In Phased Array System Toolbox, the transition range is one-half the channel depth, H/2.
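The two spreading regimes can be sketched as follows. This is a simplified illustration with an arbitrary channel depth, not the toolbox implementation:

```matlab
% Geometric spreading loss with a spherical-to-cylindrical transition
D = 200;                 % channel depth (m), illustrative
Rtrans = D/2;            % transition range, one-half the channel depth
R = 1:10:10e3;           % ranges (m)
TLgeom = zeros(size(R));
near = R <= Rtrans;
TLgeom(near)  = 20*log10(R(near));                            % spherical
TLgeom(~near) = 20*log10(Rtrans) + 10*log10(R(~near)/Rtrans); % cylindrical
plot(R,TLgeom); xlabel('Range (m)'); ylabel('TL_{geom} (dB)');
```

The two branches agree at R = Rtrans, so the loss curve is continuous there.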
The absorption loss model has three components: viscous absorption, the boric acid relaxation
process, and the magnesium sulfate relaxation process. All absorption components are modeled by
linear dependence on range, αR.
Viscous absorption describes the loss of intensity due to molecular motion being converted to heat.
Viscous absorption applies primarily to higher frequencies. Following [1], the viscous absorption
coefficient is a function of the frequency in kHz, f, the temperature in Celsius, T, and the depth in km, D:

αvis = 0.00049 f² e^(−(T/27 + D/17))

in dB/km. This is the dominant absorption mechanism above 1 MHz. Viscous absorption decreases
with increasing temperature and depth.
The second mechanism for absorption is the relaxation process of boric acid. Absorption depends
upon the frequency in kHz, f, the salinity in parts per thousand (ppt), S, and the temperature in Celsius, T.
The absorption coefficient is

αB = 0.106 (f1 f²)/(f1² + f²) e^((pH − 8)/0.56)

f1 = 0.78 √(S/35) e^(T/26)

in dB/km. f1 is the relaxation frequency of boric acid and is about 1.1 kHz at T = 10 °C and S = 35
ppt.
The third mechanism is the relaxation process of magnesium sulfate. Here, the absorption coefficient
is

αM = 0.52 (1 + T/43)(S/35) (f2 f²)/(f2² + f²) e^(−D/6)

f2 = 42 e^(T/17)

in dB/km, where D is the depth in km. f2 is the relaxation frequency of magnesium sulfate and is about 75.6 kHz at T = 10 °C and
S = 35 ppt.
The total absorption loss combines the three mechanisms:

TLabs = (αB + αM + αvis) R

where R is the range in km. In Phased Array System Toolbox, all absorption model parameters are
fixed at T = 10, S = 35, and pH = 8. The model is implemented in range2tl. Because TL is a
monotonically increasing function of R, you can use the Newton-Raphson method to solve for R in
terms of TL. This calculation is performed in tl2range.
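The absorption terms can be evaluated directly. This sketch uses the fixed parameter values mentioned in the text and the simplified formulas of [1]; the frequency is illustrative:

```matlab
% Total absorption coefficient (dB/km) at the fixed conditions in the text
T = 10; S = 35; pH = 8; D = 0;       % Celsius, ppt, pH, depth in km
f = 10;                              % frequency in kHz (illustrative)
f1 = 0.78*sqrt(S/35)*exp(T/26);      % boric acid relaxation freq (kHz)
f2 = 42*exp(T/17);                   % magnesium sulfate relaxation freq (kHz)
aB = 0.106*(f1*f^2)/(f1^2 + f^2)*exp((pH-8)/0.56);
aM = 0.52*(1 + T/43)*(S/35)*(f2*f^2)/(f2^2 + f^2)*exp(-D/6);
aV = 0.00049*f^2*exp(-(T/27 + D/17));
alpha = aB + aM + aV                 % roughly 1 dB/km near 10 kHz
```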
Noise level (NL) is the ratio of the noise intensity at the receiver to the same reference intensity used
for source level.

For an active sonar, the received signal-to-noise ratio is

SNR = SL − 2TL − (NL − DI) + TS

where 2TL is the two-way transmission loss (in dB) and TS is the target strength (in dB). The
transmission loss is calculated by computing the outbound and inbound transmission losses (in dB)
and adding them. In this toolbox, two-way transmission loss is twice the one-way transmission loss.
Target strength is the sonar analog of radar cross section. Target strength is the ratio of the intensity
of a reflected signal at 1 m from a target to the incident intensity, converted to dB. Using the
conservation of energy or, equivalently, power, the incident power on a target equals the reflected
power. The incident power is the incident signal intensity multiplied by an effective cross-sectional
area, σ. The reflected power is the reflected signal intensity multiplied by the area of a sphere of
radius R centered on the target. The ratio of the reflected power to the incident power is
Iinc σ = Irefl 4πR²

Irefl/Iinc = σ/(4πR²)

The reflected intensity is evaluated on a sphere of 1 m radius. The target strength coefficient, σ, is
referenced to an area of 1 m². Then

TS = 10log10 (Irefl(1 m)/Iinc) = 10log10 (σ/4π)
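Putting the pieces of the sonar equation together, a back-of-the-envelope active-sonar budget might look like this. All quantity values are made up for illustration:

```matlab
% Illustrative active-sonar budget (all terms in dB; values are made up)
SL = 220;     % source level
TL = 70;      % one-way transmission loss
TS = 10;      % target strength
NL = 73;      % noise level
DI = 20;      % receiver directivity index
SNR = SL - 2*TL - (NL - DI) + TS    % received signal-to-noise ratio, 37 dB
```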
References
[1] Ainslie, M. A., and J. G. McColm. "A simplified formula for viscous and chemical absorption in sea
water." Journal of the Acoustical Society of America. Vol. 103, Number 3, 1998, pp.
1671–1672.
[2] Urick, Robert J. Principles of Underwater Sound, 3rd ed. Los Altos, CA: Peninsula Publishing,
1983.
f′ = f0 (c − vr)/c

If the receiver is moving toward the source, vr is negative and the frequency increases. If the receiver
is moving away from the source, vr is positive and the frequency decreases. A similar situation occurs
when the source is moving and the receiver is stationary. Then the frequency at the receiver is
f′ = f0 c/(c − vs)
The frequency increases when vs is positive as the source moves toward the receiver. When vs is
negative, the frequency decreases. Both effects can be combined into
f′ = f0 ((c − vr)/c)(c/(c − vs)) = f0 (c − vr)/(c − vs) = f0 (1 − vr/c)/(1 − vs/c).
There is a difference in the Doppler formulas for sound versus electromagnetic waves. For sound, the
Doppler shift depends on both the source and receiver velocities. For electromagnetic waves, the
Doppler shift depends on the difference between the source and receiver velocities.
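The combined formula is easy to check numerically. The speeds chosen here are illustrative:

```matlab
% Doppler shift for sound with both source and receiver moving
c  = 1500;            % speed of sound in water (m/s)
f0 = 10e3;            % transmitted frequency (Hz)
vr = -10;             % receiver closing on the source (m/s), so vr < 0
vs = 5;               % source moving toward the receiver (m/s), so vs > 0
fprime = f0*(c - vr)/(c - vs)   % shifted frequency, slightly above f0
```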
References
[1] Halliday, David, R. Resnick, and J. Walker, Fundamentals of Physics, 10th ed. Wiley, New York,
2013.
14
Code Generation
In this section...
“Code Generation Use and Benefits” on page 14-2
“Limitations Specific to Phased Array System Toolbox” on page 14-3
“General Limitations” on page 14-5
“Limitations for System Objects that Require Dynamic Memory Allocation” on page 14-9
In general, the code you generate using the toolbox is portable ANSI® C code. In order to use code
generation, you need a MATLAB Coder license. Using Phased Array System Toolbox software requires
licenses for both the DSP System Toolbox™ and the Signal Processing Toolbox™. See the “Get
Started with MATLAB Coder” (MATLAB Coder) page for more information.
Creating a MATLAB Coder MEX-file can lead to substantial acceleration of your MATLAB algorithms.
It is also a convenient first step in a workflow that ultimately leads to completely standalone code.
When you create a MEX-file, it runs in the MATLAB environment. Its inputs and outputs are available
for inspection just like any other MATLAB variable. You can use MATLAB visualization and other tools
for verification and analysis.
Within your code, you can run specific commands either as generated C code or by using the
MATLAB engine. In cases where an isolated command does not yet have code generation support,
you can use the coder.extrinsic command to embed the command in your code. This means that
the generated code reenters the MATLAB environment when it needs to run that particular
command. This is also useful if you wish to embed certain commands that cannot generate code (such
as plotting functions).
The simplest way to generate MEX-files from your MATLAB code is by using the codegen function at
the command line. Often, generating a MEX-file involves nothing more than invoking codegen
on one of your existing functions. For example, if you have an existing function,
myfunction.m, you can type these commands at the command line to compile and run the MEX
function. codegen adds a platform-specific extension to this name. In this case, the "mex" suffix is
added.
codegen myfunction.m
myfunction_mex;
You can generate standalone executables that run independently of the MATLAB environment. You
can do this by creating a MATLAB Coder project inside the MATLAB Coder Integrated Development
Environment (IDE). Alternatively, you can issue the codegen command in the command line
environment with appropriate configuration parameters. To create a standalone executable, you must
write your own main.c or main.cpp function. See “Generating Standalone C/C++ Executables from
MATLAB Code” (MATLAB Coder) for more information.
Before using codegen to compile your code, you must set up your C/C++ compiler. For 32-bit
Windows platforms, MathWorks® supplies a default compiler with MATLAB. If your installation does
not include a default compiler, you can supply your own compiler. For the current list of supported
compilers, see Supported and Compatible Compilers on the MathWorks Web site. Install a compiler
that is suitable for your platform. Then, read “Setting Up the C or C++ Compiler” (MATLAB Coder).
After installation, at the MATLAB command prompt, run mex -setup. You can then use the codegen
function to compile your code.
Almost all Phased Array System Toolbox functions and System objects are supported for code
generation. For a list of supported functions and System objects, see “Functions and System Objects
Supported for C/C++ Code Generation” on page 14-14.
• When you employ antennas and arrays that produce polarized fields, the EnablePolarization
parameter for these System objects must be set to true:
• phased.Collector
• phased.Radiator
• phased.WidebandCollector
• phased.WidebandRadiator
• phased.RadarTarget
• phased.BackscatterRadarTarget
• phased.WidebandBackscatterRadarTarget
• phased.ArrayResponse
• phased.SteeringVector
This requirement differs from regular MATLAB usage where you can set EnablePolarization
property to false even when you use a polarization-enabled antenna. For example, this code uses
a polarized antenna, which requires that EnablePolarization property of the
phased.Radiator System object be set to true.
function [y] = codegen_radiator()
sSD = phased.ShortDipoleAntennaElement(...
'FrequencyRange',[100e6,600e6],'AxisDirection','Y');
c = physconst('LightSpeed');
fc = 200e6;
lambda = c/fc;
d = lambda/2;
sURA = phased.URA('Element',sSD,...
'Size',[3,3],...
'ElementSpacing',[d,d]);
sRad = phased.Radiator('Sensor',sURA,...
'OperatingFrequency',150e6,...
'CombineRadiatedSignals',true,...
'EnablePolarization',true);
x = [1;2;1];
radiatingAngle = [10;0]; % One angle for one antenna
y = step(sRad,x,radiatingAngle,eye(3,3));
• Visualization methods for Phased Array System Toolbox System objects are not supported. These
methods are pattern, patternAzimuth, patternElevation, plot, plotResponse, and
viewArray.
• When a System object contains another System object as a property value, you must set the
contained System object in the constructor. You cannot use Object Property notation to set the
property. For example, constructing the array with its element, as in
phased.ULA('Element',sElem), is supported, while assigning the property afterward, as in
sArray.Element = sElem, is not.
• Code generation of Phased Array System Toolbox arrays that contain Antenna Toolbox antennas is
not supported.
• A list of the limitations on Phased Array System Toolbox functions and System objects is presented
here:
General Limitations
Code Generation has some general limitations not specifically related to the Phased Array System
Toolbox software. For a more complete discussion, see “System Objects in MATLAB Code Generation”
(MATLAB Coder).
• The data type and complexity (i.e., real or complex) of any input argument to a function or System
object must always remain the same.
• You cannot pass a System object to any method or function that you made extrinsic using
coder.extrinsic.
• You cannot load a MAT-file using coder.load when it contains a System object. For example, if
you construct a System object in the MATLAB environment and save it to a MAT-file
sSD = phased.ShortDipoleAntennaElement(...
'FrequencyRange',[0.9e8,2e9],'AxisDirection','Y');
save x.mat sSD;
clear sSD;
then you cannot load the System object in your compiled MEX-file:
function codegen_load1()
W = coder.load('x.mat');
sSD = W.sSD;
The compilation

codegen codegen_load1

will produce the error message: 'Found unsupported class for variable using
function 'coder.load'. MATLAB class 'phased.ShortDipoleAntennaElement'
found at 'W.sSD' is unsupported.'
To avoid this problem, you can save the object's properties to a MAT-file, then use coder.load to
load the object properties and re-create the object. For example, create and save a System
object's properties in the MATLAB environment
sSD = phased.ShortDipoleAntennaElement(...
'FrequencyRange',[0.9e8,2e9],'AxisDirection','Y');
FrequencyRange = sSD.FrequencyRange;
AxisDirection = sSD.AxisDirection;
save x.mat FrequencyRange AxisDirection;
Then, write a function codegen_load2 to load the properties and create a System object.
function codegen_load2()
W = coder.load('x.mat');
sSD = phased.ShortDipoleAntennaElement(...
'FrequencyRange',W.FrequencyRange,...
'AxisDirection',W.AxisDirection);
Then, issue the commands to create and execute the MEX-file, codegen_load2_mex.
codegen codegen_load2;
codegen_load2_mex
• System object properties are either tunable or nontunable. Unless otherwise specified, System
object properties are nontunable. Nontunable properties must be constant. A constant is a value
that can be evaluated at compile-time. You can change tunable properties even if the object is
locked. Refer to the object's reference page to determine whether an individual property is
tunable or not. If you try to set a nontunable System object property and the compiler determines
that it is not constant, you will get an error. For example, the phased.URA System object has a
nontunable property, ElementSpacing, which sets the distance between elements. You may want
to create an array that is tuned to a frequency. You cannot pass in the frequency as an input
argument because the frequency must be a constant.
fc = 200e6;
codegen codegen_const1 -args {fc}
the compiler responds that the value of the 'ElementSpacing' property, d, is not constant and
generates the error message: "Failed to compute constant value for nontunable
codegen codegen_const2
• You can assign a nontunable System object property value only once before a step method is
executed. This requirement differs from MATLAB usage where you can initialize these properties
multiple times before the step method is executed.
function codegen_property
sSD = phased.ShortDipoleAntennaElement(...
'FrequencyRange',[0.9e8,2e9],'AxisDirection','Y');
sURA = phased.URA('Element',sSD,...
'Size',[3,3],...
'ElementSpacing',[0.15,0.15]);
sURA.Size = [4,4];
codegen codegen_property
the following error message is produced: "A nontunable property may only be assigned
once."
• In certain cases, the compiler cannot determine the values of nontunable properties at compile
time, or the code may not even compile. Consider the following example that reads in the
x,y,z-coordinates of a 5-element array from a file and then creates a conformal array System object.
The text file, elempos.txt, contains the element coordinates
The file collectWave.m reads the element coordinates and creates the object.
function y = collectWave(angle)
elPos = calcElPos;
cArr = phased.ConformalArray('ElementPosition',elPos);
y = collectPlaneWave(cArr,randn(4,2),angle,1e8);
end
Attempting to compile
produces the error "Permissions 'r' and 'r+' are not supported".
The following example is a work-around that uses coder.extrinsic and coder.const to ensure
that the value of the nontunable property, 'ElementPosition', is a compile-time constant. The
function in the file collectWave1.m creates the object using the calcElPos function. This
function runs inside the MATLAB interpreter at compile time.
function y = collectWave1(angle)
coder.extrinsic('calcElPos')
elPos = coder.const(calcElPos);
cArr = phased.ConformalArray('ElementPosition',elPos);
y = collectPlaneWave(cArr,randn(4,2),angle,1e8);
end
The file calcElPos.m loads the element positions from the text file
Only the collectWave1.m file is compiled with codegen. Compiling and running
will succeed.
An alternate work-around uses coder.load to ensure that the value of the nontunable property
'ElementPosition' is a compile-time constant. In the MATLAB environment, run calcElPos2 to
save the array coordinates contained in elempos.txt to a MAT-file. Then, load the contents of the
MAT-file within the compiled code.
function calcElPos2
fid = fopen('elempos.txt');
el = textscan(fid, '%f');
fclose(fid);
elPos = reshape(el{1},[],3).';
save('positions', 'elPos');
end
The file collectWave2.m loads the coordinate positions and creates the conformal array object
function y = collectWave2(angle)
var = coder.load('positions');
cArr = phased.ConformalArray('ElementPosition',var.elPos);
y = collectPlaneWave(cArr,randn(4,2),angle,1e8);
end
Only the collectWave2.m file is compiled with codegen. Compiling and running
collectWave2.m
will succeed. This second approach is more general than the first since a MAT-file can contain any
variables, except System objects.
• The System object clone method is not supported.
Run codegen at the command line to generate the MEX function, EstimateDOA_mex, and then run
the MEX function:

codegen EstimateDOA.m
EstimateDOA_mex
10.0036 45.0030
The program contains a fixed value for the noise variance. If you wanted to reuse the same code for
different noise levels, you can pass the noise variance as an argument into the function. This is done
in the function EstimateDOA1.m, shown here, which has the input argument sigma.
function [az] = EstimateDOA1(sigma)
% Example:
% Estimate the DOAs of two signals received by a standard
Generate MEX Function to Estimate Directions of Arrival
Run codegen at the command line to generate the MEX function, EstimateDOA1_mex, using the
-args option to specify the type of the input argument. Then run the MEX function with several
different input parameters:
Increasing the value of sigma degrades the estimates of the azimuth angles.
function plot_ULA_response
azangles = [-90:90];
elangles = zeros(size(azangles));
fc = 100e9;
c = physconst('LightSpeed');
N = size(azangles,2);
lambda = c/fc;
d = 0.4*lambda;
numelements = 11;
resp = zeros(1,N);
sIso = phased.IsotropicAntennaElement(...
'FrequencyRange',[1,200]*1e9,...
'BackBaffled',false);
sULA = phased.ULA('Element',sIso,...
'NumElements',numelements,...
'ElementSpacing',d,...
'Taper',taylorwin(numelements).');
for n = 1:N
x = get_ULA_response(sULA,fc,azangles(n),elangles(n));
resp(n) = abs(x);
end
plot(azangles,20*log10(resp));
title('ULA Response');
xlabel('Angle (deg)');
ylabel('Response (dB)');
grid;
end
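The helper get_ULA_response is not listed in this chunk. A minimal sketch consistent with the call above, and with the persistent-object pattern named in the section that follows, might look like this. The function body is an assumption; only the name and signature come from the call in plot_ULA_response:

```matlab
function resp = get_ULA_response(sULA,fc,az,el)
% Sketch of a helper that evaluates the array response at one angle.
% The persistent System object is constructed on the first call only,
% a common pattern in functions intended for code generation.
persistent sAR
if isempty(sAR)
    sAR = phased.ArrayResponse('SensorArray',sULA);
end
resp = sAR(fc,[az;el]);   % complex response at the given azimuth/elevation
end
```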
To create the code, run codegen to create the MEX-file plot_ULA_response_mex, and execute the
MEX-file at the command line:
codegen plot_ULA_response
plot_ULA_response_mex;
Generate MEX Function Containing Persistent System Objects

Functions and System Objects Supported for C/C++ Code Generation
15
Simulink Examples
model = 'AzEl2Broadside1';
open_system(model);
sim(model);
close_system(model);
Phase-Shift Beamforming of Plane Wave Signal
model = 'PhaseShiftBeamformer1Example';
open_system(model);
sim(model);
16
RF Propagation
TIREM is designed to calculate the reference basic median propagation loss (path loss) based on the
terrain profile along the great circle path between two antennas, using, for example, digital terrain
elevation data (DTED). You can use the TIREM model to calculate the point-to-point path loss between
sites over irregular terrain. The model combines physics with empirical data to provide path loss
estimates. The TIREM propagation model can predict path loss at frequencies between 1 MHz and 1
THz.
Use tiremSetup to enable TIREM access from within MATLAB. The TIREM library folder contains
the tirem3 shared library. The full library name is platform dependent:
17
Featured Examples
Depending on the application, practical phased antenna arrays sometimes use specially designed
antenna elements whose radiation pattern cannot be represented by a closed-form equation. Even
when the element pattern is well understood, as is the case with a dipole antenna, the mutual
coupling among the elements can significantly change the individual element pattern when the
element is put into an array. This makes the closed-form pattern less accurate. Therefore, for high
fidelity pattern analysis, you often need to use a custom radiation pattern obtained from
measurements or simulations.
There is no standard convention for the coordinate system used to specify the radiation pattern, so
the result from one simulation package often cannot be directly used in another software
package. For example, in Phased Array System Toolbox™ (PST), the radiation pattern is expressed
using azimuth (az) and elevation (el) angles, as depicted in Figure 1. It is assumed that the main
beam of the antenna points toward 0° azimuth and 0° elevation, that is, the x-axis. The value of az lies
between −180° and 180° and the value of el lies between −90° and 90°. See "Spherical Coordinates"
on page 10-10.
Antenna Array Analysis with Custom Radiation Pattern
Figure 1: Spherical coordinate system convention used in Phased Array System Toolbox™.
A frequently used full-wave modeling tool for simulating antenna radiation patterns is HFSS™. In this
tool, individual elements are modeled as if they were part of an infinite array. The simulated radiation
pattern is represented as an M-by-3 matrix where the first column represents the azimuth angle φ,
the second column represents the elevation angle θ, and the third column represents the radiation
pattern in dB. The coordinate system and the definitions of φ and θ used in HFSS are shown in Figure
2. In this convention, the main beam of the antenna points along the z-axis, which usually points
vertically. The value of φ is between 0° and 360° and the value of θ is between 0° and 180°.
Note that the HFSS coordinate system is not exactly the same as the coordinate system used in
Phased Array System Toolbox™. In HFSS, the beam mainlobe points along the z-axis and the plane
orthogonal to the beam is formed from the x- and y-axes. One possible approach to import a custom
pattern in the φ–θ convention without any coordinate axes rotation is shown below.
For example, a cardioid-shaped antenna pattern is simulated in the φ–θ convention and is saved in
a .csv file. The helper function helperPatternImport reads the .csv file and reformats its
contents into a two-dimensional matrix in φ and θ.
[pattern_phitheta,phi,theta] = helperPatternImport;
The phi-theta pattern can now be used to form a custom antenna element. Assume that this antenna
operates between 1 and 1.25 GHz.
To verify that the pattern has been correctly imported, plot the response of the custom antenna
element. Notice that the main beam points to 0° azimuth and 90° elevation, confirming that the
custom pattern with its main beam along the z-axis is imported without any rotation.
fmax = freqVector(end);
pattern(antenna,fmax,'Type','powerdb')
Consider a 100-element antenna array whose elements reside on a 10-by-10 rectangular grid, as
shown in Figure 3. To ensure that no grating lobes appear, elements are spaced at one-half
wavelength at the highest operating frequency. This rectangular array can be created using the
following commands.
c = physconst('LightSpeed');
lambda = c/fmax;
array = phased.URA('Element',antenna,'Size',10,'ElementSpacing',lambda/2)
array =
The total radiation pattern of the resulting antenna array is plotted below in u-v space. The pattern is
a combination of both the element pattern and the array factor.
pattern(array,fmax,'PropagationSpeed',c,'Type','powerdb',...
'CoordinateSystem','UV');
One can also easily examine the u-cut of the pattern as shown below.
pattern(array,fmax,-1:0.01:1,0,'PropagationSpeed',c, ...
'CoordinateSystem','UV','Type','powerdb')
axis([-1 1 -50 0]);
This section illustrates the idea of phase steering an array. An advantage of phased arrays over a
single antenna element is that the main beam can be electronically steered to a given direction.
Steering is accomplished by adjusting the weights assigned to each element. The set of weights is
also called the steering vector. Each weight is a complex number whose magnitude controls the
sidelobe characteristics of the array and whose phase steers the beam.
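As a concrete sketch of phase steering, you can compute a steering vector for the array built earlier in this example and apply it as a set of weights. The 20° look direction here is arbitrary:

```matlab
% Steer the URA from this example toward 20 deg azimuth, 0 deg elevation
steervec = phased.SteeringVector('SensorArray',array,'PropagationSpeed',c);
w = steervec(fmax,[20;0]);            % one complex weight per element
pattern(array,fmax,-90:90,0,'PropagationSpeed',c,...
    'CoordinateSystem','rectangular','Type','powerdb','Weights',w);
```

The plotted mainlobe moves to 20° azimuth; the weight magnitudes are uniform, so only the phases do the steering.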
The example scans the main beam of the array across a range of azimuth angles, with the elevation
angle held fixed during the scan.
helperPatternScan(array)
clear helperPatternScan
When a radar is deployed in the field, its radiation pattern is modified by the surrounding
environment. For example, reflections from the earth may reinforce or attenuate the signal arriving at
the target via the direct path. In addition, refraction from the ionosphere can also introduce
another path from the top. The resulting pattern in the elevation direction is often quite complicated,
and a radar engineer often needs a rough estimate of the vertical coverage during the system
design stage. This section shows how to estimate the radar vertical coverage diagram, also referred to as a Blake
chart, if the array is deployed at a height of 20 meters and covers a free space range of 100 km.
el_ang = -90:90;
arrayresp = phased.ArrayResponse('SensorArray',array, ...
'PropagationSpeed',c);
el_pat = abs(arrayresp(fmax,el_ang)); % elevation pattern
freespace_rng = 100; % in km
ant_height = 20; % in m
radarvcd(fmax,freespace_rng,ant_height,...
'HeightUnit','m','RangeUnit','km',...
'AntennaPattern',el_pat/max(el_pat),'PatternAngles',el_ang.');
The resulting diagram contains a curve in space along which the return signal level is constant. It
clearly shows that the main beam is severely modified by the reflection. For example, at a range of
100 km, the return from a target at certain elevation angles is much smaller than from a target at
the same range at a nearby elevation angle. The curve also shows that for certain angles, a target can
be detected as far away as 200 km. This is the case when the reflected signal and the direct path signal
are in phase, resulting in an enhanced return.
Summary

This example shows how to construct and analyze an antenna array using a custom antenna pattern.
The pattern can be generated using full-wave modeling simulation software with the φ–θ convention.
The pattern can then be used to form a custom antenna element. The resulting array is used to
generate a vertical coverage diagram and is also scanned in the azimuth direction to
illustrate the phase steering concept.
Array Pattern Synthesis
In phased array design applications, it is often necessary to find a way to taper element responses so
that the resulting array pattern satisfies certain performance criteria. Typical performance criteria
include the mainlobe location, null location(s) and sidelobe levels.
A common requirement when synthesizing beam patterns is pointing a null toward a given arrival
direction. This helps suppress interference from that direction and improves the
signal-to-interference ratio. The interference is not always malicious; for example, an airport radar
system may need to suppress interference from a nearby radio station. In this case, the position of
the radio station is known, and a sidelobe cancellation algorithm can be used to remove the interference.
Sidelobe cancellation is useful for suppressing interference that enters through the array's sidelobes.
In this case, because the interference direction is known, the algorithm is simple. Form a beam that
points towards the interference direction, then scale the beam weights and subtract scaled weights
from the weights for the beam patterns that point towards any other look direction. This process
always places a strong null in the interference direction.
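The cancellation step described above can be sketched as a projection: scale the interference-direction beam so that subtracting it zeroes the response at the interference angle. This standalone sketch uses the same ULA parameters as the example below; the 10° look direction is arbitrary:

```matlab
% Place a null at 40 degrees by subtracting a scaled interference beam
c = 3e8; fc = 1e9; lambda = c/fc;
ula = phased.ULA(10,lambda/2);
sv = phased.SteeringVector('SensorArray',ula,'PropagationSpeed',c);
wjam  = sv(fc,[40;0]);                 % beam toward the known interferer
wlook = sv(fc,[10;0]);                 % beam toward an arbitrary look direction
beta  = (wjam'*wlook)/(wjam'*wjam);    % scale chosen so the 40 deg response cancels
w = wlook - beta*wjam;                 % wjam'*w is now zero: a null at 40 deg
```

Because beta is the projection coefficient of the look-direction weights onto the interference beam, the subtraction exactly cancels the response in the interference direction for any look direction.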
The following example shows how to design the weights of the radar so that it scans between -30 and
30 degrees yet always keeps a null at 40 degrees. Assume that the radar uses a 10-element ULA that
is parallel to the ground and that the known radio interference arrives from 40 degrees azimuth.
c = 3e8; % signal propagation speed
fc = 1e9; % signal carrier frequency
lambda = c/fc; % wavelength
ula = phased.ULA(10,lambda/2);
ula.Element.BackBaffled = true;
The figure above shows the resulting beam patterns for look directions from -30 degrees azimuth to
30 degrees azimuth, in 5-degree increments. It is clear from the zoomed figure below that no matter
where the look direction is, the radar beam pattern has a strong null in the interference direction.
% Zoom
xlim([30 50])
legend(arrayfun(@(k)sprintf('%d degrees',k),thetaad,...
'UniformOutput',false),'Location','SouthEast');
Another frequent problem when designing a phased array is matching a desired beam pattern to a
specification that is handed to you. Often, the requirements are expressed in terms of beamwidth and
sidelobe level.

The following example illustrates this process. First, observe the desired pattern shown in the
following figure.
load desiredSynthesizedAntenna;
clf;
pattern(mysteryAntenna,fc,'CoordinateSystem','polar','Type','powerdb');
view(50,20);
ax = gca;
ax.Position = [-0.15 0.1 0.9 0.8];
camva(4.5);
campos([520 -250 200]);
The 3D radiation pattern exhibits some symmetries in both the azimuth and elevation cuts. Therefore,
the pattern may be best obtained using a uniform rectangular array (URA). It is also clear from the
plot that there is no energy radiated toward the back of the array.
Next, determine the size of the array. To avoid grating lobes, the element spacing is set to half
wavelength. For a URA, the sizes along the azimuth and elevation directions can be derived from the
required beamwidths along the azimuth and elevation directions, respectively. In the case of half
wavelength spacing, the number of elements along a certain direction can be approximated by

N = 2/sin θb

where θb is the beamwidth along that direction. Hence, the aperture size of the URA can be computed
as
[azpat,elpat,az,el] = helperExtractSynthesisPattern(mysteryAntenna,fc,c);
% Azimuth direction
idx = find(azpat>pow2db(1/2));
azco = [az(idx(1)) az(idx(end))]; % azimuth cutoff
N_col = round(2/sind(diff(azco)))
% Elevation direction
idx = find(elpat>pow2db(1/2));
elco = [el(idx(1)) el(idx(end))]; % elevation cutoff
N_row = round(2/sind(diff(elco)))
N_col =
19
N_row =
14
helperArraySynthesisComparison(ura,mysteryAntenna,fc,c)
The figure shows that the synthesized array exceeds the beamwidth requirement of the desired
pattern. However, the sidelobes are much larger than the desired pattern. You can reduce the
sidelobes by applying a windowing operation to the array. Because the URA can be considered to be
the combination of two separable uniform linear arrays (ULA), the window can be designed
independently along both the azimuth and elevation directions using familiar filter design methods.
The code below shows how to obtain the windows for azimuth and elevation directions.
AzSidelobe = 20;
Ap = 0.1; % Passband ripples
AzWeights = designfilt('lowpassfir','FilterOrder',N_col-1,...
'CutoffFrequency',azco(2)/90,'PassbandRipple',0.1,...
'StopBandAttenuation',AzSidelobe);
azw = AzWeights.Coefficients;
ElSidelobe = 30;
ElWeights = designfilt('lowpassfir','FilterOrder',N_row-1,...
'CutoffFrequency',elco(2)/90,'PassbandRipple',0.1,...
'StopBandAttenuation',ElSidelobe);
elw = ElWeights.Coefficients;
The figure shows that the resulting sidelobe level is lower compared to the previous design but still
does not satisfy the requirement. After some trial and error, the following parameters are used to
create the final design:
AzWeights = designfilt('lowpassfir','FilterOrder',N_col-1,...
'CutoffFrequency',azco(2)/90,'PassbandRipple',0.1,...
'StopBandAttenuation',AzSidelobe);
azw = AzWeights.Coefficients;
ElWeights = designfilt('lowpassfir','FilterOrder',N_row-1,...
'CutoffFrequency',elco(2)/90,'PassbandRipple',0.1,...
'StopBandAttenuation',ElSidelobe);
elw = ElWeights.Coefficients;
ura.Taper = elw(:)*azw(:).';
helperArraySynthesisComparison(ura,mysteryAntenna,fc,c)
The figure shows that the beamwidth and sidelobe levels of the synthesized pattern match the desired
specifications. The following figures show the desired 3D pattern, the synthesized 3D pattern, the
resulting array geometry, and the taper.
helperArraySynthesisComparison(ura,mysteryAntenna,fc,c,'3d')
Many array synthesis problems can be treated as optimization problems, especially for arrays with
large apertures or complex geometries. In those situations, a closed-form solution often does not exist
and the solution space is very large. For example, for a large array, it is often necessary to thin the
array to control the sidelobe levels and avoid wasting power delivered to each antenna element. In this
case, each element can be turned on or off. If you were to try all possible solutions in a 400-element
array, you would need to try 2^400 combinations, which is unrealistic, and a 400-element array is not
even considered a big aperture. Optimization techniques are often adopted in this situation.
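The size of that search space is easy to quantify:

```matlab
% Each of the 400 elements is independently on or off, so an exhaustive
% search must examine 2^400 candidate arrays.
log10Combinations = 400*log10(2)   % ~120.4, i.e. about 10^120 candidates
```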
A frequently used optimization technique is the genetic algorithm. A genetic algorithm reaches an
optimal solution by simulating the natural selection process. It starts with randomly selected
candidates as the first generation. At each evolution cycle, the algorithm sorts the generation
according to a predetermined performance measure (in the thinned-array example, the performance
measure would be the ratio of peak to sidelobe level), and then discards the candidates with lower
performance scores. The algorithm then mutates the remaining candidates to generate a new
generation and repeats the process until it reaches a stop condition, such as a maximum number of
generations.
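The selection-and-mutation cycle described above can be sketched in a few lines. This is a toy illustration only; the population size, mutation rate, and placeholder fitness function here are invented for the sketch and are not part of the toolbox example.

```matlab
% Toy genetic-algorithm cycle for on/off thinning patterns
Npop = 20;                            % candidates per generation
Nelem = 16;                           % elements per candidate (toy size)
pop = rand(Npop,Nelem) > 0.5;         % random first generation
fitness = @(p) sum(p,2);              % placeholder performance measure
for gen = 1:10
    [~,idx] = sort(fitness(pop),'descend');
    pop = pop(idx(1:Npop/2),:);                   % discard the weaker half
    mutated = xor(pop, rand(size(pop)) < 0.05);   % flip ~5% of the bits
    pop = [pop; mutated];                         % form the next generation
end
bestCandidate = pop(1,:);             % best surviving candidate
```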
The following example shows how to use a genetic algorithm to thin a 40-by-40 URA. The goal is to
achieve maximum sidelobe suppression in both the azimuth and elevation cuts. The beam pattern of the
full array is shown first.
Nside = 40;
geneticArray = phased.URA(Nside,lambda/2);
geneticArray.Element.BackBaffled = true;
clf;
wplot = helperThinnedArrayComparison(geneticArray,fc,c);
sllopt =
13.2981
Now apply the genetic algorithm. Notice that the URA is symmetric in both rows and columns, so one
can take advantage of this symmetry: each thinning-coefficient candidate needs to cover only a
quarter of the array. This reduces the search space of the algorithm.
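The mirroring used below can be illustrated on a toy quarter matrix (the 2-by-2 values here are arbitrary):

```matlab
wq = [1 0; 1 1];                                 % quarter-array pattern
wfull = [fliplr(wq) wq; rot90(wq,2) flipud(wq)]; % expand to the full array
% wfull is 4x4 and symmetric about both the row and column centerlines
```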
% Pick one candidate, plot the beam pattern, and compute the sidelobe
% level
wtemp = w0(:,:,100);
wo = [fliplr(wtemp) wtemp;rot90(wtemp,2) flipud(wtemp)];
wplot = helperThinnedArrayComparison(geneticArray,fc,c,[wplot wo(:)],...
{'Full','Initial'});
The figure shows the beam pattern resulting from one typical first-generation candidate. The sidelobe
level is lower in the azimuth direction but higher in the elevation direction compared to the full array.
The exact sidelobe level and the fill rate of the array can be computed as
[azpat,elpat] = helperExtractSynthesisPattern(geneticArray,fc,c,wo(:));
fillrate = sum(wo(:))/Nside^2*100
sllopt =
8.7013
fillrate =
71.7500
This means that 71.75% of the array elements (1148 of them) are active and the sidelobe level is
about 8.7 dB, which needs to be suppressed further. The code below applies the genetic algorithm
over 30 generations.
w = w0;
pos = getElementPosition(geneticArray)/lambda;
angspan = -90:90;
for m = 1:Niter
    % Compute the beam pattern for the entire generation
    [azpat,elpat] = helperArraySynthesisBeamPattern(pos,angspan,w);
    % ... rank, select, and mutate the candidates to form the next generation
end
rng(prvS);
sllopt = sll(idx(1))
fillrate = sum(wo(:))/Nside^2*100
sllopt =
17.5380
fillrate =
76.5000
The figure shows the resulting beam pattern. The sidelobe level has been further improved to about
17.5 dB with a fill rate of 76.5% (1224 active elements). Compared to the first-generation candidate,
it uses 5% more active elements while achieving an additional 9 dB of sidelobe suppression. Compared
to the full array, the resulting thinned array saves the cost of implementing T/R switches behind the
dummy elements, which in turn yields roughly a 25% saving in consumed power. Also note that even
though the thinned array uses fewer elements, its beamwidth is close to what could be achieved with a
full array.
The final thinned array is shown below, with black circles representing the dummy elements.
clf;
geneticArray.Taper = wo;
viewArray(geneticArray,'ShowTaper',true);
It is worth noting that the genetic algorithm does not always land on the same solution in each trial.
However, the resulting beam patterns generally share a similar sidelobe level.
The script above shows a very simple genetic algorithm applied to the array synthesis problem. In
real applications, the genetic algorithm is likely to be more complex. Other optimization algorithms,
such as simulated annealing, are also used in array synthesis. You can find both genetic algorithm and
simulated annealing solvers in the Global Optimization Toolbox™.
Summary
This example shows several approaches to performing array synthesis on a phased array. In practice,
you need to choose the appropriate synthesis method according to the specific constraints of the
application, such as the size of the array aperture and the shape of the array geometry.
References
[1] Haupt, R. L. "Thinned Arrays Using Genetic Algorithms." IEEE Transactions on Antennas and
Propagation. Vol. 42, No. 7, 1994.
[2] Haupt, R. L. "An Introduction to Genetic Algorithms for Electromagnetics." IEEE Antennas and
Propagation Magazine. Vol. 37, No. 2, 1995.
Modeling and Analyzing Polarization
The electromagnetic field generated by an antenna is orthogonal to the propagation direction in the
far field. The field can point in any direction in this plane and can therefore be decomposed into two
orthogonal components. Theoretically, there are an infinite number of ways to define these two
components, but most often one uses either the (H,V) set or the (L,R) set. (H,V) stands for horizontal
and vertical, which can easily be pictured as x and y components, while (L,R) stands for left and right
circular. It may be difficult to imagine that a vector in space can have a circular component; the
secret lies in the fact that each component can be a complex number, which greatly enriches the
trace that such a vector can follow.
Let's look at several simple examples. The time-varying field can be written as

E(t) = Re{(E_h h + E_v v) e^(j2πft)}

where E_h and E_v are the two components in phasor representation, and h and v are the unit vectors
of the h and v axes, respectively.
The simplest case is probably linear polarization, which happens when the two components are
always in phase. Assume

E_h = 1, E_v = 1,

so the field can be represented by the vector [1;1]. The polarization for such a field looks like
fv = [1;1];
helperPolarizationView(fv)
From the figure, it is clear that the combined polarization is along the 45-degree diagonal.
The plot in the upper right portion of the figure is often referred to as the polarization ellipse. It is the
projection of the combined field trace onto the H-V plane. The polarization ellipse is often characterized
by two angles, the tilt angle τ (also known as the orientation angle) and the ellipticity angle ε. In this
case, the tilt angle is 45 degrees and the ellipticity angle is 0. The dot on the ellipse shows how the
combined field moves along the trace in the H-V plane as time passes.
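The trace in the figure can be reproduced directly from the phasor pair; a minimal sketch (the normalized frequency f = 1 is an arbitrary choice for illustration):

```matlab
fv = [1;1];                      % the linear polarization above
t = linspace(0,1,200);           % one period
Eh = real(fv(1)*exp(1i*2*pi*t)); % instantaneous H component
Ev = real(fv(2)*exp(1i*2*pi*t)); % instantaneous V component
plot(Eh,Ev); axis equal          % traces the 45-degree diagonal
```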
A polarized field can also be represented by a Stokes vector, which is a length-4 vector. The
corresponding Stokes vector of the linear polarization [1;1] is given by
s = stokes(fv)
s =
2
0
2
0
Note that all 4 entries in the vector are real numbers. In fact, all of these entries are measurable. In
addition, it can be shown that the four quantities always satisfy the equation

s_0^2 = s_1^2 + s_2^2 + s_3^2.

Therefore, each Stokes vector can be considered a point on a sphere. Such a sphere is referred to
as a Poincare sphere. The Poincare sphere for the above field is shown in the bottom right portion of
the figure.
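This relation is easy to verify numerically for the field above:

```matlab
% A fully polarized field satisfies s0^2 = s1^2 + s2^2 + s3^2
s = stokes([1;1]);
assert(abs(s(1)^2 - (s(2)^2 + s(3)^2 + s(4)^2)) < 1e-10)
```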
fv = [1;1i];
helperPolarizationView(fv)
The figure shows that the trace of the combined field is a circle. Both the polarization ellipse and the
Poincare sphere show that the field is left circularly polarized. A more general elliptical polarization
results when the two components differ in both magnitude and phase:
fv = [2+1i;1-1i];
helperPolarizationView(fv)
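The tilt and ellipticity angles of these example fields can also be computed directly with the polellip function:

```matlab
[tau1,eps1] = polellip([1;1])       % linear:   tau = 45 deg, eps = 0
[tau2,eps2] = polellip([1;1i])      % circular: eps = 45 deg (left circular)
[tau3,eps3] = polellip([2+1i;1-1i]) % elliptical: intermediate angles
```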
Polarization of an Antenna
The polarization of an antenna is defined as the polarization of the field transmitted by the antenna,
regardless of whether the antenna is in transmitting or receiving mode. However, as mentioned
earlier, polarization is defined in the plane orthogonal to the propagation direction. Therefore, it is
defined in the local coordinate system of each propagation direction, as shown in the following
diagram.
Some antennas have a structure that determines their polarization, such as a dipole. A dipole antenna
has a polarization that is parallel to its orientation. Assuming the frequency is 300 MHz, for a vertical
short dipole, the polarization response at boresight, i.e., 0 degrees azimuth and 0 degrees elevation,
is given by
antenna = phased.ShortDipoleAntennaElement('AxisDirection','Z');
fc = 3e8;
resp = antenna(fc,[0;0])
resp =
H: 0
V: -1.2247
Note that the horizontal component is 0. If we change the orientation of the dipole antenna to
horizontal, the vertical component becomes 0.
antenna = phased.ShortDipoleAntennaElement('AxisDirection','Y');
resp = antenna(fc,[0;0])
resp =
H: -1.2247
V: 0
Polarization Loss
When two antennas form a transmit/receive pair, their polarizations affect the received signal
power. Therefore, to collect a signal with the maximum possible power, the receive antenna's
polarization has to match the transmit antenna's polarization. The polarization matching factor is the
squared magnitude of the inner product of the two normalized polarization states,

ρ = |p_t^H p_r|^2

where p_t and p_r represent the normalized polarization states of the transmit and receive antennas,
respectively.
Assume both transmit and receive antennas are short dipoles. The transmit antenna sits at the origin
and the receive antenna at location (100,0,0). First, consider the case where both antennas are along
the z-axis and face each other. This is the scenario where the two antennas are matched in polarization.
pos_r = [100;0;0];
lclaxes_t = azelaxes(0,0); % transmitter coordinate system
lclaxes_r = azelaxes(180,0); % receiver faces transmitter
ang_t = [0;0]; % receiver at transmitter's boresight
ang_r = [0;0]; % transmitter at receiver's boresight
txAntenna = phased.ShortDipoleAntennaElement('AxisDirection','Z');
resp_t = txAntenna(fc,ang_t);
rxAntenna = phased.ShortDipoleAntennaElement('AxisDirection','Z');
resp_r = rxAntenna(fc,ang_r);
ploss = polloss([resp_t.H;resp_t.V],[resp_r.H;resp_r.V],pos_r,lclaxes_r)
ploss =
     0
The loss is 0 dB, indicating that there is no loss due to polarization mismatch. The section below
shows the effect with a simulated signal, radiated and collected with a polarization-aware
phased.Radiator and phased.Collector pair.
% Signal simulation
[x,t] = helperPolarizationSignal;
% Radiate from the transmit antenna and collect at the receive antenna
radiator = phased.Radiator('Sensor',txAntenna,'Polarization','Combined',...
    'OperatingFrequency',fc,'PropagationSpeed',3e8);
collector = phased.Collector('Sensor',rxAntenna,'Polarization','Combined',...
    'OperatingFrequency',fc,'PropagationSpeed',3e8);
y = collector(radiator(x,ang_t,lclaxes_t),ang_r,lclaxes_r);
helperPolarizationSignalPlot(t,x,y,'vertically')
The figure shows that the signal is received with no loss. Each short dipole antenna provides a gain of
1.76 dB, so the received signal is 1.5 times stronger than the transmitted signal.
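The 1.5 factor follows from the short dipole's peak directivity:

```matlab
% A short dipole has a peak directivity of 1.5 (10*log10(1.5) ~ 1.76 dB);
% the element response magnitude 1.2247 seen earlier is sqrt(1.5).
powerFactor = 10^(1.76/10)   % ~1.5
fieldMag = sqrt(1.5)         % ~1.2247
```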
If instead a horizontally polarized antenna is used to receive the signal, the two antennas are now
orthogonal in polarization, and as a result no power is delivered to the receive antenna. The
polarization loss can be found by
rxAntenna = phased.ShortDipoleAntennaElement('AxisDirection','Y');
resp_r = rxAntenna(fc,ang_r);
ploss = polloss([resp_t.H;resp_t.V],[resp_r.H;resp_r.V],pos_r,lclaxes_r)
ploss =
Inf
As the diagram shows, the polarization of an antenna acts as a filter that blocks any polarized wave
orthogonal to the antenna's own polarization state.
radiator = phased.Radiator('Sensor',txAntenna,'Polarization','Combined',...
    'OperatingFrequency',fc,'PropagationSpeed',3e8);
collector = ...
    phased.Collector('Sensor',rxAntenna,'Polarization','Combined',...
    'OperatingFrequency',fc,'PropagationSpeed',3e8);
y = collector(radiator(x,ang_t,lclaxes_t),ang_r,lclaxes_r);
helperPolarizationSignalPlot(t,x,y,'horizontally')
One can rotate the receive antenna to obtain a partial polarization match. For instance, if the receive
antenna in the previous example is rotated 45 degrees around the x-axis, the received signal is no
longer 0, although it is not as strong as when the polarizations are matched.
% Rotate axes
lclaxes_r = rotx(45)*azelaxes(180,0);
helperPolarizationSignalPlot(t,x,y,'45 degree')
ploss = polloss([resp_t.H;resp_t.V],[resp_r.H;resp_r.V],pos_r,lclaxes_r) % measured in dB
ploss =
    3.0103
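This value matches the projection loss between two linear polarizations offset by 45 degrees:

```matlab
angleMismatch = 45;                        % degrees
lossdB = -10*log10(cosd(angleMismatch)^2)  % 3.0103 dB
```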
When an electromagnetic wave hits a target, the wave is scattered off the target and some energy is
transferred between the two orthogonal polarization components. Therefore, the target scattering
mechanism is often modeled by a 2-by-2 radar cross section (RCS) matrix (also known as a scattering
matrix), whose diagonal terms specify how the target scatters energy into the original H and V
polarization components and whose off-diagonal terms specify how the target scatters energy into
the opposite polarization component.
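The action of a scattering matrix on the incident field is a simple matrix-vector product; for example, a polarization-flipping target turns a vertical field into a horizontal one:

```matlab
S = [0 1;1 0];   % fully off-diagonal scattering matrix (used again later)
Ein = [0;1];     % purely vertical incident field, [H;V]
Eout = S*Ein     % [1;0]: purely horizontal scattered field
```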
Because the transmit and receive antennas can have any combination of polarizations, it is often of
interest to look at the polarization signature of a target for different polarization configurations. The
signature plots the received power under different polarizations as a function of the tilt angle and the
ellipticity angle of the transmit polarization ellipse, and can be seen as a measure of the effective
RCS. The two most widely used polarization signatures (also known as polarization responses) are
the co-polarization (co-pol) response and the cross-polarization (cross-pol) response. The co-pol
response uses the same polarization for both transmit and receive, while the cross-pol response
receives with the orthogonal polarization.
The simplest target is a sphere, whose RCS matrix is given by [1 0;0 1], meaning that the reflected
polarization is the same as the incident polarization. The polarization signature for a sphere is given
by
s = eye(2);
subplot(211); polsignature(s,'c');
subplot(212); polsignature(s,'x');
From the plot, it can be seen that for such a target, a linear polarization, where the ellipticity angle is
0, generates the maximum return in a co-pol setting while a circular polarization, where the ellipticity
angle is either 45 or -45 degrees, generates the maximum return in a cross-pol configuration.
A more complicated target is a dihedral, which is essentially a corner reflector that reflects the wave
twice, as shown on the left side of the following sketch.
The right side of the figure shows how the polarization field changes through the two reflections:
the horizontal polarization component remains the same while the vertical polarization component is
reversed. Hence, its scattering matrix and polarization signature are given by
s = [1 0;0 -1];
subplot(211); polsignature(s,'c')
subplot(212); polsignature(s,'x')
The signature shows that circular polarization works best in a co-pol setting, while 45-degree linear
polarization works best in a cross-pol setting.
Putting everything together, a polarized signal is first transmitted by an antenna, then bounces off a
target, and finally is received at the receive antenna. Next is a simulation of this signal flow.
The simulation assumes a vertical dipole as the transmit antenna, a horizontal dipole as the receive
antenna, and a target whose RCS matrix is [0 1;1 0], which flips the signal's polarization. For
illustration purposes, free-space propagation is ignored because it does not affect the polarization. It
is also assumed that the transmit antenna, the target, and the receive antenna lie on a line along the
transmit antenna's boresight. The local coordinate system is the same for the transmit antenna and
the target, and the receive antenna faces the transmit antenna.
% Define transmit and receive antennas
txAntenna = phased.ShortDipoleAntennaElement('AxisDirection','Z');
rxAntenna = phased.ShortDipoleAntennaElement('AxisDirection','Y');
radiator = phased.Radiator('Sensor',txAntenna,'Polarization','Combined');
collector = phased.Collector('Sensor',rxAntenna,'Polarization','Combined');
% Simulate signal
[x,t] = helperPolarizationSignal;
ang_tgt_out = [0;0];
ang_rx = [0;0];
% Define target
target = phased.RadarTarget('EnablePolarization',true,...
'Mode','Bistatic','ScatteringMatrix',[0 1;1 0]);
helperPolarizationSignalPlot(t,x,y,'horizontally');
Note that because the target flips the polarization components, the horizontally polarized antenna is
able to receive the signal sent with a vertically polarized antenna.
Summary
This example reviews the basic concepts of polarization and shows how to analyze and model
polarized antennas and targets using Phased Array System Toolbox™.
Unfortunately, it is often very difficult to model the exact mutual coupling effect among elements.
This example shows one possible approach: model the mutual coupling effects via an embedded
element pattern, which is the pattern of a single element embedded in a finite array. The chosen
element is generally at the center of the array. The embedded pattern is calculated or measured by
transmitting through that element while terminating all other elements in the array with a reference
impedance [1]-[3]. This approach works well when the array is large enough that edge effects can be
ignored.
The example models two arrays: the first uses the pattern of the isolated element, and the second
uses the embedded element pattern. The results of the two are compared with a full-wave Method of
Moments (MoM) solution of the array. The array performance is established for scanning at
broadside and for scanning off broadside. Finally, the array spacing is adjusted to investigate the
occurrence of scan blindness, and the results are compared against reference results [3].
First, design an array with the isolated element. For this example, choose the center of the X-band
as the design frequency.
freq = 10e9;
vp = physconst('lightspeed');
lambda = vp/freq;
In [4], it was discussed that the central element of a 5λ X 5λ array, where λ is the wavelength, starts
to behave like it is in an infinite array. Such an aperture would correspond to a 10 X 10 array of half-
wavelength-spaced radiators. We choose to slightly exceed this limit and consider an 11 X 11 array of
dipoles.
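The sizing arithmetic behind this choice is straightforward:

```matlab
% A 5-wavelength aperture with half-wavelength spacing needs 10 elements
% per side; the 11 X 11 array slightly exceeds this requirement.
apertureSide = 5;            % side length in wavelengths
spacing = 0.5;               % element spacing in wavelengths
Nmin = apertureSide/spacing  % 10
```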
Nrow = 11;
Ncol = 11;
drow = 0.5*lambda;
dcol = 0.5*lambda;
The dipole of choice has a length slightly shorter than half a wavelength (0.47λ) and a radius of approximately 0.191 mm.
mydipole = dipole;
mydipole.Length = 0.47*lambda;
mydipole.Width = cylinder2strip(0.191e-3);
figure('Color','w');
show(mydipole);
Modeling Mutual Coupling in Large Arrays Using Embedded Element Pattern
Now create an 11 X 11 URA and assign the isolated dipole as its element. Adjust the element
spacing to be half-wavelength at 10 GHz. The dipole tilt is set to zero so that its orientation matches
the array geometry in the Y-Z plane.
isolatedURA = phased.URA;
isolatedURA.Element = mydipole;
isolatedURA.Size = [Nrow Ncol];
isolatedURA.ElementSpacing = [drow dcol];
viewArray(isolatedURA);
myFigure = gcf;
myFigure.Color = 'w';
To compute the embedded pattern of the center dipole element, first create a full-wave model of the
array. Since the default orientation of the dipole element in the library is along the z-axis, tilt it so
that the array is formed in the X-Y plane.
fullWaveArray = rectangularArray(...
'Size',[Nrow Ncol],...
'RowSpacing',drow,...
'ColumnSpacing',dcol);
fullWaveArray.Element = mydipole;
fullWaveArray.Element.Tilt = 90;
fullWaveArray.Element.TiltAxis = [0 1 0];
show(fullWaveArray)
title('Rectangular 11 X 11 Array of Dipole Antennas')
To calculate the embedded element pattern, use the pattern function and pass in the additional input
parameters of the element number (the index of the center element) and the termination resistance.
The scan resistance and scan reactance for an infinite array of resonant dipoles spaced λ/2 apart are
provided in [3]; we choose the resistance at broadside as the termination for all elements.
Zinf = 76 + 1i*31;
ElemCenter = (prod(fullWaveArray.Size)-1)/2 + 1;
az = -180:2:180;
el = -90:2:90;
EmbElFieldPatCenter = pattern(fullWaveArray,freq,az,el,...
'ElementNumber',ElemCenter,'Termination',real(Zinf),'Type','efield');
Import this embedded element pattern into a custom antenna element and create the same
rectangular array using that element. Since the array will be in the Y-Z plane, rotate the pattern to
match the scan planes.
embpattern = helperRotatePattern(az,el,EmbElFieldPatCenter,[0 1 0],90);
embpattern = mag2db(embpattern);
fmin = freq - 0.1*freq;
fmax = freq + 0.1*freq;
freqVector = [fmin fmax];
embantenna = phased.CustomAntennaElement('FrequencyVector',freqVector,...
'AzimuthAngles',az,'ElevationAngles',el,...
'MagnitudePattern',embpattern,'PhasePattern',zeros(size(embpattern)));
embeddedURA = phased.URA;
embeddedURA.Element = embantenna;
Next, calculate and compare the patterns in different planes for the three arrays : the one using the
isolated element pattern, the one using the embedded element pattern, and the full-wave model (used
as the ground truth).
First, the pattern in the elevation plane (specified by azimuth = 0 deg and also called the E-plane)
Eplane_embedded = pattern(embeddedURA,freq,0,el);
Eplane_isolated = pattern(isolatedURA,freq,0,el);
[Eplane_fullwave,~,el3e] = pattern(fullWaveArray,freq,0,0:1:180);
el3e = el3e'-90;
Now, the pattern in the azimuth plane (specified by elevation = 0 deg and called the H-plane).
Hplane_embedded = pattern(embeddedURA,freq,az/2,0);
Hplane_isolated = pattern(isolatedURA,freq,az/2,0);
Hplane_fullwave = pattern(fullWaveArray,freq,90,0:1:180);
The array directivity is approximately 23 dBi. This result is close to the theoretical calculation for the
peak directivity [5], D = 4πA/λ². For this aperture, A = (11 × λ/2)² ≈ 30.25λ², so D ≈ 25.8 dB;
subtracting about 3 dB for the absence of a reflector gives roughly 22.8 dBi.
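The directivity estimate can be reproduced in two lines:

```matlab
A = (11*0.5)^2;       % aperture area in squared wavelengths
D = 10*log10(4*pi*A)  % ~25.8 dB; ~22.8 dBi without a reflector
```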
The pattern comparison suggests that the main beam and the first sidelobes are aligned for all three
cases. Moving away from the main beam shows the increasing effect of coupling on the sidelobe
level. As expected, the embedded element pattern approach suggests a coupling level between those
of the full-wave simulation model and the isolated element pattern approach.
The behavior of the array pattern is intimately linked to the embedded element pattern. To
understand how the choice of an 11 X 11 array impacts the center element behavior, increase the
array size to 25 X 25 (a 12.5λ X 12.5λ aperture). Note that the triangular mesh for the full-wave
Method of Moments (MoM) analysis with 625 elements grows to 25000 triangles (40 triangles per
dipole), and the computation of the embedded element pattern takes approximately 12 minutes on a
2.4 GHz machine with 32 GB of memory. This time can be reduced by lowering the mesh density per
element, that is, by meshing manually with a suitably chosen maximum edge length.
load atexdipolearray
embpattern = helperRotatePattern(...
DipoleArrayPatData.AzAngles,DipoleArrayPatData.ElAngles,...
DipoleArrayPatData.ElemPat(:,:,3),[0 1 0],90);
embpattern = mag2db(embpattern);
embantenna2 = clone(embantenna);
embantenna2.AzimuthAngles = DipoleArrayPatData.AzAngles;
embantenna2.ElevationAngles = DipoleArrayPatData.ElAngles;
embantenna2.MagnitudePattern = embpattern;
embantenna2.PhasePattern = zeros(size(embpattern));
Eplane_embedded = pattern(embantenna2,freq,0,el);
Eplane_embedded = Eplane_embedded - max(Eplane_embedded); % normalize
Eplane_isolated = pattern(mydipole,freq,0,el);
Eplane_isolated = Eplane_isolated - max(Eplane_isolated); % normalize
embpatE = pattern(embantenna,freq,0,el);
embpatE = embpatE-max(embpatE); % normalize
Hplane_embedded = pattern(embantenna2,freq,0,az/2);
Hplane_embedded = Hplane_embedded - max(Hplane_embedded); % normalize
Hplane_isolated = pattern(mydipole,freq,0,az/2);
Hplane_isolated = Hplane_isolated - max(Hplane_isolated); % normalize
embpatH = pattern(embantenna,freq,az/2,0);
embpatH = embpatH-max(embpatH); % normalize
The plot above reveals that the difference between the embedded element patterns of the 11 X 11
and 25 X 25 arrays is less than 0.5 dB in the E-plane. However, the H-plane shows more variation for
the 11 X 11 array than for the 25 X 25 array.
This section scans the array based on the embedded element pattern in the elevation plane defined
by azimuth = 0 deg and plots the normalized directivity. In addition, the normalized embedded
element pattern is plotted. Note that the overall shape of the normalized array pattern approximately
follows the normalized embedded element pattern, just as predicted by the pattern multiplication
principle.
eplane_indx = find(az==0);
scan_el1 = -30:10:30;
scan_az1 = zeros(1,numel(scan_el1));
scanEplane = [scan_az1;scan_el1];
% Array scanning: compute the steering weights for each scan direction
steeringvec = phased.SteeringVector('SensorArray',embeddedURA,...
    'PropagationSpeed',vp);
weights = steeringvec(freq,scanEplane);
legend_string1 = cell(1,numel(scan_el1));
scanEPat = nan(numel(el),numel(scan_el1));
for i = 1:numel(scan_el1)
scanEPat(:,i) = pattern(embeddedURA,freq,scan_az1(i),el,...
'Weights',weights(:,i));
legend_string1{i} = strcat('scan = ',num2str(scan_el1(i)));
end
scanEPat = scanEPat - max(max(scanEPat)); % normalize
helperATXPatternCompare(el(:),scanEPat,...
'Elevation Angle (deg.)','Directivity (dBi)',...
'E-plane Scan Comparison',legend_string1(1:end-1),[-50 5]);
hold on;
plot(el(:),embpatE,'-.','LineWidth',1.5);
legend([legend_string1,{'Embedded element'}],'location','best')
hold off;
Scan Blindness
In large arrays, the array directivity can drop drastically at certain scan angles under certain
conditions. At these scan angles, referred to as blind angles, the array does not radiate the power
supplied at its input terminals [3]. The two common mechanisms under which blindness conditions
occur are surface wave excitation and grating lobe excitation.
It is possible to detect scan blindness in large finite arrays by studying the embedded element pattern
(also known as the array element pattern in infinite array analysis). The array investigated in this
example does not have a dielectric substrate or ground plane, so surface waves are eliminated.
However, we can investigate the second mechanism, grating lobe excitation. To do so, increase the
spacing across the rows and columns of the array to 0.7λ. Since this spacing is greater than the half-
wavelength limit, we should expect grating lobes in visible space beyond a specific scan angle. As
pointed out in [3], to accurately predict the depth of grating-lobe blind angles in a finite array of
dipoles, the array needs to be of size 41 X 41 or larger. We compare three cases, the 11 X 11,
25 X 25, and 41 X 41 arrays, and check whether the existence of blind angles can at least be
observed in the 11 X 11 array. As mentioned earlier, the results were precomputed in Antenna
Toolbox™ and saved in a MAT file. To reduce the computation time, the elements were meshed with a
manually specified maximum edge length.
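As a quick check of the grating lobe condition, the scan angle at which the first grating lobe enters visible space for element spacing d satisfies sin(θ) = λ/d − 1 (this gives the onset angle only, not the exact blind-angle locations seen in the patterns below):

```matlab
d_over_lambda = 0.7;                     % element spacing in wavelengths
thetaOnset = asind(1/d_over_lambda - 1)  % ~25.4 degrees
```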
load atexdipolearrayblindness.mat
The normalized E-plane embedded element pattern for arrays of three sizes
The normalized H-plane embedded element pattern for arrays of the three sizes. Notice the
blind angles around -62 and -64 deg.
Conclusion
The embedded element pattern approach is one possible way to analyze large finite arrays. The
arrays need to be large enough that edge effects can be ignored. The approach replaces the isolated
element pattern with the embedded element pattern, since the latter includes the effect of mutual
coupling.
References
[1] Mailloux, R. J. Phased Array Antenna Handbook. 2nd ed. Artech House, 2005.
[2] Stutzman, W., and G. Thiele. Antenna Theory and Design. 3rd ed. John Wiley & Sons, 2013.
[3] Hansen, R. C. Phased Array Antennas. Chapters 7 and 8. 2nd ed. John Wiley & Sons, 1998.
[4] Holter, H., and H. Steyskal. "On the size requirement for finite phased-array models." IEEE
Transactions on Antennas and Propagation. Vol. 50, No. 6, June 2002, pp. 836-840.
[5] Hannan, P. W. "The Element-Gain Paradox for a Phased-Array Antenna." IEEE Transactions on
Antennas and Propagation. Vol. 12, No. 4, July 1964, pp. 423-433.
Modeling Mutual Coupling in Large Arrays Using Infinite Array Analysis
For this example we choose the center of the X-band as our design frequency.
freq = 10e9;
vp = physconst('lightspeed');
lambda = vp/freq;
ucdx = 0.5*lambda;
ucdy = 0.5*lambda;
Create a thin dipole of length slightly less than λ/2 and assign it as the exciter to an infinitely large
reflector.
d = dipole;
d.Length = 0.495*lambda;
d.Width = lambda/160;
d.Tilt = 90;
d.TiltAxis = [0 1 0];
r = reflector;
r.Exciter = d;
r.Spacing = lambda/4;
r.GroundPlaneLength = inf;
r.GroundPlaneWidth = inf;
figure;
show(r);
Calculate the isolated element pattern and the impedance of the above antenna. These results will be
used to calculate the scan element pattern (SEP). This term is also known as the array element
pattern (AEP) or the embedded element pattern (EEP).
% Define az and el vectors
az = 0:2:360;
el = 90:-2:-90;
% Isolated element power pattern and impedance
giso = pattern(r,freq,az,el,'Type','power');
Ziso = impedance(r,freq);
Unit Cell
In infinite array analysis, the term unit cell refers to a single element in an infinite array. The unit
cell element needs a ground plane. Antennas that don't have a ground plane need to be backed by a
reflector; representative examples of the two cases are a dipole backed by a reflector and a
microstrip patch antenna. This example uses the dipole backed by a reflector and analyzes the
impedance behavior at 10 GHz as a function of scan angle. The unit cell will have a λ/2 X λ/2
cross-section.
r.GroundPlaneLength = ucdx;
r.GroundPlaneWidth = ucdy;
infArray = infiniteArray;
infArray.Element = r;
infArray.ScanAzimuth = 30;
infArray.ScanElevation = 45;
figure;
show(infArray);
Scan Impedance
The scan impedance at a single frequency and a single scan angle is shown below.
scanZ = impedance(infArray,freq)
scanZ =
1.1077e+02 + 3.0010e+01i
For this example, the scan impedance over the full scan volume is calculated using 50 terms in the
double summation of the periodic Green's function to improve convergence behavior. The
precomputed results are loaded from a MAT file below.
Scan Element Pattern / Array Element Pattern / Embedded Element Pattern
The scan element pattern (SEP) is calculated from the infinite array scan impedance, the isolated
element pattern, and the isolated element impedance. The expression used is [1],[2]:

g_s(az,el) = 4 R_g Re(Z_iso) g_iso(az,el) / |Z_s(az,el) + Z_g|^2

where Z_s is the scan impedance, Z_g = R_g + jX_g is the termination impedance, and g_iso is the
isolated element power pattern.
load atexInfArrayScanZData
scanZ = scanZ.';
Rg = 185;
Xg = 0;
Zg = Rg + 1i*Xg;
gs = nan(numel(el),numel(az));
for i = 1:numel(el)
for j = 1:numel(az)
gs(i,j) = 4*Rg*real(Ziso).*giso(i,j)./(abs(scanZ(i,j) + Zg)).^2;
end
end
The scan element pattern represents a power pattern, so take its square root to obtain the field
pattern used to build a custom antenna element.
fieldpattern = sqrt(gs);
bandwidth = 500e6;
customAntennaInf = helperATXBuildCustomAntenna(...
fieldpattern,freq,bandwidth,az,el);
figure;
pattern(customAntennaInf,freq);
Build 21 X 21 URA
Create a uniform rectangular array (URA) with the custom antenna element, which carries the scan
element pattern.
N = 441;
Nrow = sqrt(N);
Ncol = sqrt(N);
drow = ucdx;
dcol = ucdy;
myURA1 = phased.URA;
myURA1.Element = customAntennaInf;
myURA1.Size = [Nrow Ncol];
myURA1.ElementSpacing = [drow dcol];
Calculate the pattern in the elevation plane (specified by azimuth = 0 deg and also called the E-plane)
and azimuth plane (specified by elevation = 0 deg and called the H-plane) for the array built using
infinite array analysis.
azang_plot = -90:0.5:90;
elang_plot = -90:0.5:90;
% E-plane
Darray1_E = pattern(myURA1,freq,0,elang_plot);
Darray1_Enormlz = Darray1_E - max(Darray1_E);
% H-plane
Darray1_H = pattern(myURA1,freq,azang_plot,0);
Darray1_Hnormlz = Darray1_H - max(Darray1_H);
% Scan element pattern in both planes
DSEP1_E = pattern(customAntennaInf,freq,0,elang_plot);
DSEP1_Enormlz = DSEP1_E - max(DSEP1_E);
DSEP1_H = pattern(customAntennaInf,freq,azang_plot,0);
DSEP1_Hnormlz = DSEP1_H - max(DSEP1_H);
figure
subplot(211)
plot(elang_plot,Darray1_Enormlz,elang_plot,DSEP1_Enormlz,'LineWidth',2)
grid on
axis([min(elang_plot) max(elang_plot) -40 0]);
legend('Array Pattern, az = 0 deg','Element Pattern')
xlabel('Elevation (deg)')
ylabel('Directivity (dB)')
title('Normalized Directivity')
subplot(212)
plot(azang_plot,Darray1_Hnormlz,azang_plot,DSEP1_Hnormlz,'LineWidth',2)
grid on
axis([min(azang_plot) max(azang_plot) -40 0]);
legend('Array Pattern, el = 0 deg','Element Pattern')
xlabel('Azimuth (deg)')
ylabel('Directivity (dB)')
To understand the effect of the finite size of the array, we execute a full-wave analysis of a 21-by-21 dipole array backed by an infinite reflector. The full-wave array pattern slices in the E- and H-planes, as well as the embedded element pattern of the center element, are also calculated. This data is loaded from a MAT-file. The analysis took approximately 630 seconds on a 2.4 GHz machine with 32 GB of memory.
Load Full Wave Data and Build Custom Antenna

Load the finite array analysis data, and use the embedded element pattern to build a custom antenna element. Note that the pattern from the full-wave analysis needs to be rotated by 90 degrees so that it lines up with the URA model built in the YZ plane.
load atexInfArrayDipoleRefArray
elemfieldpatternfinite = sqrt(FiniteArrayPatData.ElemPat);
arraypatternfinite = FiniteArrayPatData.ArrayPat;
bandwidth = 500e6;
customAntennaFinite = helperATXBuildCustomAntenna(...
elemfieldpatternfinite,freq,bandwidth,az,el);
figure
pattern(customAntennaFinite,freq)
Create Uniform Rectangular Array with Embedded Element Pattern

As done before, create a uniform rectangular array with the custom antenna element.
myURA2 = phased.URA;
myURA2.Element = customAntennaFinite;
myURA2.Size = [Nrow Ncol];
myURA2.ElementSpacing = [drow dcol];
E- and H-Plane Slices - Array with Embedded Element Pattern

Calculate the pattern slices in the two orthogonal planes, E and H, for the array with the embedded element pattern, and for the embedded element pattern itself. In addition, since full-wave data for the array pattern is also available, use it to compare results.
% E-plane
Darray2_E = pattern(myURA2,freq,0,elang_plot);
Darray2_Enormlz = Darray2_E - max(Darray2_E);
% H-plane
Darray2_H = pattern(myURA2,freq,azang_plot,0);
Darray2_Hnormlz = Darray2_H - max(Darray2_H);
DSEP2_E = pattern(customAntennaFinite,freq,0,elang_plot);
DSEP2_Enormlz = DSEP2_E - max(DSEP2_E);
DSEP2_H = pattern(customAntennaFinite,freq,azang_plot,0);
DSEP2_Hnormlz = DSEP2_H - max(DSEP2_H);
azang_plot1 = -90:2:90;
elang_plot1 = -90:2:90;
Darray3_E = FiniteArrayPatData.EPlane;
Darray3_Enormlz = Darray3_E - max(Darray3_E);
Darray3_H = FiniteArrayPatData.HPlane;
Darray3_Hnormlz = Darray3_H - max(Darray3_H);
Comparison of Array Patterns

The array patterns in the two orthogonal planes are plotted here.
figure
subplot(211)
plot(elang_plot,Darray1_Enormlz,elang_plot,Darray2_Enormlz,...
elang_plot1,Darray3_Enormlz,'LineWidth',2)
grid on
axis([min(elang_plot) max(elang_plot) -40 0]);
legend('Infinite','Finite','Finite Full wave','location','best')
xlabel('Elevation (deg)')
ylabel('Directivity (dB)')
title('E-plane (az=0 deg) Normalized Array Directivity')
subplot(212)
plot(azang_plot,Darray1_Hnormlz,azang_plot,Darray2_Hnormlz,...
azang_plot1,Darray3_Hnormlz,'LineWidth',2)
grid on
axis([min(azang_plot) max(azang_plot) -40 0]);
legend('Infinite','Finite','Finite Full wave','location','best')
xlabel('Azimuth (deg)')
ylabel('Directivity (dB)')
title('H-Plane (el = 0 deg) Normalized Array Directivity')
The pattern plots in the two planes reveal that all three analysis approaches suggest similar behavior out to +/-40 degrees from boresight. Beyond this range, using the scan element pattern for all elements of the URA appears to underestimate the sidelobe level compared with the full-wave analysis of the finite array. One possible reason is the edge effect of the finite-size array.
Comparison of Element Patterns

The element patterns from the infinite array analysis and the finite array analysis are compared here.
figure
subplot(211)
plot(elang_plot,DSEP1_Enormlz,elang_plot,DSEP2_Enormlz,'LineWidth',2)
grid on
axis([min(elang_plot) max(elang_plot) -40 0]);
legend('Infinite','Finite','location','best')
xlabel('Elevation (deg)')
ylabel('Directivity (dB)')
title('E-plane (az=0 deg) Normalized Element Directivity')
subplot(212)
plot(azang_plot,DSEP1_Hnormlz,azang_plot,DSEP2_Hnormlz,'LineWidth',2)
grid on
axis([min(azang_plot) max(azang_plot) -40 0]);
legend('Infinite','Finite','location','best')
xlabel('Azimuth (deg)')
ylabel('Directivity (dB)')
title('H-Plane (el = 0 deg) Normalized Element Directivity')
Scan the array based on the infinite array scan element pattern in the elevation plane defined by
azimuth = 0 deg and plot the normalized directivity. Also, overlay the normalized scan element
pattern.
helperATXScanURA(myURA1,freq,azang_plot,elang_plot,...
DSEP1_Enormlz,DSEP1_Hnormlz);
Note the overall shape of the normalized array pattern approximately follows the normalized scan
element pattern. This is also predicted by the pattern multiplication principle.
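The pattern multiplication principle states that the array pattern is approximately the product of the element pattern and the array factor. A minimal numpy sketch, using an 8-element half-wavelength ULA and a cosine element pattern (both illustrative choices, not the example's array):

```python
import numpy as np

# Pattern multiplication: total pattern = element pattern * array factor.
# Sketch for an N-element ULA with half-wavelength spacing at broadside.
N = 8
theta = np.linspace(-np.pi / 2, np.pi / 2, 361)  # angle off broadside, rad
psi = np.pi * np.sin(theta)                      # inter-element phase (d = lambda/2)

# Array factor: magnitude of the summed element phasors, normalized to 1
n = np.arange(N)
AF = np.abs(np.exp(1j * np.outer(psi, n)).sum(axis=1)) / N

element = np.cos(theta)                          # a cosine element pattern
total = element * AF                             # pattern multiplication
```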
Conclusion

Infinite array analysis is one of the tools used to analyze and design large finite arrays. The analysis assumes that all elements are identical, have uniform excitation amplitude, and that edge effects can be ignored. The isolated element pattern is replaced with the scan element pattern, which includes the effect of mutual coupling.
References

[1] J. Allen, "Gain and impedance variation in scanned dipole arrays," IRE Transactions on Antennas and Propagation, vol. 10, no. 5, pp. 566-572, September 1962.

[2] R. C. Hansen, Phased Array Antennas, Chapters 7 and 8, John Wiley & Sons, 2nd Edition, 1998.
Amplitude Perturbation

This section shows how to add gain, or amplitude, perturbations to a uniform linear array (ULA) of 10 elements. Consider the perturbations to be statistically independent, zero-mean Gaussian random variables with a standard deviation of 0.1.
N = 10;
ula = phased.ULA(N);
Create amplitude or gain perturbations by generating normally distributed random numbers with
mean of 1.
rs = rng(7);
pertStDev = 0.1;
perturbedULA = phased.ULA(N);
perturbedULA.Taper = 1 + pertStDev*randn(1,N); % mean-1 Gaussian gains, std 0.1
% Overlay responses
c = 3e8; freq = c; % 300 MHz, so the default 0.5 m spacing is half a wavelength
subplot(2,1,1);
helperCompareResponses(perturbedULA,ula,'Amplitude Perturbation', ...
{'Perturbed','Ideal'});
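The gain-perturbation model, a mean-1 Gaussian taper with standard deviation 0.1, can be sanity-checked in a few lines of numpy (the array size and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
N = 10
pert_std = 0.1

# Gain perturbations: i.i.d. Gaussian, mean 1, standard deviation 0.1,
# applied as a real amplitude taper across the 10 ULA elements.
taper = 1 + pert_std * rng.standard_normal(N)

# With many draws the sample mean is close to 1 and the sample std to 0.1
many = 1 + pert_std * rng.standard_normal(100_000)
```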
Modeling Perturbations and Element Failures in a Sensor Array
Phase Perturbation

This section shows how to add phase perturbations to the ULA used in the previous section. Assume the perturbation distribution is the same as in the previous section.

Release the System object and set the tapers. The tapers have a magnitude of 1 and random phase shifts with zero mean.
release(perturbedULA);
perturbedULA.Taper = exp(1i*randn(1,N)*pertStDev);
% Overlay responses
subplot(2,1,1);
helperCompareResponses(perturbedULA,ula,'Phase Perturbation', ...
{'Perturbed','Ideal'});
Position Perturbation

This section shows how to perturb the positions of the ULA sensors along all three axes.

ulaPosPert = getElementPosition(ula) + pertStDev*randn(3,N); % assumed std (definition not shown above)
perturbedULA = phased.ConformalArray('ElementPosition',ulaPosPert,...
'ElementNormal',zeros(2,N));
% Overlay responses
clf;
helperCompareResponses(perturbedULA,phased.ULA(N), ...
'Position Perturbation', {'Perturbed','Ideal'});
viewArray(perturbedULA);
title('Element Positions');
Pattern Perturbation

This section replaces the isotropic antenna elements with elements that have perturbed patterns.

Here, map the 10 patterns in the cell array element (each entry a custom antenna element with a perturbed pattern) to the 10 sensors using the ElementIndices property.

perturbedULA = phased.HeterogeneousULA('ElementSet',element, ...
'ElementIndices',1:N);
% Show the perturbed pattern response next to the ideal isotropic pattern
subplot(2,2,3);
pattern(ula.Element,freq,'CoordinateSystem','polar','Type','power')
title('Isotropic pattern');
subplot(2,2,4);
pattern(element{1},freq,'CoordinateSystem','polar','Type','power')
title('Perturbed pattern');
Element Failures

This section models element failures on an 8-by-10 uniform rectangular array. Each element has a 10 percent probability of failing.

Failures can be modeled by setting the gain of the corresponding sensor to 0. Create a matrix in which each entry has a 10 percent probability of being 0 and use it as the taper.

ura = phased.URA([8 10]); % the 8-by-10 URA (creation not shown in the listing above)
ura.Taper = double(rand(8,10) > .1);
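The failure model is a Bernoulli mask on the taper. A quick numpy sketch confirming that roughly 10 percent of elements are zeroed on average:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each element fails (taper -> 0) independently with probability 0.1
p_fail = 0.1
taper = (rng.random((8, 10)) > p_fail).astype(float)

# Over many trials, about 10% of the 80 elements are zeroed
trials = rng.random((10_000, 8, 10)) > p_fail
frac_failed = 1 - trials.mean()
```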
Compare the response of the array with failed elements to the ideal array.
% Overlay responses
clf;
helperCompareResponses(ura,phased.URA([8 10]), ...
    'Element Failures',{'Failed','Ideal'}); % call reconstructed to match the earlier sections
Notice how deep nulls are hard to attain in the response of the array with failed elements.
viewArray(ura,'ShowTaper',true);
title('Failed Elements');
Summary

This example showed how to model different kinds of perturbations as well as element failures, and demonstrated the effect of each on the array response.
The FMCW antenna array is intended for a forward-looking radar system designed to detect and help prevent collisions. Therefore, a cosine antenna pattern is an appropriate choice for the initial design since it does not radiate any energy backward. Assume that the radar system operates at 77 GHz with a 700 MHz bandwidth.
fc = 77e9;
fmin = 73e9;
fmax = 80e9;
vp = physconst('lightspeed');
lambda = vp/fc;
cosineantenna = phased.CosineAntennaElement;
cosineantenna.FrequencyRange = [fmin fmax];
pattern(cosineantenna,fc)
Patch Antenna Array for FMCW Radar
The array itself needs to be mounted on or around the front bumper. The configuration investigated here is a 2-by-4 rectangular array, similar to the one mentioned in [1]. Such a design has a bigger aperture along the azimuth direction, thus providing better azimuth resolution.
Nrow = 2;
Ncol = 4;
fmcwCosineArray = phased.URA;
fmcwCosineArray.Element = cosineantenna;
fmcwCosineArray.Size = [Nrow Ncol];
fmcwCosineArray.ElementSpacing = [0.5*lambda 0.5*lambda];
pattern(fmcwCosineArray,fc)
The Antenna Toolbox has several antenna elements that can provide hemispherical coverage and resemble a cosine-shaped pattern. Choose a patch antenna element with typical radiator dimensions. The patch length is approximately half a wavelength at 77 GHz, and the width is 1.5 times the length to improve the bandwidth. The ground plane extends beyond the patch on each side, and the feed is offset from the center, along the patch length, by about a quarter of the length.
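The rule-of-thumb starting dimensions quoted above can be computed directly. A short Python sketch (these are textbook starting points, not the output of the toolbox design function):

```python
# Rule-of-thumb starting dimensions for the 77 GHz patch described above.
c = 299792458.0           # speed of light, m/s
fc = 77e9                 # design frequency, Hz

lam = c / fc              # free-space wavelength
length = lam / 2          # patch length ~ half a wavelength
width = 1.5 * length      # widened to improve bandwidth
feed_offset = length / 4  # feed offset from center, along the length
```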
patchElement = design(patchMicrostrip,fc);
Because the default patch antenna geometry has its maximum radiation directed toward zenith, rotate the patch antenna by 90 degrees about the y-axis so that the maximum now occurs along the x-axis.
patchElement.Tilt = 90;
patchElement.TiltAxis = [0 1 0];
Plot the pattern of the patch antenna at 77 GHz. The patch is a medium gain antenna with the peak
directivity around 6-9 dBi.
myFigure = gcf;
myFigure.Color = 'w';
pattern(patchElement,fc)
The patch is radiating in the correct mode with a pattern maximum at 0 degrees azimuth and 0
degrees elevation. Since the initial dimensions are approximations, it is important to verify the
antenna's input impedance behavior.
Numfreqs = 21;
freqsweep = unique([linspace(fmin,fmax,Numfreqs) fc]);
impedance(patchElement,freqsweep);
According to the figure, the patch antenna has its first resonance (parallel resonance) at 74 GHz. It is
a common practice to shift this resonance to 77 GHz by scaling the length of the patch.
act_resonance = 74e9;
lambda_act = vp/act_resonance;
scale = lambda/lambda_act;
patchElement.Length = scale*patchElement.Length;
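Scaling the length by the ratio of the target wavelength to the actual resonant wavelength shifts the resonance upward. A minimal sketch of the same arithmetic (the pre-scaling length is illustrative):

```python
c = 299792458.0
f_target = 77e9           # desired resonance
f_actual = 74e9           # resonance observed in the impedance sweep

# scale = lambda_target / lambda_actual = f_actual / f_target < 1,
# i.e. the patch is shortened to push the resonance up in frequency
scale = (c / f_target) / (c / f_actual)

length_old = 1.9e-3                  # illustrative pre-scaling length, m
length_new = scale * length_old
```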
Next, check the reflection coefficient of the patch antenna to confirm a good impedance match. It is typical to use the -10 dB level of the reflection coefficient as the threshold for determining the antenna bandwidth.
s = sparameters(patchElement,freqsweep);
rfplot(s,'m-.')
hold on
line(freqsweep/1e9,ones(1,numel(freqsweep))*-10,'LineWidth',1.5)
hold off
The deep minimum at 77 GHz indicates a good match to 50 ohms. The antenna bandwidth is slightly greater than 1 GHz. Thus, the usable frequency band is from 76.5 GHz to 77.5 GHz.
Finally, check whether the pattern at the edge frequencies of the band still meets the design. This is a good indication of whether the pattern behaves the same across the band. The patterns at 76.5 GHz and 77.6 GHz are shown below.
pattern(patchElement,76.5e9)
pattern(patchElement,77.6e9)
In general, it is good practice to check the pattern behavior over the frequency band of interest.

Next, create a uniform rectangular array (URA) with the patch antenna. The spacing is chosen to be half the wavelength at the upper frequency of the band (77.6 GHz).
fc2 = 77.6e9;
lambda_fc2 = vp/77.6e9;
fmcwPatchArray = phased.URA;
fmcwPatchArray.Element = patchElement;
fmcwPatchArray.Size = [Nrow Ncol];
fmcwPatchArray.ElementSpacing = [0.5*lambda_fc2 0.5*lambda_fc2];
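Choosing the spacing as half the wavelength at the highest in-band frequency keeps the electrical spacing below half a wavelength everywhere in the band, avoiding grating lobes. A quick check in Python:

```python
c = 299792458.0
f_high = 77.6e9                   # upper edge of the band

lam_high = c / f_high
spacing = 0.5 * lam_high          # element spacing, m

# At any lower in-band frequency the electrical spacing d/lambda
# is below 0.5, so no grating lobes appear in visible space.
d_over_lambda_at_fc = spacing / (c / 77e9)
```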
The following figure shows the pattern of the resulting patch antenna array. The pattern is computed
using a 5 degree separation in both azimuth and elevation.
az = -180:5:180;
el = -90:5:90;
clf
pattern(fmcwPatchArray,fc,az,el)
The plots below compare the pattern variation in two orthogonal planes for the patch antenna array and the cosine element array. Note that both arrays ignore mutual coupling effects.
patternAzimuth(fmcwPatchArray,fc)
hold on
patternAzimuth(fmcwCosineArray,fc)
p = polarpattern('gco');
p.LegendLabels = {'Patch','Cosine'};
clf
patternElevation(fmcwPatchArray,fc)
hold on
patternElevation(fmcwCosineArray,fc)
p = polarpattern('gco');
p.LegendLabels = {'Patch','Cosine'};
The figures show that both arrays have similar pattern behavior around the main beam in the
elevation plane (azimuth = 0 deg). The patch-element array has a significant backlobe as compared to
the cosine-element array.
Conclusions
This example starts the design of an antenna array for FMCW radar with an ideal cosine antenna and
then uses a patch antenna to form the real array. The example compares the patterns from the two
arrays to show the design tradeoff. From the comparison, it can be seen that using the isolated patch
element is a useful first step in understanding the effect that a realistic antenna element would have
on the array pattern.
However, analysis of realistic arrays must also consider mutual coupling effects. Since this is a small array (8 elements in a 2-by-4 configuration), the individual element patterns in the array environment could be distorted significantly. As a result, it is not possible to replace the isolated element pattern with an embedded element pattern, as is done in the “Modeling Mutual Coupling in Large Arrays Using Embedded Element Pattern” on page 17-38 example. A full-wave analysis must be performed to understand the effect of mutual coupling on the overall array performance.
Reference

[1] R. Kulke, et al., "24 GHz Radar Sensor Integrates Patch Antennas," EMPC 2005. http://empire.de/main/Empire/pdf/publications/2005/26-doc-empc2005.pdf
Phased Array Gallery
Linear Arrays
Linear antenna arrays can have uniform or nonuniform spacing between elements. The most common linear antenna array is the Uniform Linear Array (ULA).
A Minimum Redundancy Linear Array (MRLA) is an example of a nonuniformly spaced linear array. The MRLA minimizes the number of element pairs that have the same spatial correlation lag. It is possible to design a 4-element MRLA whose aperture is equivalent to that of a 7-element ULA.
N = 4; % Number of elements
pos = zeros(3,N);
pos(2,:) = [-1.5 -1 0.5 1.5]; % Aperture equivalent to 7-element ULA
mrla = phased.ConformalArray('ElementPosition',pos,...
'ElementNormal',zeros(2,N));
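The aperture-equivalence claim can be verified by enumerating the spatial correlation lags of the 4-element MRLA. A short Python check (positions in wavelengths, as in the code above):

```python
from itertools import combinations

# Element y-positions of the 4-element MRLA, in wavelengths
pos = [-1.5, -1.0, 0.5, 1.5]

# Spatial correlation lags: all pairwise separations, in half-wavelengths
lags = sorted({round(abs(a - b) / 0.5) for a, b in combinations(pos, 2)})

# The 4 elements produce every lag from 1 to 6 -- the same set a
# 7-element half-wavelength ULA provides, hence the equivalent aperture.
```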
Circular Arrays
Circular antenna arrays can also have uniform or nonuniform spacing between elements. Next is an
example of a Uniform Circular Array (UCA).
Multiple circular antenna arrays with the same number of elements and different radii form a
concentric circular array.
Planar Arrays

Planar antenna arrays can have a uniform grid (or lattice) and different boundary shapes. Next is an example of a Uniform Rectangular Array (URA) with a rectangular grid and rectangular boundary.
You can also model a planar antenna array with a circular boundary. The following code starts with a
URA and removes elements outside a circle.
viewArray(hexagonalPlanarArray,...
'Title','Hexagonal Planar Array with Rectangular Grid');
Triangular grids provide an efficient spatial sampling and are widely used in practice. Here again,
different boundary geometries can be applied. First is a rectangular array with a triangular lattice.
viewArray(rectArrayTriGrid,...
'Title','Rectangular Array with Triangular Grid');
viewArray(circularPlanarArrayTriGrid,...
'Title','Circular Planar Array with Triangular Grid');
viewArray(ellipticalPlanarArrayTriGrid,...
'Title','Elliptical Planar Array with Triangular Grid');
Thinned Arrays
You can also model planar antenna arrays with nonuniform grids. Next is an example of a thinned
antenna array.
viewArray(thinnedURA,'Title','Thinned Array');
Conformal Arrays

You can also model nonplanar arrays. In many applications, sensors must conform to the shape of the curved surface they are mounted on. Next is an example of an antenna array whose elements are uniformly distributed on a hemisphere.
R = 2; % Radius (m)
az = -90:10:90; % Azimuth angles
el = -80:10:80; % Elevation angles (excluding poles)
[az_grid, el_grid] = meshgrid(az,el);
poles = [0 0; -90 90]; % Add south and north poles
nDir = [poles [az_grid(:) el_grid(:)]']; % Element normal directions
N = size(nDir,2); % Number of elements
[x, y, z] = sph2cart(deg2rad(nDir(1,:)), deg2rad(nDir(2,:)),R*ones(1,N));
hemisphericalConformalArray = phased.ConformalArray(...
'ElementPosition',[x; y; z],'ElementNormal',nDir);
viewArray(hemisphericalConformalArray,...
'Title','Hemispherical Conformal Array');
view(90,0)
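The element placement above is just a spherical-to-Cartesian conversion. A numpy sketch of the same construction:

```python
import numpy as np

R = 2.0                                   # hemisphere radius, m
az = np.arange(-90, 91, 10)               # azimuth angles, deg
el = np.arange(-80, 81, 10)               # elevation angles (excluding poles)
azg, elg = np.meshgrid(az, el)

# Prepend the two poles, then convert (az, el, R) to Cartesian
poles = np.array([[0.0, -90.0], [0.0, 90.0]])          # (az, el) pairs
grid = np.column_stack([azg.ravel(), elg.ravel()])
ndir = np.vstack([poles, grid])                        # one normal per row

azr = np.deg2rad(ndir[:, 0])
elr = np.deg2rad(ndir[:, 1])
x = R * np.cos(elr) * np.cos(azr)
y = R * np.cos(elr) * np.sin(azr)
z = R * np.sin(elr)
```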
Subarrays
You can model and visualize subarrays. Next is an example of contiguous subarrays.
replicatedURA = phased.ReplicatedSubarray('Subarray',phased.URA(5),...
'Layout','Rectangular',...
'GridSize',[3 3],'GridSpacing','Auto');
You can lay out subarrays on a nonuniform grid. The next example models the failure of a T/R module
for one subarray.
Ns = 6; % Number of subarrays
posc = zeros(3,Ns);
posc(2,:) = -5:2.5:7.5; % Subarray phase centers
posc(:,3) = []; % Take out 3rd subarray to model T/R failure
defectiveSubarray = phased.ReplicatedSubarray(...
'Subarray',phased.URA([25 5]),...
'Layout','Custom',...
'SubarrayPosition',posc, ...
'SubarrayNormal',zeros(2,Ns-1));
viewArray(defectiveSubarray,'Title','Defective Subarray');
view(90,0)
viewArray(overlappedSubarray,'Title','Overlapped Subarrays');
set(gca,'CameraViewAngle',4.65);
In certain space-constrained applications, such as on satellites, multiple antenna arrays must share
the same space. Groups of elements are interleaved, interlaced or interspersed. The next example
models interleaved, non-overlapped subarrays.
N = 20;
idx = reshape(randperm(N*N),N,N);
sel = zeros(N,N*N);
for i =1:N,
sel(i,idx(i,:)) = 1;
end
interleavedArray = phased.PartitionedArray('Array',phased.URA(N),...
'SubarraySelection',sel);
viewArray(interleavedArray,'Title','Interleaved Arrays');
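The selection matrix construction can be checked quickly: each subarray gets N distinct elements, and every element belongs to exactly one subarray. A numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20

# Randomly permute the N*N element indices, then give each of the N
# interleaved subarrays one row of N distinct element indices.
idx = rng.permutation(N * N).reshape(N, N)
sel = np.zeros((N, N * N))
for i in range(N):
    sel[i, idx[i]] = 1
```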
Another type of nonplanar antenna array is an array with multiple planar faces. The next example
shows uniform hexagonal arrays arranged as subarrays on a sphere.
R = 9; % Radius (m)
az = unigrid(-180,60,180,'[)'); % Azimuth angles
el = unigrid(-30,60,30); % Elevation angles (excluding poles)
[az_grid, el_grid] = meshgrid(az,el);
poles = [0 0; -90 90]; % Add south and north poles
nDir = [poles [az_grid(:) el_grid(:)]']; % Subarray normal directions
N = size(nDir,2); % Number of subarrays
[x, y, z] = sph2cart(deg2rad(nDir(1,:)), deg2rad(nDir(2,:)),R*ones(1,N));
sphericalHexagonalSubarray = phased.ReplicatedSubarray('Subarray',uha,... % uha: uniform hexagonal array built earlier
'Layout','Custom',...
'SubarrayPosition',[x; y; z], ...
'SubarrayNormal',nDir);
viewArray(sphericalHexagonalSubarray,...
'Title','Hexagonal Subarrays on a Sphere');
view(30,0)
You can also view the array from a different angle and interactively rotate it in 3-D.
view(0,90)
rotate3d on
Using Pilot Calibration to Compensate For Array Uncertainties
Introduction
In principle, one can easily design an ideal uniform linear array (ULA) to perform array processing
tasks such as beamforming or direction of arrival estimation. In practice, there is no such thing as an
ideal array. For example, there will always be some inevitable manufacturing tolerances among
different elements within the array. Since in general it is impossible to obtain exact knowledge about
those variations, they are often referred to as uncertainties or perturbations. Commonly observed
uncertainties include element gain and element phase uncertainties (electrical uncertainties) as well
as element location uncertainties (geometrical uncertainties).
The presence of uncertainties in an array system causes rapid degradation in the detection,
resolution, and estimation performance of array processing algorithms. Therefore it is critical to
calibrate the array before its deployment. In addition to the aforementioned factors, uncertainties can
also arise due to other factors such as hardware aging and environmental effects. Calibration is
therefore also performed on a regular basis in all deployed systems.
There are many array calibration algorithms. This example focuses on the pilot calibration approach
[1], where the uncertainties are estimated from the response of the array to one or more known
external sources at known locations. The example compares the effect of uncertainties on the array
performance before and after the calibration.
Consider an ideal 6-element ULA along the y-axis with half-wavelength spacing and uniform tapering. For a ULA, the expected element positions and tapers can be computed.
N = 6;
designed_pos = [zeros(1,N);(0:N-1)*0.5;zeros(1,N)];
designed_taper = ones(N,1);
Next, model the perturbations that may exist in a real array. These are usually modeled as random
variables. For example, assume that the taper's magnitude and phase are perturbed by normally
distributed random variables with standard deviations of 0.1 and 0.05, respectively.
rng(2014);
taper = (designed_taper + 0.1*randn(N,1)).*exp(1i*0.05*randn(N,1));
The following figure shows the difference between the magnitude and phase of the perturbed taper
and the designed taper.
helperCompareArrayProperties('Taper',taper,designed_taper,...
{'Perturbed Array','Designed Array'});
Perturbations in the sensor locations in the x, y, and z directions are generated similarly, with a standard deviation of 0.05.

pos = designed_pos + 0.05*randn(3,N);

The figure below shows the element positions of the perturbed array and the ideal array.
helperCompareArrayProperties('Position',pos,designed_pos,...
{'Perturbed Array','Designed Array'});
The previous section shows the difference between the designed, ideal array and the real, perturbed array. Because of these errors, if one blindly applies processing steps, such as beamforming weights, computed for the designed array to the perturbed array, the performance degrades significantly.
Consider the case of an LCMV beamformer designed to steer the ideal array to a direction of 10
degrees azimuth with two interferences from two known directions of -10 degrees azimuth and 60
degrees azimuth. The goal is to preserve the signal of interest while suppressing the interferences.
If precise knowledge of the array's taper and geometry were available, the beamforming weights could be computed as follows:
% Generate 10K samples from target and interferences with 30 dB SNR
az = [-10 10 60];
Nsamp = 1e4;
ncov = db2pow(-30);
[~,~,rx_cov] = sensorsig(pos,Nsamp,az,ncov,'Taper',taper);
% Weights using the true (perturbed) positions and taper
w = lcmvweights(bsxfun(@times,taper,steervec(pos,az)),[0;1;0],rx_cov);
However, since the array contains unknown perturbations, beamforming weights must be computed
based on the positions and taper of the designed array.
designed_sv = steervec(designed_pos,az);
designed_w = lcmvweights(bsxfun(@times,designed_taper,designed_sv),...
[0;1;0],rx_cov);
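The lcmvweights computation implements the closed-form LCMV solution w = R^-1 C (C^H R^-1 C)^-1 f. A numpy sketch with a synthetic covariance matrix standing in for rx_cov (the steering-vector convention and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6                                     # elements, half-wavelength ULA
az = np.deg2rad([-10, 10, 60])            # interference, target, interference

# Steering vectors (columns) for a half-wavelength-spaced ULA
n = np.arange(N)[:, None]
C = np.exp(1j * np.pi * n * np.sin(az))   # N x 3 constraint matrix
f = np.array([0.0, 1.0, 0.0])             # null, unity, null responses

# Synthetic Hermitian positive-definite covariance standing in for rx_cov
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
R = A @ A.conj().T + N * np.eye(N)

# LCMV: w = R^-1 C (C^H R^-1 C)^-1 f
Ri_C = np.linalg.solve(R, C)
w = Ri_C @ np.linalg.solve(C.conj().T @ Ri_C, f)
```

By construction, C.conj().T @ w equals f: the beamformer holds unit gain toward the target while nulling the two interference directions.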
The following figure compares the expected beam pattern with the pattern that results from applying the designed weights to the perturbed array.
helperCompareBeamPattern(pos,taper,w,designed_w,-90:90,az,...
{'Expected Pattern','Uncalibrated Pattern'},...
'Beam Pattern Before Calibration');
From the plotted patterns, it is clear that the pattern resulting from the uncalibrated weights does not satisfy the requirements. It places a null near the desired 10 degrees azimuth direction, which means the desired signal can no longer be retrieved. Fortunately, array calibration can help bring the pattern back to order.
Pilot Calibration
There are many algorithms available to perform array calibration. One commonly used class is pilot calibration. The algorithm sets up several sources at known directions and then uses the array to receive the signals from those transmitters. Because the transmitters are at known directions, the expected received signal of the ideal array can be computed. By comparing it with the actual received signal, it is possible to derive the differences due to the uncertainties and correct them.
The code below shows the calibration process. First, the pilot sources need to be placed at different directions. Note that the number of pilot sources determines how many uncertainties the algorithm can correct. In this example, to correct both the sensor location uncertainties and the taper uncertainty, a minimum of four external sources is required. Using more sources improves the estimation.
The four pilot sources are located at the following azimuth and elevation angle pairs: (-60, -10), (-5, 0), (5, 0), and (40, 30). The received signal from these pilots can be simulated as follows:

pilot_ang = [-60 -5 5 40; -10 0 0 30];
for m = size(pilot_ang,2):-1:1
calib_sig(:,:,m) = sensorsig(pos,Nsamp,pilot_ang(:,m),...
ncov,'Taper',taper);
end
Using the received signal from the pilots at the array, together with the element positions and tapers
of the designed array, the calibration algorithm [1] estimates the element positions and tapers for the
perturbed array.
[est_pos,est_taper] = pilotcalib(designed_pos,...
calib_sig,pilot_ang,designed_taper);
Once the estimated positions and taper are available, these can be used in place of the designed
array parameters when calculating beamformer weights. This results in the array pattern
represented by the red line below.
corrected_w = lcmvweights(bsxfun(@times,est_taper,...
steervec(est_pos,az)),[0;1;0],rx_cov);
helperCompareBeamPattern(pos,taper,...
w,corrected_w,-90:90,az,...
{'Expected Pattern','Calibrated Pattern'},...
'Beam Pattern After Calibration');
The figure above shows that the pattern resulting from the calibrated array is much better than the
one from the uncalibrated array. In particular, signals from the desired direction are now preserved.
Summary
This example shows how uncertainties in an array can affect its response pattern and, in turn, degrade the array's performance. It also illustrates how pilot calibration can be used to help restore that performance.
References
[1] N. Fistas and A. Manikas, "A New General Global Array Calibration Method", IEEE Proceedings of
ICASSP, Vol. IV, pp. 73-76, April 1994.
Using Self Calibration to Accommodate Array Uncertainties
Introduction
In theory, one can design a perfect uniform linear array (ULA) to perform all sorts of processing such
as beamforming or direction of arrival estimation. Typically this array will be calibrated in a
controlled environment before being deployed. However, uncertainties may arise in the system during
operation indicating that the array needs recalibrating. For instance, environmental effects may
cause array element positions to become perturbed, introducing array shape uncertainties. The
presence of these uncertainties causes rapid degradation in the detection, resolution and estimation
performance of array processing algorithms. It is therefore critical to remove these array
uncertainties as soon as possible.
There are many array calibration algorithms. This example focuses on one class of them, self
calibration (also called auto-calibration), where uncertainties are estimated jointly with the positions
of a number of external sources at unknown locations [1]. Unlike pilot calibration, this allows an
array to be re-calibrated in a less known environment. However, in general, this results in a small
number of signal observations with a large number of unknowns. There are a number of approaches
to solving this problem as described in [2]. One is to construct and optimize against a cost function.
These cost functions tend to be highly non-linear and contain local minima. In this example, a cost
function based on the Multiple Signal Classification (MUSIC) algorithm [3] is formed and solved as an
fmincon optimization problem using Optimization Toolbox (TM). In the literature, many other
combinations also exist [2].
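The MUSIC cost function scans steering vectors against the noise subspace of the sample covariance. A self-contained numpy sketch of the MUSIC pseudo-spectrum for two sources (the scenario values are illustrative, not the example's):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 5, 2, 2000                  # elements, sources, snapshots
true_az = np.deg2rad([-20, 40])       # true directions of arrival

# Simulate snapshots: X = A S + noise, half-wavelength ULA steering vectors
n = np.arange(N)[:, None]
A = np.exp(1j * np.pi * n * np.sin(true_az))
S = rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))
X = A @ S + 0.03 * (rng.standard_normal((N, K)) + 1j * rng.standard_normal((N, K)))

# Sample covariance and its noise subspace (N - M smallest eigenvectors)
Rxx = X @ X.conj().T / K
w_eig, V = np.linalg.eigh(Rxx)        # eigenvalues ascending
En = V[:, : N - M]                    # noise subspace

# MUSIC pseudo-spectrum: 1 / ||En^H a(theta)||^2, peaks at the true DOAs
scan = np.deg2rad(np.arange(-90, 91))
Asc = np.exp(1j * np.pi * n * np.sin(scan))
P = 1 / np.linalg.norm(En.conj().T @ Asc, axis=0) ** 2
```

In a self-calibration loop, the array geometry parameters would also be varied and this spectrum (or a cost derived from it) optimized jointly with the source directions.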
A Perfect Array
Consider first a 5-element ULA with half-wavelength spacing. In such an array, the element positions can be readily computed.
N = 5;
designed_pos = [zeros(1,N);-(N-1)/2:(N-1)/2;zeros(1,N)]*0.5;
Next, assume the array is perturbed while in operation and so undergoes array shape uncertainties in the x and y dimensions. In order to fix the global axes, assume that the position of the first sensor and the direction to the second sensor are known, as prescribed in [4].
rng default
pos_std = 0.02;
perturbed_pos = designed_pos + pos_std*[randn(2,N);zeros(1,N)];
perturbed_pos(:,1) = designed_pos(:,1); % The reference sensor has no
% uncertainties
perturbed_pos(1,2) = designed_pos(1,2); % The x axis is fixed down by
% assuming the x-location of
% another sensor is known
The figure below shows the difference between the deployed and the perturbed array.
17 Featured Examples
helperCompareArrayProperties('Position',perturbed_pos,designed_pos,...
{'Perturbed Array','Deployed Array'});
view(90,90);
The previous section shows the difference between the deployed array and an array which has
undergone perturbations while in operation. If one blindly uses the processing designed for the
deployed array, the performance of the array degrades. For example, suppose a beamscan estimator is
used to estimate the directions of 3 unknown sources at -20, 40, and 85 degrees azimuth.
% Generate 100K samples with 30dB SNR
ncov = db2pow(-30);
Nsamp = 1e5; % Number of snapshots (samples)
incoming_az = [-20,40,85]; % Unknown source locations to be estimated
M = length(incoming_az);
[x_pert,~,Rxx] = sensorsig(perturbed_pos,Nsamp,incoming_az,ncov);
incoming_az

incoming_az = 1×3

   -20    40    85

estimated_az

estimated_az = 1×3

   -19    48    75
These uncertainties degrade the array performance. Self calibration can allow the array to be re-
calibrated using sources of opportunity, without needing to know their locations.
Self Calibration
A number of self calibration approaches are based on optimizing a cost function to jointly estimate
unknown array and source parameters (such as array sensor and source locations). The cost function
and optimization algorithm must be carefully chosen to encourage a global solution to be reached as
easily and quickly as possible. In addition, parameters associated with the optimization algorithm
must be tuned for the given scenario. A number of combinations of cost function and optimization
algorithm exist in the literature. For this example scenario, a MUSIC cost function [3] is chosen
alongside an fmincon optimization algorithm. As the scenario changes, it may be appropriate to adapt
the approach used depending upon the robustness of the calibration algorithm. For instance, in this
example, the performance of the calibration algorithm drops as sources move away from end-fire or
as the number of array elements increases. The source direction estimates obtained previously are
used to initialize the optimization procedure.
fun = @(x_in)helperMUSICIteration(x_in,Rxx,designed_pos);
nvars = 2*N - 3 + M; % Assuming 2D uncertainties
x0 = [0.1*randn(1,nvars-M),estimated_az]; % Initial value
locTol = 0.1; % Location tolerance
angTol = 20; % Angle tolerance
lb = [-locTol*ones(nvars-M,1);estimated_az.'-angTol]; % lower bound
ub = [locTol*ones(nvars-M,1);estimated_az.'+angTol]; % upper bound
options = optimoptions('fmincon','TolCon',1e-6,'DerivativeCheck','on',...
'Display','off');
[x,fval,exitflag] = fmincon(fun,x0,[],[],[],[],lb,ub,[],options);
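The structure of the MUSIC-based cost evaluated inside the helper can be sketched stand-alone in NumPy (an illustrative sketch under assumed conventions — positions in wavelengths, sources in the azimuth plane — not the toolbox implementation):

```python
import numpy as np

def music_cost(pos_xy, Rxx, az_deg):
    """MUSIC null-spectrum cost: total projection of the candidate steering
    vectors onto the noise subspace of Rxx (zero at a perfect fit)."""
    M = len(az_deg)                               # number of sources
    az = np.deg2rad(np.asarray(az_deg, float))
    # Steering matrix for 2-D sensor positions (in wavelengths), elevation 0
    A = np.exp(2j*np.pi*(np.outer(pos_xy[0], np.cos(az)) +
                         np.outer(pos_xy[1], np.sin(az))))
    w, V = np.linalg.eigh(Rxx)                    # ascending eigenvalues
    En = V[:, :Rxx.shape[0] - M]                  # noise subspace
    return float(np.sum(np.abs(En.conj().T @ A)**2))
```

Minimizing such a cost jointly over the sensor-position offsets and the source angles, subject to bound constraints, mirrors the fmincon problem set up above: the cost vanishes when both the geometry and the angles match the data.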
helperCompareArrayProperties('Position',perturbed_pos,perturbed_pos_est,...
{'Perturbed Array','Calibrated Array'});
view(90,90);
polarplot(deg2rad(incoming_az),[1 1 1],'s',...
deg2rad(postcal_estimated_az(1,:)),[1 1 1],'+',...
deg2rad(estimated_az),[1 1 1],'o','LineWidth',2,'MarkerSize',10)
legend('True directions','DOA after calibration',...
'DOA before calibration','Location',[0.01 0.02 0.3 0.13])
rlim([0 1.3])
Using Self Calibration to Accommodate Array Uncertainties
By performing this calibration process, the accuracy of the source direction estimates improves
significantly. In addition, the positions of the perturbed sensors have also been estimated and can
be used as the new array geometry in the future.
Summary
This example shows how array shape uncertainties can impact the ability to estimate the direction of
arrival of unknown sources. The example also illustrates how self calibration can be used to overcome
the effects of these perturbations and estimate these uncertainties simultaneously.
References
[2] Tuncer, E., and B. Friedlander. Classical and Modern Direction-of-Arrival Estimation. Elsevier, 2009.
[3] Schmidt, R. O. "Multiple Emitter Location and Signal Parameter Estimation." IEEE Transactions
on Antennas and Propagation. Vol. AP-34, March 1986, pp. 276-280.
[4] Rockah, Y., and P. M. Schultheiss. "Array Shape Calibration Using Sources in Unknown Locations -
Part I: Far-Field Sources." IEEE Transactions on Acoustics, Speech, and Signal Processing. Vol. 35,
1987, pp. 286-299.
Introduction
Phased array antennas provide many benefits over traditional dish antennas. The elements of phased
array antennas are easier to manufacture; the entire system suffers less from component failures;
and, best of all, the array can be electronically scanned toward different directions.
However, such flexibility does not come for free. Taking full advantage of a phased array requires
placing steering circuitry and T/R switches behind each individual element. For applications that
require large arrays with thousands or tens of thousands of elements, the cost of doing so is too high
to be practical. In addition, many such applications do not require the full degrees of freedom of
the array to achieve the desired performance. Hence, in practice, deployed systems often use a
compromise approach: elements are grouped into subarrays, and the subarrays then form the entire
array. The elements are still easy to manufacture; the entire array is still robust to component
failures; and T/R switches are needed only at each subarray, significantly reducing the cost.
The following sections show how to model a subarray network with different configurations for two
specific applications: limited field of view (LFOV) arrays and wideband arrays.
LFOV arrays are commonly used in satellite applications. As the name suggests, an LFOV array only
scans within a very limited window, normally less than 10 degrees. Because of that, it is possible to
use subarrays and such subarrays can be placed at a spacing much larger than half of the
wavelength.
The simplest way to construct an array with subarrays is to tile the subarrays contiguously. The
following code snippet constructs a 64-element ULA consisting of eight 8-element ULAs. Within each
subarray, the elements are spaced by half the wavelength. Note that there is no steering capability
inside each subarray, so the array can be steered only at the subarray level.
fc = 3e8;
c = 3e8;
antenna = phased.IsotropicAntennaElement('BackBaffled',true);
N = 64;
Nsubarray = 8;
subula = phased.ULA(N/Nsubarray,0.5*c/fc,'Element',antenna);
replarray = phased.ReplicatedSubarray('Subarray',subula,...
'GridSize',[1 Nsubarray])
replarray =
phased.ReplicatedSubarray with properties:
Subarrays in Phased Array Antennas
Next, compare the radiation pattern of this array to the radiation pattern of a 64-element ULA with
no subarrays.
refula = phased.ULA(N,0.5*c/fc,'Element',antenna);
subplot(2,1,1), pattern(replarray,fc,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c);
title('Subarrayed ULA Azimuth Cut'); axis([-90 90 -50 0]);
subplot(2,1,2), pattern(refula,fc,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c);
title('ULA Azimuth Cut'); axis([-90 90 -50 0]);
From the plot, it is clear that the two responses are identical at broadside. Note that even though the
subarrays are widely spaced, there is no grating lobe in the response.
steeringvec_refula = phased.SteeringVector('SensorArray',refula,...
'PropagationSpeed',c);
wref = steeringvec_refula(fc,steerang);
subplot(2,1,1), pattern(replarray,fc,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c,'Weights',w);
title('Subarrayed ULA Azimuth Cut'); axis([-90 90 -50 0]);
subplot(2,1,2), pattern(refula,fc,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c,'Weights',wref);
title('ULA Azimuth Cut'); axis([-90 90 -50 0]);
In this case, the response of the reference array still retains its original shape, but this is not the case
for the subarrayed ULA. For the subarrayed ULA, although the mainlobe is correctly steered and
stands well above the sidelobes, the response clearly shows what is often referred to as quantization
lobes. The name comes from the fact that the steering is at the subarray level; hence, the required
phase shift for each element is quantized at the subarray level. This effect gets worse as the array
is steered farther from broadside. The following plots show the response after steering the arrays
toward 6 degrees off broadside.
steerang = 6;
w = steeringvec_replarray(fc,steerang);
wref = steeringvec_refula(fc,steerang);
subplot(2,1,1), pattern(replarray,fc,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c,'Weights',w);
title('Subarrayed ULA Azimuth Cut'); axis([-90 90 -50 0]);
subplot(2,1,2), pattern(refula,fc,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c,'Weights',wref);
title('ULA Azimuth Cut'); axis([-90 90 -50 0]);
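To see where the quantization lobes come from, the subarray-level steering can be reproduced with a plain array-factor computation (an illustrative NumPy sketch with the same element counts, not the toolbox pattern code): each element is given the phase of its subarray center, so the ideal linear phase ramp is approximated by a staircase.

```python
import numpy as np

N, Nsub, d = 64, 8, 0.5            # elements, subarray size, spacing (wavelengths)
n = np.arange(N)
u0 = np.sin(np.deg2rad(6))         # steer 6 degrees off broadside

phase_el  = -2*np.pi*d*n*u0                      # ideal per-element phase ramp
centers   = (n // Nsub)*Nsub + (Nsub - 1)/2      # subarray phase centers
phase_sub = -2*np.pi*d*centers*u0                # staircase (subarray-level) phases

az_deg = np.linspace(-90, 90, 3601)
af = np.exp(2j*np.pi*d*np.outer(np.sin(np.deg2rad(az_deg)), n))
pat_el  = np.abs(af @ np.exp(1j*phase_el))/N     # per-element steering
pat_sub = np.abs(af @ np.exp(1j*phase_sub))/N    # subarray-level steering
```

The staircase phase error repeats every Nsub elements, so grating-type quantization lobes appear near sin(az) = u0 ± k/(Nsub·d); with these numbers the largest one sits several dB above the ordinary sidelobes of the fully steered ULA.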
Therefore, when forming an LFOV array, one needs to be cautious about using contiguous subarrays.
One way to compensate for quantization lobes is to add phase shifters behind each element. Although
this increases the cost, it still provides significant savings compared to an array with full degrees
of freedom because the T/R switches, which are the most expensive parts, are needed only at the
subarray level. If there is a phase shifter behind each element, the response becomes much better,
as shown in the following plots, assuming the phase shifters are configured to point each subarray
toward 6 degrees off broadside.
release(replarray);
replarray.SubarraySteering = 'Phase';
replarray.PhaseShifterFrequency = fc;
subplot(2,1,1);
pattern(replarray,fc,-180:180,0,'Type','powerdb','Weights',w,...
'CoordinateSystem','rectangular','PropagationSpeed',c,'SteerAngle',6);
title('Subarrayed ULA Azimuth Cut'); axis([-90 90 -50 0]);
subplot(2,1,2);
pattern(refula,fc,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c,'Weights',wref);
title('ULA Azimuth Cut'); axis([-90 90 -50 0]);
As a side note, the elements and the subarrays do not necessarily steer toward the same direction. In
some applications, the elements inside the subarrays are steered toward a specific direction, and the
subarrays can then be steered to slightly different directions to search the vicinity.
Although an electronically scanned array is often called a phased array, in reality, adjusting the phase
is only one way to steer the array. Phase shifters are, by nature, narrowband devices, so they only
work well within a narrow band, especially for large arrays. The following figure shows the radiation
patterns when the reference array is phase steered to 30 degrees, both at the carrier frequency and at
3 percent above the carrier frequency.
fsteer = [1 1.03]*fc;
steerang = 30;
release(steeringvec_refula);
wref = squeeze(steeringvec_refula(fc,steerang));
subplot(2,1,1)
pattern(refula,fsteer,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c,'Weights',wref);
title('ULA Azimuth Cut'); axis([-90 90 -50 0]);
subplot(2,1,2)
pattern(refula,fsteer,-180:180,0,'Type','powerdb',...
'CoordinateSystem','rectangular','PropagationSpeed',c,'Weights',wref);
title('ULA Azimuth Cut, Peak Zoom View'); axis([25 35 -5 0]);
It is obvious from the figure that even though the frequency offset is a mere 3 percent, the peak
location moves away from the desired direction. This is referred to as the squint effect. Thus, to
achieve steering across a wide band, one needs to steer using true time delays.
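The squint can be reproduced in a few lines of NumPy (a hedged stand-alone sketch with assumed numbers, not the toolbox code): steering weights frozen at the carrier are evaluated at a frequency 3 percent higher, and compared with true-time-delay weights whose phase scales with frequency.

```python
import numpy as np

c = 3e8; fc = 3e8
d = 0.5*c/fc                          # half-wavelength spacing in meters
N = 64; n = np.arange(N)
tau = d*n*np.sin(np.deg2rad(30))/c    # per-element true time delays for 30 deg

def peak_az(f, w):
    """Azimuth of the pattern peak for weights w evaluated at frequency f."""
    az = np.linspace(-90, 90, 18001)
    a = np.exp(2j*np.pi*f*d*np.outer(np.sin(np.deg2rad(az)), n)/c)
    return az[np.argmax(np.abs(a @ w))]

f = 1.03*fc
w_phase = np.exp(-2j*np.pi*fc*tau)    # phase shifters fixed at the carrier
w_ttd   = np.exp(-2j*np.pi*f*tau)     # delay lines: phase tracks frequency
```

With phase steering the peak satisfies sin(peak) = sin(30°)/1.03, about 29 degrees, while the true-time-delay beam stays at 30 degrees at every frequency.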
The most popular way to achieve true time delay is to use cables. However, in a large array aperture
with thousands of elements, implementing the potentially long time delays can require many cables.
Hence, this approach is not only expensive but also cumbersome. Subarrays provide a compromise
between accuracy and feasibility: within each subarray, the steering is achieved by phase; among
subarrays, the steering is done by true time delays.
The simplest way to build such an array is to contiguously group the subarrays, as in previous
sections.
The following plots compare the radiation patterns at three frequencies for a subarrayed ULA. The
array is steered toward 30 degrees azimuth at the subarray level using true time delay. Again, within
each subarray, the elements are also steered toward 30 degrees azimuth. The radiation pattern is
shown at the carrier frequency, 10 percent above the carrier frequency, and 15 percent above the
carrier frequency.
steerang = 30;
fsteer = [1 1.1 1.15]*fc;
release(steeringvec_replarray);
release(steeringvec_refula);
w = squeeze(steeringvec_replarray(fsteer,steerang));
wref = squeeze(steeringvec_refula(fsteer,steerang));
subplot(2,1,1)
pattern(replarray,fsteer,-180:180,0,'Type','powerdb',...
'PropagationSpeed',c,'CoordinateSystem','rectangular','Weights',w,...
'SteerAngle',steerang);
title('Subarrayed ULA Azimuth Cut'); axis([-90 90 -50 0]);
subplot(2,1,2)
pattern(replarray,fsteer,-180:180,0,'Type','powerdb',...
'PropagationSpeed',c,'CoordinateSystem','rectangular','Weights',w,...
'SteerAngle',steerang);
title('Subarrayed ULA Azimuth Cut, Peak Zoom View'); axis([25 35 -5 0]);
The plots show that the squint effect has been suppressed even though the bandwidth is much wider
compared to the previous case. However, as in the LFOV case, if the required bandwidth extends to
15 percent above the carrier frequency, the radiation pattern becomes undesirable due to
quantization lobes.
One way to address this issue is to use a configuration with aperiodic subarrays. Examples of such
configurations are interlaced subarrays, overlapped subarrays, and even random subarrays. The next
example shows an interlaced subarray configuration, where the ends of adjacent subarrays are
interlaced and overlapped. Because the array is no longer formed from identical subarrays, one needs
to start with a large array aperture and partition it to achieve such a configuration.
partarray = ...
phased.PartitionedArray('Array',phased.ULA(N,0.5,'Element',antenna),...
'SubarraySteering','Phase');
sel = zeros(Nsubarray,N);
Nsec = N/Nsubarray;
for m = 1:Nsubarray
    if m == 1
        sel(m,(m-1)*Nsec+1:m*Nsec+1) = 1;
    elseif m == Nsubarray
        sel(m,(m-1)*Nsec:m*Nsec) = 1;
    else
        sel(m,(m-1)*Nsec:m*Nsec+1) = 1;
    end
end
partarray.SubarraySelection = sel
partarray =
phased.PartitionedArray with properties:
steeringvec_partarray = ...
phased.SteeringVector('SensorArray',partarray,'PropagationSpeed',c);
wwa = squeeze(steeringvec_partarray(fsteer,steerang));
subplot(2,1,1);
pattern(partarray,fsteer,-180:180,0,'Type','powerdb',...
'PropagationSpeed',c,'CoordinateSystem','rectangular','Weights',wwa,...
'SteerAngle',steerang);
title('Interlaced and Overlapped Subarrayed ULA Azimuth Cut');
axis([-90 90 -50 0]);
subplot(2,1,2);
pattern(replarray,fsteer,-180:180,0,'Type','powerdb',...
'PropagationSpeed',c,'CoordinateSystem','rectangular','Weights',w,...
'SteerAngle',steerang);
title('Contiguous Subarrayed ULA Azimuth Cut'); axis([-90 90 -50 0]);
The new radiation pattern suppresses the largest quantization lobe by around 5 dB. Higher
suppression can be achieved by designing a more sophisticated overlapped subarray network, but
that is outside the scope of this example.
Summary
This example shows how to model a phased array with subarrays and illustrates several practical
concerns when applying the subarray technique to applications such as LFOV arrays or wideband
scanning arrays.
Reference
[1] Robert Mailloux, Electronically Scanned Arrays, Morgan & Claypool, 2007.
Tapering, Thinning and Arrays with Different Sensor Patterns
ULA Tapering
This section shows how to apply a Taylor window on the elements of a uniform linear array (ULA) in
order to reduce the sidelobe levels.
Compare the response of the tapered to the untapered array. Notice how the sidelobes of the tapered
ULA are lower.
helperCompareResponses(taperedULA,ula, ...
'Ideal ULA versus Tapered ULA response', ...
{'Tapered','Not tapered'});
ULA Thinning
This section shows how to model thinning using tapering. When thinning, each element of the array
has a certain probability of being deactivated or removed. The taper values can be either 0, for an
inactive element, or 1 for an active element. Here, the probability of keeping the element is
proportional to the value of the Taylor window at that element.
% Get the previously computed taper values corresponding to a Taylor
% window
taper = taperedULA.Taper;
% Apply thinning: keep each element with probability proportional to the
% window value at that element
randvect = rand(size(taper));
thinningTaper = zeros(size(taper));
thinningTaper(randvect < taper/max(taper)) = 1;
thinnedULA = clone(ula);
thinnedULA.Taper = thinningTaper;
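The same Bernoulli thinning rule can be sketched stand-alone in NumPy (a hypothetical illustration; a Hann window stands in here for the Taylor window):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 32
win = 0.5 - 0.5*np.cos(2*np.pi*np.arange(N)/(N - 1))  # Hann stand-in window
keep_prob = win/win.max()          # keep probability proportional to the window
taper = (rng.random(N) < keep_prob).astype(float)     # 1 = active, 0 = removed
# Elements near the edges, where the window is small, are rarely kept, so on
# average the 0/1 taper mimics the sidelobe behavior of the continuous taper.
```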
The following plot shows how the thinning taper values are distributed. Notice that toward the edges,
where the window level goes down, the number of inactive sensors goes up.
plot(taper)
hold on
plot(thinningTaper,'o')
hold off
legend('Taylor window','Thinning taper')
title('Applied Thinning Taper');xlabel('Sensor Position');
ylabel('Taylor Window Values');
Compare the response of the thinned to the ideal array. Notice how the sidelobes of the thinned ULA
are lower.
helperCompareResponses(thinnedULA,ula, ...
'Ideal ULA versus Thinned ULA response', ...
{'Thinned','Not thinned'});
URA Tapering
This section shows how to apply a Taylor window along both dimensions of a 13 by 10 uniform
rectangular array (URA).
uraSize = [13,10];
heterogeneousURA = phased.URA(uraSize);
% Get the total taper values by multiplying the vectors of both dimensions
tap = twinz*twiny.';
viewArray(taperedURA,'Title','Tapered URA','ShowTaper',true);
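The line `tap = twinz*twiny.'` forms a separable 2-D taper as the outer product of two 1-D windows. The equivalent NumPy operation (with Hamming windows standing in for the Taylor windows) is:

```python
import numpy as np

nz, ny = 13, 10                    # URA dimensions, as in the example
hamming = lambda L: 0.54 - 0.46*np.cos(2*np.pi*np.arange(L)/(L - 1))
twinz, twiny = hamming(nz), hamming(ny)   # stand-ins for the Taylor windows
tap = np.outer(twinz, twiny)              # MATLAB: tap = twinz*twiny.'
# Each row of tap is a scaled copy of twiny and each column a scaled copy of
# twinz, so the 2-D taper is separable (rank one).
```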
clf
pos = getElementPosition(taperedURA);
plot3(pos(2,:),pos(3,:),taperedURA.Taper(:),'*');
title('Applied Taper');ylabel('Sensor Position');zlabel('Taper Values');
Compare the response of the tapered to the untapered array. Notice how the sidelobes of the tapered
URA are lower.
helperCompareResponses(heterogeneousURA,taperedURA, ...
'Ideal URA versus Tapered URA response', ...
{'Not tapered','Tapered'});
This section shows how to apply a taper on a circular planar array with a radius of 5 meters and an
element spacing of 0.5 meters.
radius = 5; dist = 0.5;
numElPerSide = radius*2/dist;
% Get the positions of the smallest URA which could fit the circular planar
% array
pos = getElementPosition(phased.URA(numElPerSide,dist));
taperedCircularPlanarArray.Taper = taylortaperc(pos,2*radius,nbar,sll).';
View the array and plot the taper values at each sensor.
viewArray(taperedCircularPlanarArray,...
'Title','Tapered Circular Planar Array','ShowTaper',true)
clf
plot3(pos(2,:),pos(3,:),taperedCircularPlanarArray.Taper,'*');
title('Applied Taper');ylabel('Sensor Position');zlabel('Taper Values');
Compare the response of the tapered to the untapered array. Notice how the sidelobes of the tapered
array are lower.
helperCompareResponses(circularPlanarArray,taperedCircularPlanarArray, ...
'Ideal versus Tapered response', ...
{'Not tapered','Tapered'});
taper = taperedCircularPlanarArray.Taper;
randvect = rand(size(taper));
thinningTaper = zeros(size(taper));
thinningTaper(randvect<taper/max(max(taper))) = 1;
thinnedCircularPlanarArray = clone(circularPlanarArray);
thinnedCircularPlanarArray.Taper = thinningTaper;
View the array and compare the response of the thinned to the ideal array.
viewArray(thinnedCircularPlanarArray,'ShowTaper',true)
clf;
helperCompareResponses(circularPlanarArray,thinnedCircularPlanarArray, ...
'Ideal versus Thinned response', ...
{'Not thinned','Thinned'});
This section shows how to create a 13 by 10 URA with sensor patterns on the edges and corners
different from the patterns of the remaining sensors. This ability can be used to model coupling
effects.
Create three different cosine patterns with the following azimuth and elevation cosine exponents
[azim exponent, elev exponent]: [2, 2] for the edges, [4, 4] for the corners, and [1.5, 1.5] for the main
sensors.
uraSize = [13,10];
helperViewPatternArray(heterogeneousURA);
Compare the response of the multiple pattern array to the single pattern array.
clf;
helperCompareResponses( heterogeneousURA, ...
phased.URA(uraSize,'Element',mainAntenna), ...
'Multiple versus single pattern response', ...
{'Single Pattern','Multiple Patterns'});
This section shows how to set the pattern of sensors located more than 4 meters from the center of
the array.
% Create a cell array which includes all the patterns
patterns = {mainAntenna, edgeAntenna};
% Get positions
pos = getElementPosition(circularPlanarArray);
% Get the indices of the sensors more than 4 meters away from the center.
sensorIdx = find(sum(pos.^2) > 4^2);
helperViewPatternArray(heterogeneousCircularPlanarArray);
Compare the response of the multiple pattern array to the single pattern array.
clf;
helperCompareResponses(circularPlanarArray,...
heterogeneousCircularPlanarArray,...
'Multiple versus single pattern response',...
{'Single Pattern','Multiple Patterns'});
Summary
This example demonstrated how to apply taper values and model thinning using taper values for
different array configurations. It also showed how to create arrays with different element patterns.
Frequency Agility in Radar, Communications, and EW Systems
Introduction
Active electronically steered phased array systems can support multiple applications using the same
array hardware. These applications may include radar, EW, and communications. However, the RF
environments that these types of systems operate in are complex and sometimes hostile. For example,
a repeater jammer can repeat the received radar signal and retransmit it to confuse the radar. In
some literature, this is also called spoofing. Frequency agility can be an effective technique to
counter the signals generated from interference sources and help support effective operations of
these systems.
In this example, we first set up a scenario with a stationary monostatic radar and a moving aircraft
target. The aircraft then generates a spoofing signal that confuses the radar. Once the radar detects
the jamming source, frequency agility techniques can be employed to allow the radar to overcome
the interference.
radar_pos = [0;0;0];
radar_vel = [0;0;0];
The radar receiver, which can also function as an EW receiver, is a 64-element (8x8) URA with half
wavelength spacing.
pattern(antenna,fc,'Type','powerdb');
The radar transmits linear FM pulses. The transmitter and receiver specifications are:
wav = phased.LinearFMWaveform('SampleRate',fs,...
'PulseWidth', 10e-5, 'SweepBandwidth', 1e5,'PRF',4000,...
'FrequencyOffsetSource','Input port');
tx = phased.Transmitter('Gain',20,'PeakPower',500);
txArray = phased.WidebandRadiator('SampleRate', fs,...
'Sensor',antenna,'CarrierFrequency', fc);
rxArray = phased.WidebandCollector('SampleRate', fs,...
'Sensor',antenna,'CarrierFrequency', fc);
rxPreamp = phased.ReceiverPreamp('Gain',10,'NoiseFigure',5,...
'SampleRate', fs);
The environment and target are described below. Wideband propagation channels are used to allow
us to propagate waveforms with different carrier frequencies.
target = phased.RadarTarget('MeanRCS',100,'OperatingFrequency',fc);
target_pos = [8000;1000;1000];
target_vel = [100;0;0];
In this example, two one-way propagation channels are used because the jamming signal only
propagates through the return channel.
rng(2017);
[tgtRng, tgtAng] = rangeangle(target_pos, radar_pos);
x = wav(0); % waveform
xt = tx(x); % transmit
xtarray = txArray(xt, tgtAng); % radiate
yp = envout(xtarray,radar_pos,target_pos,radar_vel,target_vel); % propagate
yr = target(yp); % reflect
ye = envin(yr,target_pos,radar_pos,target_vel,radar_vel); % propagate
yt = rxArray(ye,tgtAng); % collect
yt = rxPreamp(yt); % receive
We can perform a direction-of-arrival estimate using a 2-D beam scan and then use the estimated
azimuth and elevation angles to direct the beamformer.
estimator = phased.BeamscanEstimator2D('SensorArray',antenna,...
'DOAOutputPort',true,...
'OperatingFrequency', fc,...
'NumSignals',1,...
'AzimuthScanAngles',-40:40,...
'ElevationScanAngles',-60:60);
[~,doa] = estimator(yt);
beamformer = phased.SubbandPhaseShiftBeamformer('SensorArray',antenna,...
'OperatingFrequency',fc,'DirectionSource','Input port',...
'SampleRate',fs, 'WeightsOutputPort',true);
[ybf,~] = beamformer(yt,doa);
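The beamscan estimator sweeps steering vectors over a grid of angles and picks the peak of the conventional beamformer output power a^H R a. A minimal 1-D NumPy sketch of the same idea (illustrative only, not the phased.BeamscanEstimator2D implementation):

```python
import numpy as np

def beamscan_doa(R, pos, scan_deg):
    """Return the scan angle that maximizes the conventional beamformer power."""
    az = np.deg2rad(np.asarray(scan_deg, float))
    A = np.exp(2j*np.pi*np.outer(pos, np.sin(az)))     # ULA positions in wavelengths
    P = np.einsum('ns,nm,ms->s', A.conj(), R, A).real  # P(az) = a^H R a
    return np.asarray(scan_deg)[np.argmax(P)]
```

For multiple sources, the largest local maxima of P would be taken instead of the single global peak.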
The beamformed signal can then be passed through a matched filter and a detector.
mfcoeff1 = getMatchedFilter(wav);
mf1 = phased.MatchedFilter('Coefficients',mfcoeff1);
y1 = mf1(ybf);
nSamples = wav.SampleRate/wav.PRF;
t = ((0:nSamples-1)-(numel(mfcoeff1)-1))/fs;
r = t*c/2;
plot(r/1000,abs(y1),'-'); grid on;
xlabel('Range (km)');
ylabel('Pulse Compressed Signal Magnitude');
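The range axis used above follows from the matched filter delay: the compressed peak occurs numel(mfcoeff1)-1 samples after the two-way delay, and r = c·t/2 converts time to range. A stand-alone NumPy sketch with assumed numbers (not the example's radar parameters):

```python
import numpy as np

c, fs = 3e8, 10e6
pw, bw = 1e-5, 1e6                       # assumed pulse width and sweep bandwidth
t = np.arange(int(pw*fs))/fs
x = np.exp(1j*np.pi*(bw/pw)*t**2)        # baseband LFM pulse

R_true = 1500.0                          # illustrative target range in meters
delay = int(round(2*R_true/c*fs))        # two-way delay in samples
rx = np.zeros(4*len(x), complex)
rx[delay:delay + len(x)] = x             # noiseless received echo

y = np.convolve(rx, np.conj(x[::-1]))    # matched filtering (pulse compression)
peak = int(np.argmax(np.abs(y)))
R_est = (peak - (len(x) - 1))/fs*c/2     # subtract the filter delay, map to range
```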
The figure shows that the target produces a dominant peak in the received signal.
The radar works very well in the above example. However, in a complex environment, interferences
can affect the radar performance. The interferences may be from other systems, such as wireless
communication signals, or jamming signals. Modern radar systems must be able to operate in such
environments.
A phased array radar can filter out interference using spatial processing. If the target and the
jamming source are not closely located in angular space, beamforming may be an effective way to
suppress the jammer. More details can be found in the “Array Pattern Synthesis” on page 17-11
example.
This example focuses on the situation where the target and the interference are closely located so
that the spatial processing cannot be used to separate the two. Consider the case where the target
aircraft can determine the characteristics of the signal transmitted from the radar and use that
information to generate a pulse that will confuse the radar receiver. This is a common technique used
in jamming or spoofing to draw the radar away from the true target.
fprintf('Waveform Characteristics:\n');
fprintf('Pulse width:\t\t%f\n',pw);
fprintf('PRF:\t\t\t%f\n',prf);
fprintf('Sweep bandwidth:\t%f\n',bw);

Waveform Characteristics:
Pulse width: 0.000100
PRF: 4000.000000
Sweep bandwidth: 100000.000000
The jammer needs some time to perform this analysis and prepare the jamming signal, so it is hard to
create an effective spoofing signal right away. Generally, however, within several pulse intervals the
jamming signal is ready, and the jammer can place it at an arbitrary position within a pulse to make
the spoofed target look closer or farther than the true target. It is also worth noting that with the
latest hardware, the time needed to estimate the characteristics of the signal decreases dramatically.
Assume the jammer wants to place the false target at about 5.5 km out. The jammer transmits the
jamming signal at the right moment to introduce the corresponding delay. In addition, because this is
a one-way propagation from the jammer to the radar, the required power is much smaller. This is
indeed what makes jamming very effective: it does not require much power to blind the radar.
jwav = phased.LinearFMWaveform('SampleRate',fs,...
'PulseWidth',pw,'SweepBandwidth',bw,'PRF',prf);
xj = jwav();
Npad = ceil(5500/(c/fs));
xj = circshift(xj,Npad); % circular shift to introduce the corresponding delay
txjam = phased.Transmitter('Gain',0,'PeakPower',5);
xj = txjam(xj);
ye = envin(yr+xj,target_pos,radar_pos,target_vel,radar_vel);
yt = rxArray(ye,tgtAng);
yt = rxPreamp(yt);
ybfj = beamformer(yt,doa);
y1j = mf1(ybfj); % Jammer plus target return
The received signal now contains both the desired target return and the jamming signal. In addition,
the jamming signal appears to be closer. Therefore, the radar is more likely to lock onto the closer
false target, thinking it is the most prominent threat, and spend fewer resources on the true target.
One possible approach to mitigate the jamming effect at the radar receiver is to adopt a predefined
frequency hopping schedule. In this case, the waveform transmitted from the radar changes carrier
frequency from time to time. Since the hopping sequence is known only to the radar, the jammer
cannot follow the change right away. Instead, it needs more time to acquire the correct carrier
frequency before a new jamming signal can be generated. It also requires more advanced hardware
on the jammer to handle transmission of signals over a broader bandwidth. Thus, each frequency hop
creates a time interval during which the radar operates without being affected by the spoofing
signal, and the radar can hop again before the jammer can effectively generate a new spoofing
signal.
In the following situation, assume that the transmitted signal hops 500 kHz up from the original
carrier frequency of 10 GHz. The new waveform signal becomes
deltaf = fs/4;
xh = wav(deltaf); % hopped signal
The figure below shows the spectrogram of both the original signal and the hopped signal. Note that
the hopped signal is shifted in the frequency domain relative to the original signal.
pspectrum(x+xh,fs,'spectrogram')
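Conceptually, the hop just multiplies the baseband waveform by a complex exponential, shifting its spectrum up by deltaf. A small NumPy check of this (assumed numbers, standing in for the toolbox waveform object):

```python
import numpy as np

fs = 1e6
pw, bw = 1e-4, 1e5
t = np.arange(int(pw*fs))/fs
x = np.exp(1j*np.pi*(bw/pw)*t**2)        # baseband LFM sweeping 0..bw Hz

deltaf = 2.5e5                           # hop offset (here fs/4)
xh = x*np.exp(2j*np.pi*deltaf*t)         # hopped waveform

def centroid(sig):
    """Power-weighted mean frequency of the signal spectrum."""
    P = np.abs(np.fft.fft(sig))**2
    f = np.fft.fftfreq(len(sig), 1/fs)
    return np.sum(f*P)/np.sum(P)
```

The spectral centroid of xh sits deltaf above that of x, matching the shift visible in the spectrogram.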
Using the approach outlined in earlier sections, the radar echo can be simulated using the new
waveform. Note that since the jammer is not aware of the hop, the jamming signal remains the same.
xth = tx(xh);
xtharray = txArray(xth, tgtAng);
yph = envout(xtharray,radar_pos,target_pos,radar_vel,target_vel);
yrh = target(yph);
yeh = envin(yrh+xj,target_pos,radar_pos,target_vel,radar_vel);
yth = rxArray(yeh,tgtAng);
yth = rxPreamp(yth);
ybfh = beamformer(yth,doa);
Because the hopping schedule is known to the radar, the signal processing algorithm can use that
information to extract only the frequency band around the current carrier frequency. This helps
reject the interference in other bands and also improves the SNR, since the noise from other bands
is suppressed. In addition, when the waveform hops, the matched filter must be updated
accordingly.
Let us now apply the corresponding bandpass filters and matched filters to the received signal. The
baseband filter coefficients can be modulated with a complex carrier to obtain the bandpass filter
centered at the hopped carrier frequency.
bf2 = buttercoef.*exp(1i*2*pi*deltaf*(0:numel(buttercoef)-1)/fs);
mfcoeff2 = getMatchedFilter(wav,'FrequencyOffset',deltaf);
mf2 = phased.MatchedFilter('Coefficients',mfcoeff2);
The figure shows that with the adoption of frequency hopping, the target echo and the jamming
signal can be separated. Since the jammer is still in the original band, only the true target echo
appears in the new frequency band where the waveform currently occupies, thus suppressing the
impact of the jammer.
Summary
This example shows that adopting frequency agility can help counter the jamming effect in a complex
RF environment. The example simulates a system with frequency hopping waveform and verifies that
this technique helps the radar system to identify the true target echo without being confused by the
jamming signal.
Simultaneous Range and Speed Estimation Using MFSK Waveform
In the example “Automotive Adaptive Cruise Control Using FMCW Technology” on page 17-367, an
automotive radar system is designed to perform range estimation for an automatic cruise control
system. In the latter part of that example, a triangle sweep FMCW waveform is used to
simultaneously estimate the range and speed of the target vehicle.
Although the triangle sweep FMCW waveform solves the range-Doppler coupling issue elegantly for a single target, its processing becomes complicated in multi-target situations. The next section shows how a triangle sweep FMCW waveform behaves when two targets are present.
The scene includes a car 50 meters away from the radar, traveling at 96 km/h in the same direction as the radar, and a truck 55 meters away, traveling at 70 km/h in the opposite direction. The radar itself is traveling at 60 km/h.
rng(2015);
[fmcwwaveform,target,tgtmotion,channel,transmitter,receiver,...
sensormotion,c,fc,lambda,fs,maxbeatfreq] = helperMFSKSystemSetup;
Next, simulate the radar echo from the two vehicles. The FMCW waveform has a sweep bandwidth of 150 MHz, so the range resolution is 1 meter. Each up or down sweep takes 1 millisecond, so each triangle sweep takes 2 milliseconds. Note that only one triangle sweep is needed to perform the joint range and speed estimation.
Nsweep = 2;
xr = helperFMCWSimulate(Nsweep,fmcwwaveform,sensormotion,tgtmotion,...
transmitter,channel,target,receiver);
Although the system needs a 150 MHz bandwidth, the maximum beat frequency is much less. This
means that at the processing side, one can decimate the signal to a lower frequency to ease the
hardware requirements. The beat frequencies are then estimated using the decimated signal.
dfactor = ceil(fs/maxbeatfreq)/2;
fs_d = fs/dfactor;
fbu_rng = rootmusic(decimate(xr(:,1),dfactor),2,fs_d);
fbd_rng = rootmusic(decimate(xr(:,2),dfactor),2,fs_d);
Now there are two beat frequencies from the up sweep and two beat frequencies from the down sweep. Since any pair of beat frequencies from an up sweep and a down sweep can define a target, there are four possible combinations of range and Doppler estimates, yet only two of them are associated with the real targets.
sweep_slope = fmcwwaveform.SweepBandwidth/fmcwwaveform.SweepTime;
rng_est = beat2range([fbu_rng fbd_rng;fbu_rng flipud(fbd_rng)],...
sweep_slope,c)
rng_est = 4×1
49.9802
54.9406
64.2998
40.6210
The remaining two are what are often referred to as ghost targets. The relationship between real targets and ghost targets can be better explained using a time-frequency representation.
As shown in the figure, each intersection of an up sweep return and a down sweep return indicates a possible target, so it is critical to distinguish between the true targets and the ghost targets. To resolve this ambiguity, one can transmit additional FMCW signals with different sweep slopes. Since only the true targets will occupy the same intersection in the time-frequency domain, the ambiguity is resolved. However, this approach significantly increases the processing complexity as well as the processing time needed to obtain valid estimates.
MFSK Waveform
The multiple frequency shift keying (MFSK) waveform [1] is designed for automotive radar to achieve simultaneous range and Doppler estimation in multiple-target situations without falling into the trap of ghost targets. Its time-frequency representation is shown in the following figure.
The figure indicates that the MFSK waveform is a combination of two linear FMCW waveforms with a fixed frequency offset. Unlike a regular FMCW waveform, MFSK sweeps the entire bandwidth in discrete steps. Within each step, a single frequency continuous wave signal is transmitted. Because there are two tones within each step, the waveform can also be considered a frequency shift keying (FSK) waveform. Thus, there is one set of range-Doppler relations from the FMCW waveform and another set of range-Doppler relations from the FSK waveform. Combining the two sets of relations can resolve the coupling between range and Doppler regardless of the number of targets present in the scene.
The following sections simulate the previous example again, but use an MFSK waveform instead.
First, parameterize the MFSK waveform to satisfy the system requirements specified in [1]. Because the range resolution is 1 meter, the sweep bandwidth is set at 150 MHz. In addition, the frequency offset is set at -294 kHz, as specified in [1]. Each step lasts about 2 microseconds and the entire sweep has 1024 steps. Thus, each FMCW sweep takes 512 steps and the total sweep time is a little over 2 ms. Note that the sweep time is comparable to that of the FMCW signal used in the previous sections.
mfskwaveform = phased.MFSKWaveform(...
'SampleRate',151e6,...
'SweepBandwidth',150e6,...
'StepTime',2e-6,...
'StepsPerSweep',1024,...
'FrequencyOffset',-294e3,...
'OutputFormat','Sweeps',...
'NumSweeps',1);
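The sweep timing stated above can be confirmed directly from the waveform properties; this quick check uses only values set in the constructor:

```matlab
% Total sweep time: 1024 steps of 2 microseconds each
sweeptime = mfskwaveform.StepsPerSweep*mfskwaveform.StepTime
% sweeptime = 2.0480e-03, i.e., a little over 2 ms
```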
The figure below shows the spectrogram of the waveform. It is zoomed into a small interval to better
reveal the time frequency characteristics of the waveform.
numsamp_step = round(mfskwaveform.SampleRate*mfskwaveform.StepTime);
sig_display = mfskwaveform();
spectrogram(sig_display(1:8192),kaiser(3*numsamp_step,100),...
ceil(2*numsamp_step),linspace(0,4e6,2048),mfskwaveform.SampleRate,...
'yaxis','reassigned','minthreshold',-60)
Next, simulate the return of the system. Again, only one sweep is needed to estimate the range and Doppler.
Nsweep = 1;
release(channel);
channel.SampleRate = mfskwaveform.SampleRate;
release(receiver);
receiver.SampleRate = mfskwaveform.SampleRate;
xr = helperFMCWSimulate(Nsweep,mfskwaveform,sensormotion,tgtmotion,...
transmitter,channel,target,receiver);
The subsequent processing samples the return echo at the end of each step and groups the sampled signals into two sequences corresponding to the two sweeps. Note that the sample rate of the resulting sequences is now the reciprocal of twice the step time, which is much lower than the original sample rate.
x_dechirp = reshape(xr(numsamp_step:numsamp_step:end),2,[]).';
fs_dechirp = 1/(2*mfskwaveform.StepTime);
As in the case of FMCW signals, the MFSK waveform is processed in the frequency domain. The next figure shows the frequency spectra of the received echoes corresponding to the two sweeps.
xf_dechirp = fft(x_dechirp);
num_xf_samp = size(xf_dechirp,1);
beatfreq_vec = (0:num_xf_samp-1).'/num_xf_samp*fs_dechirp;
clf;
subplot(211),plot(beatfreq_vec/1e3,abs(xf_dechirp(:,1)));grid on;
ylabel('Magnitude');
title('Frequency spectrum for sweep 1');
subplot(212),plot(beatfreq_vec/1e3,abs(xf_dechirp(:,2)));grid on;
ylabel('Magnitude');
title('Frequency spectrum for sweep 2');
xlabel('Frequency (kHz)')
Note that there are two peaks in each frequency spectrum, indicating two targets. In addition, the peaks are at identical locations in both returns, so there are no ghost targets.
To detect the peaks, one can use a CFAR detector. Once detected, the beat frequencies as well as the
phase differences between two spectra are computed at the peak locations.
cfar = phased.CFARDetector('ProbabilityFalseAlarm',1e-2,...
'NumTrainingCells',8);
peakidx = cfar(abs(xf_dechirp(:,1)),1:num_xf_samp);
Fbeat = beatfreq_vec(peakidx);
phi = angle(xf_dechirp(peakidx,2))-angle(xf_dechirp(peakidx,1));
Finally, the beat frequencies and phase differences are used to estimate the range and speed. Depending on how one constructs the phase difference, the equations are slightly different. For the approach shown in this example, it can be shown that the range and speed satisfy the following relations:

fb = −2v/λ + 2βR/c

Δϕ = −(4π Ts v)/λ + (4π foffset R)/c

where fb is the beat frequency, Δϕ is the phase difference, λ is the wavelength, c is the propagation speed, Ts is the step time, foffset is the frequency offset, β is the sweep slope, R is the range, and v is the speed. Based on these equations, the range and speed are estimated below:
sweep_slope = mfskwaveform.SweepBandwidth/...
(mfskwaveform.StepsPerSweep*mfskwaveform.StepTime);
temp = ...
[1 sweep_slope;mfskwaveform.StepTime mfskwaveform.FrequencyOffset]\...
[Fbeat phi/(2*pi)].';
r_est = c*temp(2,:)/2
r_est = 1×2
54.8564 49.6452
v_est = lambda*temp(1,:)/(-2)
v_est = 1×2
36.0089 -9.8495
The estimated ranges and speeds match the true range and speed values very well.
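As a quick consistency check (a sketch reusing the variables computed above), the estimates can be substituted back into the beat frequency relation; the recovered beat frequencies should reproduce the values measured at the CFAR peak locations up to numerical precision:

```matlab
% Recompute the beat frequencies from the estimated ranges and speeds
fb_check = -2*v_est/lambda + 2*sweep_slope*r_est/c;
% fb_check should agree with Fbeat.'
```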
Summary
This example shows two approaches to simultaneous range and speed estimation, using either a triangle sweep FMCW waveform or an MFSK waveform. It is shown that the MFSK waveform has an advantage over the FMCW waveform when multiple targets are present, since it does not introduce ghost targets during the processing.
References
[1] Rohling, H., and M. Meinecke. "Waveform Design Principles for Automotive Radar Systems." Proceedings of the CIE International Conference on Radar, 2001.
Waveform Analysis Using the Ambiguity Function
In a radar system, the choice of radar waveform plays an important role in enabling the system to separate two closely located targets in either range or speed. Therefore, it is often necessary to examine a waveform and understand its resolution and ambiguity in both the range and speed domains. In radar, range is measured using delay and speed is measured using Doppler shift. Thus, range and speed are used interchangeably with delay and Doppler in this example.
Introduction
To improve the signal to noise ratio (SNR), modern radar systems often employ a matched filter in the receiver chain. The ambiguity function of a waveform represents exactly the output of the matched filter when the specified waveform is used as the filter input. This exact representation makes the ambiguity function a popular tool for designing and analyzing waveforms, because it provides insight into the resolution capability of a given waveform in both the delay and Doppler domains. Based on this analysis, one can then determine whether a waveform is suitable for a particular application.
The following sections use the ambiguity function to explore the range-Doppler relationship for
several popular waveforms. To establish a comparison baseline, assume that the design specification
of the radar system requires a maximum unambiguous range of 15 km and a range resolution of 1.5
km. For the sake of simplicity, also use 3e8 m/s as the speed of light.
Rmax = 15e3;
Rres = 1500;
c = 3e8;
Based on the design specifications already mentioned, the pulse repetition frequency (PRF) and the
bandwidth of the waveform can be computed as follows.
prf = c/(2*Rmax);
bw = c/(2*Rres);
fs = 2*bw;
The simplest waveform for a radar system is probably the rectangular waveform, sometimes also referred to as a single frequency waveform. For the rectangular waveform, the pulse width is the reciprocal of the bandwidth.
rectwaveform = phased.RectangularWaveform('SampleRate',fs,...
'PRF',prf,'PulseWidth',1/bw)
rectwaveform =
phased.RectangularWaveform with properties:
SampleRate: 200000
DurationSpecification: 'Pulse width'
PulseWidth: 1.0000e-05
PRF: 10000
PRFSelectionInputPort: false
FrequencyOffsetSource: 'Property'
FrequencyOffset: 0
OutputFormat: 'Pulses'
NumPulses: 1
PRFOutputPort: false
CoefficientsOutputPort: false
Because the analysis of a waveform is always performed on full pulses, keep the OutputFormat
property as 'Pulses'. One can also check the bandwidth of the waveform using the bandwidth method.
bw_rect = bandwidth(rectwaveform)
bw_rect = 1.0000e+05
The resulting bandwidth matches the requirement. Now, generate one pulse of the waveform, and
then examine it using the ambiguity function.
wav = rectwaveform();
ambgfun(wav,rectwaveform.SampleRate,rectwaveform.PRF);
In the figure, notice that the nonzero response occupies only about 10% of all delays, concentrated in a narrow strip around delay 0. This occurs because the waveform has a duty cycle of 0.1.
dc_rect = dutycycle(rectwaveform.PulseWidth,rectwaveform.PRF)
dc_rect = 0.1000
When investigating a waveform's resolution capability, the zero delay cut and the zero Doppler cut of
the waveform ambiguity function are often of interest.
The zero Doppler cut of the ambiguity function returns the auto-correlation function (ACF) of the
rectangular waveform. The cut can be plotted using the following command.
ambgfun(wav,rectwaveform.SampleRate,rectwaveform.PRF,'Cut','Doppler');
The zero Doppler cut of the ambiguity function depicts the matched filter response of a target when
the target is stationary. From the plot, one can see that the first null response appears at 10
microseconds, which means that this waveform could resolve two targets that are at least 10
microseconds, or 1.5 km apart. Hence, the response matches the requirement in the design
specification.
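The delay-to-range conversion used in this argument is simply range = c·τ/2 for two-way propagation; a one-line check confirms that 10 microseconds of delay separation corresponds to 1.5 km:

```matlab
% Two-way propagation: a 10 microsecond delay maps to 1.5 km
rres_check = c*10e-6/2
% rres_check = 1500
```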
ambgfun(wav,rectwaveform.SampleRate,rectwaveform.PRF,'Cut','Delay');
Notice that the returned zero delay response is fairly broad. The first null does not appear until the edge, which corresponds to a Doppler shift of 100 kHz. Thus, if two targets are at the same range, they need to have a difference of 100 kHz in the Doppler domain to be separated. Assuming the radar is working at 1 GHz, according to the computation below, such a separation corresponds to a speed difference of 30 km/s. Because this number is so large, essentially one cannot separate two targets in the Doppler domain using this system.
fc = 1e9;
deltav_rect = dop2speed(100e3,c/fc)
deltav_rect = 30000
At this point it may be worth mentioning another issue with the rectangular waveform. For a rectangular waveform, the range resolution is determined by the pulse width. Thus, to achieve good range resolution, the system needs to adopt a very small pulse width. At the same time, the system also needs to send enough energy into space so that the returned echo can be reliably detected. Hence, a narrow pulse width requires very high peak power at the transmitter. In practice, producing such power can be very costly.
One can see from the previous section that the Doppler resolution of a single rectangular pulse is fairly poor. In fact, the Doppler resolution of a single rectangular pulse is given by the reciprocal of its pulse width, while its delay resolution is given by the pulse width itself. Clearly, there is a conflict between the range and Doppler resolutions of a rectangular waveform.
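This coupling can be made concrete with a short sketch (using the system parameters defined above): for a rectangular pulse of width τ, the delay resolution is τ while the Doppler resolution is 1/τ, so improving one necessarily degrades the other.

```matlab
tau = 1/bw;            % rectangular pulse width, 10 microseconds
delay_res = tau        % delay resolution: 1.0000e-05 s
doppler_res = 1/tau    % Doppler resolution: 1.0000e+05 Hz (100 kHz)
% Halving tau halves delay_res but doubles doppler_res
```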
The root issue here is that both the delay and the Doppler resolution depend on the pulse width, in opposite ways. Therefore, one way to solve this issue is to come up with a waveform that decouples this dependency, so that the resolution can be improved in both domains simultaneously.
The linear FM waveform is just such a waveform. The range resolution of a linear FM waveform no longer depends on the pulse width. Instead, the range resolution is determined by the sweep bandwidth.
Because the range resolution is now determined by the sweep bandwidth, the system can afford a longer pulse width, and the peak power requirement is alleviated. Meanwhile, because of the longer pulse width, the Doppler resolution improves. This improvement occurs even though the Doppler resolution of a linear FM waveform is still given by the reciprocal of the pulse width.
Now, explore the linear FM waveform in detail. The linear FM waveform that provides the desired
range resolution can be constructed as follows.
lfmwaveform = phased.LinearFMWaveform('SampleRate',fs,...
'SweepBandwidth',bw,'PRF',prf,'PulseWidth',5/bw)
lfmwaveform =
phased.LinearFMWaveform with properties:
SampleRate: 200000
DurationSpecification: 'Pulse width'
PulseWidth: 5.0000e-05
PRF: 10000
PRFSelectionInputPort: false
SweepBandwidth: 100000
SweepDirection: 'Up'
SweepInterval: 'Positive'
Envelope: 'Rectangular'
FrequencyOffsetSource: 'Property'
FrequencyOffset: 0
OutputFormat: 'Pulses'
NumPulses: 1
PRFOutputPort: false
CoefficientsOutputPort: false
The pulse width is 5 times longer than that of the rectangular waveform used in the earlier sections of this example. Notice that the bandwidth of the linear FM waveform is the same as that of the rectangular waveform.
bw_lfm = bandwidth(lfmwaveform)
bw_lfm = 100000
The zero Doppler cut of the linear FM waveform appears in the next plot.
wav = lfmwaveform();
ambgfun(wav,lfmwaveform.SampleRate,lfmwaveform.PRF,'Cut','Doppler');
From the preceding figure, one can see that even though the response now has sidelobes, the first
null still appears at 10 microseconds, so the range resolution is preserved.
One can also plot the zero delay cut of the linear FM waveform. Observe that the first null in the Doppler domain is now at around 20 kHz, which is 1/5 of that of the original rectangular waveform.
ambgfun(wav,lfmwaveform.SampleRate,lfmwaveform.PRF,'Cut','Delay');
Following the same procedure as for the rectangular waveform in the earlier sections of this example, one can calculate that the 20 kHz Doppler separation translates to a speed difference of 6 km/s. This resolution is 5 times better than that of the rectangular waveform. Unfortunately, it is still inadequate.
deltav_lfm = dop2speed(20e3,c/fc)
deltav_lfm = 6000
One may also be interested in the 3-D plot of the ambiguity function for the linear FM waveform. If you want a 3-D plot rather than the contour format, you can obtain the returned ambiguity function and then plot it using your favorite format. For example, the following snippet generates the surface plot of the linear FM waveform ambiguity function.
[afmag_lfm,delay_lfm,doppler_lfm] = ambgfun(wav,lfmwaveform.SampleRate,...
lfmwaveform.PRF);
surf(delay_lfm*1e6,doppler_lfm/1e3,afmag_lfm,'LineStyle','none');
axis tight; grid on; view([140,35]); colorbar;
xlabel('Delay \tau (us)');ylabel('Doppler f_d (kHz)');
title('Linear FM Pulse Waveform Ambiguity Function');
Notice that compared to the ambiguity function of the rectangular waveform, the ambiguity function of the linear FM waveform is slightly tilted. The tilt provides the improved resolution in the zero delay cut. The ambiguity functions of both the rectangular waveform and the linear FM waveform have the shape of a long, narrow edge. This kind of ambiguity function is often termed a "knife edge" ambiguity function.
Before proceeding to further improve the Doppler resolution, it is worth looking at an important figure of merit used in waveform analysis. The product of the pulse width and the bandwidth of a waveform is called the waveform's time bandwidth product. For a rectangular waveform, the time bandwidth product is always 1. For a linear FM waveform, because of the decoupling of the bandwidth and the pulse width, the time bandwidth product can be larger than 1. The waveform just used has a time bandwidth product of 5. Recall that while preserving the same range resolution as the rectangular waveform, the linear FM waveform achieves a Doppler resolution that is 5 times better.
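The time bandwidth products quoted above can be computed directly from the objects defined in this example (a quick sketch):

```matlab
% Rectangular waveform: pulse width is 1/bandwidth, so the product is 1
tbp_rect = bw_rect*rectwaveform.PulseWidth
% Linear FM waveform: pulse width is 5/bandwidth, so the product is 5
tbp_lfm = bw_lfm*lfmwaveform.PulseWidth
```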
As seen in the previous section, the Doppler resolution of the linear FM waveform is still fairly poor. One way to improve this resolution is to further extend the pulse width. However, this approach will not work, for two reasons:
• The duty cycle of the waveform is already 50%, which is close to the practical limit. (Even if one could, say, use a 100% duty cycle, that is still only a factor of 2 improvement, which is far from enough to resolve the issue.)
• A longer pulse width means a larger minimum detectable range, which is also undesirable.
If one cannot extend the pulse width within one pulse, one has to look beyond this boundary. Indeed,
in modern radar systems, the Doppler processing often uses a coherent pulse train. The more pulses
in the pulse train, the finer the Doppler resolution.
release(lfmwaveform);
lfmwaveform.NumPulses = 5;
wav = lfmwaveform();
ambgfun(wav,lfmwaveform.SampleRate,lfmwaveform.PRF,'Cut','Doppler');
Notice that for the zero Doppler cut, the first null is still around 10 microseconds, so the range resolution is the same. However, one should immediately see the presence of many range domain sidelobes. These sidelobes are the tradeoff for using a pulse train. The distance between the mainlobe and the first sidelobe is one pulse repetition interval, i.e., the reciprocal of the PRF. As one can see, this value corresponds to the maximum unambiguous range.
T_max = 1/prf
T_max = 1.0000e-04
The zero delay cut also has sidelobes because of the pulse train. The distance between the mainlobe
and the first sidelobe is the PRF. Thus, this value is the maximum unambiguous Doppler the radar
system can detect. One can also calculate the corresponding maximum unambiguous speed.
ambgfun(wav,lfmwaveform.SampleRate,lfmwaveform.PRF,'Cut','Delay');
V_max = dop2speed(lfmwaveform.PRF,c/fc)
V_max = 3000
However, notice that the mainlobe is now much sharper. Careful examination reveals that the first
null is at about 2 kHz. This Doppler resolution can actually be obtained by the following equation,
deltaf_train = lfmwaveform.PRF/5
deltaf_train = 2000
i.e., the resolution is now determined by the length of our entire pulse train, not the pulse width of a
single pulse. The corresponding speed resolution is now
deltav_train = dop2speed(deltaf_train,c/fc)
deltav_train = 600
which is significantly better. More importantly, to get even finer speed resolution, one can simply increase the number of pulses in the pulse train. Of course, the number of pulses one can have in a burst depends on whether coherence can be preserved for the entire duration, but that discussion is beyond the scope of this example.
One may notice that in the zero delay cut, the distance between the peaks is no longer constant, especially for the farther out sidelobes. This lack of constancy occurs because the linear FM waveform's ambiguity function is tilted. Hence, judging the separation of sidelobes in the zero delay cut can be misleading. The ambiguity caused by the pulse train is probably best viewed in the contoured form, as the next code example shows. Notice that along the edge of the ambiguity function, those sidelobes are indeed evenly spaced.
ambgfun(wav,lfmwaveform.SampleRate,lfmwaveform.PRF);
Because of all the sidelobes, this kind of ambiguity function is called a "bed of nails" ambiguity function.
Stepped FM Waveform
The linear FM waveform is very widely used in radar systems. However, it does present some challenges to the hardware. For one thing, the hardware has to be able to sweep the entire frequency range in one pulse. This waveform also makes the receiver harder to build, because the receiver has to accommodate the entire bandwidth.
To avoid these issues, you can use a stepped FM waveform instead. A stepped FM waveform consists of multiple contiguous CW pulses. Each pulse has a different frequency and together, all pulses occupy the entire bandwidth. Hence, there is no longer a sweep within each pulse, and the receiver only needs to accommodate a bandwidth that is the reciprocal of the pulse width of a single pulse.
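With the parameters used next (a pulse width of 50 microseconds, as shown in the property display below), this per-pulse receiver bandwidth works out to 20 kHz, which also matches the frequency step between pulses:

```matlab
% Receiver only needs to cover the bandwidth of one CW pulse
rxbw_step = 1/50e-6
% rxbw_step = 20000, i.e., 20 kHz
```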
The stepped FM waveform that satisfies these requirements can be constructed as follows.
stepfmwaveform = phased.SteppedFMWaveform('SampleRate',fs,...
'PulseWidth',5/bw,'PRF',prf,'FrequencyStep',bw/5,...
'NumSteps',5,'NumPulses',5)
stepfmwaveform =
phased.SteppedFMWaveform with properties:
SampleRate: 200000
DurationSpecification: 'Pulse width'
PulseWidth: 5.0000e-05
PRF: 10000
PRFSelectionInputPort: false
FrequencyStep: 20000
NumSteps: 5
FrequencyOffsetSource: 'Property'
FrequencyOffset: 0
OutputFormat: 'Pulses'
NumPulses: 5
PRFOutputPort: false
CoefficientsOutputPort: false
wav = stepfmwaveform();
The zero Doppler cut, zero delay cut, and contour plot of the ambiguity function are shown below.
ambgfun(wav,stepfmwaveform.SampleRate,stepfmwaveform.PRF,'Cut','Doppler');
ambgfun(wav,stepfmwaveform.SampleRate,stepfmwaveform.PRF,'Cut','Delay');
ambgfun(wav,stepfmwaveform.SampleRate,stepfmwaveform.PRF);
From these plots, one can make the following observations:
• The first null in delay is still at 10 microseconds, so the range resolution is preserved. Notice that because each pulse is different, the sidelobes in the range domain disappear.
• The first null in Doppler is still at 2 kHz, so the waveform has the same Doppler resolution as the 5-pulse linear FM pulse train. The sidelobes in the Doppler domain are still present, as in the linear FM pulse train case.
• The contour plot of the stepped FM waveform is also of the bed of nails type. Although the unambiguous range is greatly extended, the unambiguous Doppler is still confined by the waveform's PRF.
Barker-Coded Waveform
Another important group of waveforms is the phase-coded waveforms, among which the most popular are Barker codes, Frank codes, and Zadoff-Chu codes. In a phase-coded waveform, a pulse is divided into multiple subpulses, often referred to as chips, and each chip is modulated with a given phase. All phase-coded waveforms have good autocorrelation properties, which makes them good candidates for pulse compression. A phase-coded waveform can also lower the probability of interception, because its energy is spread across the chips. At the receiver, a properly configured matched filter can suppress the noise and achieve good range resolution.
The Barker code is probably the most well known phase-coded waveform. A Barker-coded waveform can be constructed using the following command.
barkerwaveform = phased.PhaseCodedWaveform('Code','Barker','NumChips',7,...
'SampleRate', fs,'ChipWidth',1/bw,'PRF',prf)
barkerwaveform =
phased.PhaseCodedWaveform with properties:
SampleRate: 200000
Code: 'Barker'
ChipWidth: 1.0000e-05
NumChips: 7
PRF: 10000
PRFSelectionInputPort: false
FrequencyOffsetSource: 'Property'
FrequencyOffset: 0
OutputFormat: 'Pulses'
NumPulses: 1
PRFOutputPort: false
CoefficientsOutputPort: false
wav = barkerwaveform();
This Barker code consists of 7 chips. The zero Doppler cut of its ambiguity function is plotted below.
ambgfun(wav,barkerwaveform.SampleRate,barkerwaveform.PRF,'Cut','Doppler');
From the figure, one can see that the zero Doppler cut of a Barker code's ambiguity function has an interesting property. All its sidelobes have the same height, exactly 1/7 of the mainlobe. In fact, a length-N Barker code provides a peak-to-sidelobe ratio of N, which helps distinguish closely located targets in range. This is the most important property of the Barker code. The range resolution is approximately 10 microseconds, the same as the chip width.
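The sidelobe structure described above is easy to verify directly from the length-7 Barker sequence itself. This short sketch computes its aperiodic autocorrelation (the code values are standard; the check is independent of the waveform object above):

```matlab
barker7 = [1 1 1 -1 -1 1 -1];            % length-7 Barker code
acf = conv(barker7,fliplr(barker7));     % aperiodic autocorrelation
peak = max(abs(acf))                     % mainlobe value: 7
maxsidelobe = max(abs(acf([1:6 8:13])))  % every sidelobe has magnitude <= 1
```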
There are two issues associated with a Barker code. First, there are only seven known Barker codes, of lengths 2, 3, 4, 5, 7, 11 and 13. It is believed that there are no other Barker codes. Second, the Doppler performance of the Barker code is fairly poor. Although the ambiguity function has a good shape at the zero Doppler cut, once there is some Doppler shift, the sidelobe level increases significantly, as can be seen in the following contour plot.
ambgfun(wav,barkerwaveform.SampleRate,barkerwaveform.PRF);
Summary
This example compared several popular waveforms, including the rectangular waveform, the linear FM waveform, the stepped FM waveform, and the Barker-coded waveform. It also showed how to use the ambiguity function to analyze these waveforms and determine their resolution capabilities.
References
[1] Levanon, Nadav, and Eli Mozeson. Radar Signals. Wiley-IEEE Press, 2004.
[2] Richards, Mark. Fundamentals of Radar Signal Processing. McGraw-Hill, 2005.
Waveform Design to Improve Performance of an Existing Radar System
A monostatic pulse radar is designed in the example “Designing a Basic Monostatic Pulse Radar” on page 17-449 to achieve a given set of performance goals. The components of that design can be loaded as follows.
load BasicMonostaticRadarExampleData;
Can the existing design be modified to achieve the new performance goal? To answer this question,
we need to recalculate the parameters affected by these new requirements.
The first affected parameter is the pulse repetition frequency (PRF). It needs to be recalculated based
on the new maximum unambiguous range.
prop_speed = radiator.PropagationSpeed;
max_range = 8000;
prf = prop_speed/(2*max_range);
Compared to the 30 kHz PRF of the existing design, the new PRF, 18.737 kHz, is smaller, so the pulse repetition interval is longer. Note that this is a trivial change in the radar software and is fairly cheap in terms of hardware cost.
waveform.PRF = prf;
Next, because the target is described using a Swerling case 2 model, we need to use Shnidman's equation, instead of Albersheim's equation, to calculate the SNR required to achieve the designated Pd and Pfa. Shnidman's equation assumes noncoherent integration and a square law detector. The number of pulses to integrate is 10.
num_pulse_int = 10;
pfa = 1e-6;
snr_min = shnidman(0.9,pfa,num_pulse_int,2)
snr_min = 6.1583
Waveform Selection
If we were to use the same rectangular waveform as in the existing design, the pulse width would remain the same because it is determined by the range resolution. However, because the maximum range has increased from 5 km to 8 km and the target model has switched from nonfluctuating to Swerling case 2, we need to recalculate the required peak transmit power.
fc = radiator.OperatingFrequency;
lambda = prop_speed/fc;
peak_power = radareqpow(lambda,max_range,snr_min,waveform.PulseWidth,...
'RCS',1,'Gain',transmitter.Gain)
peak_power = 4.4821e+04
The peak power is roughly eight times larger than the previous requirement. This is no longer a trivial modification, because (1) the existing radar hardware is designed to produce a pulse with a peak power of about 5200 W, and although most designs leave some margin above the required power, it is unlikely that an existing system can accommodate eight times more power; and (2) it is very expensive to replace the hardware to output such high power. Therefore, the current design needs to be modified using more sophisticated signal processing techniques to accommodate the new goal.
Linear FM Waveform
One approach to reduce the power requirement is to use a waveform other than the rectangular
waveform. For example, a linear FM waveform can use a longer pulse than a rectangular waveform.
As a result, the required peak transmit power drops.
The desired range resolution determines the waveform bandwidth. For a linear FM waveform, the bandwidth is equal to its sweep bandwidth. However, the pulse width is no longer restricted to the reciprocal of the pulse bandwidth, so a much longer pulse width can be used. Here, we use a pulse width that is 20 times longer and set the sample rate to twice the pulse bandwidth.
range_res = 50;
pulse_bw = prop_speed/(2*range_res);
pulse_width = 20/pulse_bw;
fs = 2*pulse_bw;
waveform = phased.LinearFMWaveform(...
'SweepBandwidth',pulse_bw,...
'PulseWidth',pulse_width,...
'PRF',prf,...
'SampleRate',fs);
We now determine the new peak transmit power required to achieve the design requirements.
peak_power = radareqpow(lambda,max_range,snr_min,pulse_width,...
'RCS',1,'Gain',transmitter.Gain)
peak_power = 2.2411e+03
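As a sanity check (a sketch using the two values printed above), the reduction in required peak power is exactly the factor by which the pulse width was extended, since the required energy per pulse in the radar equation is unchanged:

```matlab
% 20x longer pulse -> 20x lower required peak power
power_ratio = 4.4821e4/2.2411e3
% power_ratio is approximately 20
```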
This transmit power is well within the capability of our existing radar system. We have achieved a peak transmit power that meets the new requirements without modifying the existing hardware.
transmitter.PeakPower = peak_power;
System Simulation
Now that we have defined the radar to meet the design specifications, we set up the targets and the environment to simulate the entire system.
Targets
As in the aforementioned example, we assume that there are 3 targets in a free space environment. However, the target model is now Swerling case 2, with the target positions and mean radar cross sections specified accordingly.
We set the seed for generating the RCS values in the targets so that we can reproduce the same results.
target.SeedSource = 'Property';
target.Seed = 2007;
Propagation Environments
We also set up the propagation channel between the radar and each target.
channel = phased.FreeSpace(...
'SampleRate',waveform.SampleRate,...
'TwoWayPropagation',true,...
'OperatingFrequency',fc);
Signal Synthesis
We set the seed for the noise generation in the receiver so that we can reproduce the same results.
receiver.SeedSource = 'Property';
receiver.Seed = 2007;
fast_time_grid = unigrid(0,1/fs,1/prf,'[)');
slow_time_grid = (0:num_pulse_int-1)/prf;
for m = 1:num_pulse_int
pulse = waveform();
[txsig,txstatus] = transmitter(pulse);
txsig = radiator(txsig,tgtang);
txsig = channel(txsig,sensorpos,tgtpos,sensorvel,tgtvel);
Range Detection
Detection Threshold
The detection threshold is calculated using the noise information, taking into consideration the pulse
integration. Note that in the loaded system, as outlined in the aforementioned example, the noise
bandwidth is half of the sample rate. We plot the threshold together with the first two pulses.
noise_bw = receiver.SampleRate/2;
npower = noisepow(noise_bw,...
receiver.NoiseFigure,receiver.ReferenceTemperature);
threshold = npower * db2pow(npwgnthresh(pfa,num_pulse_int,'noncoherent'));
pulseplotnum = 2;
helperRadarPulsePlot(rxpulses,threshold,...
fast_time_grid,slow_time_grid,pulseplotnum);
The figure shows that the pulses are very wide, which may result in poor range resolution. In
addition, the second and third targets are completely masked by the noise.
Matched Filter
As in the rectangular waveform case, the received pulses are first passed through a matched filter
to improve the SNR. The matched filter offers a processing gain, which further improves the
detection threshold. An added benefit of the matched filter for a linear FM waveform is that it
compresses the waveform in the time domain, so the filtered pulse becomes much narrower, which
translates to better range resolution.
matchingcoeff = getMatchedFilter(waveform);
matchedfilter = phased.MatchedFilter(...
'CoefficientsSource','Property',...
'Coefficients',matchingcoeff,...
'GainOutputPort',true);
[rxpulses, mfgain] = matchedfilter(rxpulses);
threshold = threshold * db2pow(mfgain);
matchingdelay = size(matchingcoeff,1)-1;
rxpulses = buffer(rxpulses(matchingdelay+1:end),size(rxpulses,1));
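As a quick sanity check, the matched filter gain of a linear FM pulse is approximately its time-bandwidth product. The snippet below reproduces the waveform parameters from earlier in this example so it runs on its own; the plain 3e8 propagation speed is an assumption standing in for the value used elsewhere.

```matlab
% Time-bandwidth product of the linear FM pulse (values reproduced
% from earlier in this example so this snippet is self-contained)
prop_speed = 3e8;                      % propagation speed, m/s (assumed)
range_res = 50;                        % desired range resolution, m
pulse_bw = prop_speed/(2*range_res);   % 3 MHz sweep bandwidth
pulse_width = 20/pulse_bw;             % 20x the rectangular pulse width
tbp = pulse_bw*pulse_width;            % time-bandwidth product = 20
mfgain_approx = pow2db(tbp)            % about 13 dB of processing gain
```

This agrees with the mfgain value returned by the matched filter above to within the filter implementation details.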
A time varying gain is then applied to the signal so that a constant threshold can be used across the
entire detectable range.
range_gates = prop_speed*fast_time_grid/2;
lambda = prop_speed/fc;
tvg = phased.TimeVaryingGain(...
'RangeLoss',2*fspl(range_gates,lambda),...
'ReferenceLoss',2*fspl(max_range,lambda));
rxpulses = tvg(rxpulses);
Noncoherent Integration
We now integrate the receive pulses noncoherently to further improve the SNR. This operation is also
referred to as video integration.
rxpulses = pulsint(rxpulses,'noncoherent');
helperRadarPulsePlot(rxpulses,threshold,fast_time_grid,slow_time_grid,1);
After video integration, the data is ready for the final detection stage. From the plot above, we see
that there are no false alarms.
Range Detection
Finally, threshold detection is performed on the integrated pulses. The detection scheme identifies
the peaks and then translates their positions into the ranges of the targets.
[~,range_detect] = findpeaks(rxpulses,'MinPeakHeight',sqrt(threshold));
true_range = round(tgtrng)
true_range = 1×3
range_estimates = round(range_gates(range_detect))
range_estimates = 1×3
Note that these range estimates are only accurate up to the range resolution that can be achieved by
the radar system, which is 50 m in this example.
Summary
In this example, we used the chirp waveform for range detection. The chirp waveform allowed us to
reduce the required peak transmit power, thus achieving a larger detectable range of 8 km for
Swerling case 2 targets.
This example shows two types of time domain beamformers: the time delay beamformer and the Frost
beamformer. It illustrates how one can use diagonal loading to improve the robustness of the Frost
beamformer. You can listen to the speech signals at each processing step if your system has sound
support.
First, we define a uniform linear array (ULA) to receive the signal. The array contains 10
omnidirectional microphones and the element spacing is 5 cm.
microphone = ...
phased.OmnidirectionalMicrophoneElement('FrequencyRange',[20 20e3]);
Nele = 10;
ula = phased.ULA(Nele,0.05,'Element',microphone);
c = 340; % sound speed, in m/s
Next, we simulate the multichannel signals received by the microphone array. We begin by loading
two recorded speech signals and one laughter recording; the laughter segment serves as the
interference. The sampling frequency of the audio signals is 8 kHz.
Because audio signals are usually large, it is often impractical to read an entire signal into memory.
Therefore, in this example, we simulate and process the signal in a streaming fashion, i.e., we break
the signal into small blocks at the input, process each block, and then assemble the blocks at the
output.
The incident direction of the first speech signal is -30 degrees in azimuth and 0 degrees in elevation.
The direction of the second speech signal is -10 degrees in azimuth and 10 degrees in elevation. The
interference comes from 20 degrees in azimuth and 0 degrees in elevation.
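In code, the three incident directions described above can be written as [azimuth; elevation] column vectors. The variable names below match those used later in the simulation loop:

```matlab
ang_dft = [-30; 0];          % first speech signal
ang_cleanspeech = [-10; 10]; % second speech signal
ang_laughter = [20; 0];      % laughter interference
```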
Now we can use a wideband collector to simulate a 3-second multichannel signal received by the
array. Notice that this approach assumes that each input single-channel signal is received at the
origin of the array by a single microphone.
fs = 8000;
collector = phased.WidebandCollector('Sensor',ula,'PropagationSpeed',c,...
'SampleRate',fs,'NumSubbands',1000,'ModulatedInput', false);
t_duration = 3; % 3 seconds
t = 0:1/fs:t_duration-1/fs;
Acoustic Beamforming Using a Microphone Array
We generate a white noise signal with a power of 1e-4 watts to represent the thermal noise for each
sensor. A local random number stream ensures reproducible results.
prevS = rng(2008);
noisePwr = 1e-4; % noise power
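The playback device referenced by isAudioSupported and audioWriter in the loop below can be set up as sketched here. This requires Audio Toolbox, and the device-availability check is an assumption about the original setup code, which was not preserved in this copy.

```matlab
% Set up streaming audio playback (requires Audio Toolbox)
audioWriter = audioDeviceWriter('SampleRate',fs, ...
    'SupportVariableSizeInput',true);
% Treat playback as supported only if an output device is present
isAudioSupported = (length(getAudioDevices(audioWriter)) > 1);
```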
We now start the simulation. At the output, the received signal is stored in a 10-column matrix. Each
column of the matrix represents the signal collected by one microphone. Note that we are also
playing back the audio using the streaming approach during the simulation.
% preallocate
NSampPerFrame = 1000;
NTSample = t_duration*fs;
sigArray = zeros(NTSample,Nele);
voice_dft = zeros(NTSample,1);
voice_cleanspeech = zeros(NTSample,1);
voice_laugh = zeros(NTSample,1);
dftFileReader = dsp.AudioFileReader('dft_voice_8kHz.wav',...
'SamplesPerFrame',NSampPerFrame);
speechFileReader = dsp.AudioFileReader('cleanspeech_voice_8kHz.wav',...
'SamplesPerFrame',NSampPerFrame);
laughterFileReader = dsp.AudioFileReader('laughter_8kHz.wav',...
'SamplesPerFrame',NSampPerFrame);
% simulate
for m = 1:NSampPerFrame:NTSample
sig_idx = m:m+NSampPerFrame-1;
x1 = dftFileReader();
x2 = speechFileReader();
x3 = 2*laughterFileReader();
temp = collector([x1 x2 x3],...
[ang_dft ang_cleanspeech ang_laughter]) + ...
sqrt(noisePwr)*randn(NSampPerFrame,Nele);
if isAudioSupported
play(audioWriter,0.5*temp(:,3));
end
sigArray(sig_idx,:) = temp;
voice_dft(sig_idx) = x1;
voice_cleanspeech(sig_idx) = x2;
voice_laugh(sig_idx) = x3;
end
Notice that the laughter masks the speech signals, rendering them unintelligible. We can plot the
signal in channel 3 as follows:
plot(t,sigArray(:,3));
xlabel('Time (sec)'); ylabel ('Amplitude (V)');
title('Signal Received at Channel 3'); ylim([-3 3]);
The time delay beamformer compensates for the arrival time differences across the array for a signal
coming from a specific direction. The time aligned multichannel signals are coherently averaged to
improve the signal-to-noise ratio (SNR). Now, define a steering angle corresponding to the incident
direction of the first speech signal and construct a time delay beamformer.
angSteer = ang_dft;
beamformer = phased.TimeDelayBeamformer('SensorArray',ula,...
'SampleRate',fs,'Direction',angSteer,'PropagationSpeed',c)
beamformer =
phased.TimeDelayBeamformer with properties:
Next, we process the synthesized signal, plot and listen to the output of the conventional beamformer.
Again, we play back the beamformed audio signal during the processing.
signalsource = dsp.SignalSource('Signal',sigArray,...
'SamplesPerFrame',NSampPerFrame);
cbfOut = zeros(NTSample,1);
for m = 1:NSampPerFrame:NTSample
temp = beamformer(signalsource());
if isAudioSupported
play(audioWriter,temp);
end
cbfOut(m:m+NSampPerFrame-1,:) = temp;
end
plot(t,cbfOut);
xlabel('Time (Sec)'); ylabel ('Amplitude (V)');
title('Time Delay Beamformer Output'); ylim([-3 3]);
One can measure the speech enhancement by the array gain, which is the ratio of output signal-to-
interference-plus-noise ratio (SINR) to input SINR.
agCbf = pow2db(mean((voice_cleanspeech+voice_laugh).^2+noisePwr)/...
mean((cbfOut - voice_dft).^2))
agCbf = 9.5022
The first speech signal begins to emerge in the time delay beamformer output. We obtain an SINR
improvement of about 9.5 dB. However, the background laughter is still comparable to the speech. To
obtain better beamformer performance, use a Frost beamformer.
By attaching FIR filters to each sensor, the Frost beamformer has more beamforming weights to
suppress the interference. It is an adaptive algorithm that places nulls at learned interference
directions to better suppress the interference. In the steering direction, the Frost beamformer uses
distortionless constraints to ensure desired signals are not suppressed. Let us create a Frost
beamformer with a 20-tap FIR after each sensor.
frostbeamformer = ...
phased.FrostBeamformer('SensorArray',ula,'SampleRate',fs,...
'PropagationSpeed',c,'FilterLength',20,'DirectionSource','Input port');
reset(signalsource);
FrostOut = zeros(NTSample,1);
for m = 1:NSampPerFrame:NTSample
FrostOut(m:m+NSampPerFrame-1,:) = ...
frostbeamformer(signalsource(),ang_dft);
end
We can play and plot the entire audio signal once it is processed.
if isAudioSupported
play(audioWriter,FrostOut);
end
plot(t,FrostOut);
xlabel('Time (sec)'); ylabel ('Amplitude (V)');
title('Frost Beamformer Output'); ylim([-3 3]);
agFrost = 14.4385
Notice that the interference is now canceled. The Frost beamformer has an array gain of about 14.4
dB, roughly 4.9 dB higher than that of the time delay beamformer. The performance improvement is
impressive but comes at a high computational cost. In the preceding example, a 20-tap FIR filter is
used for each microphone. With all 10 sensors, one needs to invert a 200-by-200 matrix, which may
be expensive in real-time processing.
Next, we want to steer the array toward the direction of the second speech signal. Suppose we do
not know its exact direction, but only a rough estimate of -5 degrees in azimuth and 5 degrees in
elevation.
release(frostbeamformer);
ang_cleanspeech_est = [-5; 5]; % Estimated steering direction
reset(signalsource);
FrostOut2 = zeros(NTSample,1);
for m = 1:NSampPerFrame:NTSample
FrostOut2(m:m+NSampPerFrame-1,:) = frostbeamformer(signalsource(),...
ang_cleanspeech_est);
end
if isAudioSupported
play(audioWriter,FrostOut2);
end
plot(t,FrostOut2);
xlabel('Time (sec)'); ylabel ('Amplitude (V)');
title('Frost Beamformer Output'); ylim([-3 3]);
agFrost2 = 6.1927
The speech is barely audible. Despite the 6.2 dB gain from the beamformer, performance suffers
from the inaccurate steering direction. One way to improve the robustness of the Frost beamformer
is diagonal loading. This approach adds a small quantity to the diagonal elements of the estimated
covariance matrix. Here we use a diagonal loading value of 1e-3.
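Diagonal loading itself is simple to sketch in isolation. The snippet below is purely illustrative; the dimensions are arbitrary and unrelated to the Frost beamformer object. It shows how adding delta*eye(N) to a rank-deficient sample covariance matrix restores full rank and a usable inverse:

```matlab
% Illustrative sketch of diagonal loading (dimensions are arbitrary)
rng(1);
N = 20;                                   % number of adaptive weights
K = 15;                                   % fewer snapshots than weights
X = (randn(K,N) + 1i*randn(K,N))/sqrt(2); % complex data snapshots
R = (X'*X)/K;                             % rank-deficient covariance estimate
Rdl = R + 1e-3*eye(N);                    % diagonally loaded covariance
[rank(R) rank(Rdl)]                       % loading restores full rank
```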
% Specify diagonal loading value
release(frostbeamformer);
frostbeamformer.DiagonalLoadingFactor = 1e-3;
reset(signalsource);
FrostOut2_dl = zeros(NTSample,1);
for m = 1:NSampPerFrame:NTSample
FrostOut2_dl(m:m+NSampPerFrame-1,:) = ...
frostbeamformer(signalsource(),ang_cleanspeech_est);
end
if isAudioSupported
play(audioWriter,FrostOut2_dl);
end
plot(t,FrostOut2_dl);
xlabel('Time (sec)'); ylabel ('Amplitude (V)');
title('Frost Beamformer Output'); ylim([-3 3]);
agFrost2_dl = 6.4788
Now the output speech signal is improved and we obtain a 0.3 dB gain improvement from the
diagonal loading technique.
release(frostbeamformer);
release(signalsource);
if isAudioSupported
pause(3); % flush out AudioPlayer buffer
release(audioWriter);
end
rng(prevS);
Summary
This example shows how to use time domain beamformers to retrieve speech signals from noisy
microphone array measurements. The example also shows how to simulate an interference-dominant
signal received by a microphone array. The example used both time delay and the Frost beamformers
and compared their performance. The Frost beamformer has a better interference suppression
capability. The example also illustrates the use of diagonal loading to improve the robustness of the
Frost beamformer.
Reference
[1] O. L. Frost III, "An algorithm for linearly constrained adaptive array processing," Proceedings of
the IEEE, Vol. 60, No. 8, Aug. 1972, pp. 926-935.
Beamforming for MIMO-OFDM Systems
Introduction
The term MIMO describes a system in which multiple transmitters or multiple receivers are present.
In practice, such a system can take many different forms, such as a single-input-multiple-output
(SIMO) or a multiple-input-single-output (MISO) system. This example illustrates a downlink MISO
system. An 8-element ULA is deployed at the base station as the transmitter, while the mobile unit is
the receiver with a single antenna.
The rest of the system is configured as follows. The transmitter power is 9 watts and the transmit
gain is -8 dB. The mobile receiver is stationary, located 2750 meters away, 3 degrees off the
transmitter's boresight. An interferer with a power of 1 watt and a gain of -20 dB is located at
9000 meters, 20 degrees off the transmitter's boresight.
% Tunable parameters
tp.txPower = 9; % watt
tp.txGain = -8; % dB
tp.mobileRange = 2750; % m
tp.mobileAngle = 3; % degrees
tp.interfPower = 1; % watt
tp.interfGain = -20; % dB
tp.interfRange = 9000; % m
tp.interfAngle = 20; % degrees
tp.numTXElements = 8;
tp.steeringAngle = 0; % degrees
tp.rxGain = 108.8320 - tp.txGain; % dB
numTx= tp.numTXElements;
helperPlotMIMOEnvironment(gc, tp);
Signal Transmission
[encoder,scrambler,modulatorOFDM,steeringvec,transmitter,...
radiator,pilots,numDataSymbols,frmSz] = helperMIMOTxSetup(gc,tp);
There are many components in the transmitter subsystem, such as the convolutional encoder, the
scrambler, the QAM modulator, the OFDM modulator, and so on. The message is first converted to an
information bit stream and then passed through source coding and modulation stages to prepare for
the radiation.
In an OFDM system, the data is carried by multiple subcarriers that are orthogonal to each other.
The data stream is then duplicated to all radiating elements in the transmitting array.
In a MIMO system, it is also possible to separate multiple users via spatial division multiplexing
(SDMA). In these situations, the data stream is often modulated by a weight corresponding to the
desired direction so that, once radiated, the signal is maximized in that direction. Because in a MIMO
channel the signal radiated from different elements in an array may go through different propagation
environments, the signal radiated from each antenna should be propagated individually. This can be
achieved by setting CombineRadiatedSignals to false on the phased.Radiator component.
radiator.CombineRadiatedSignals = false;
To achieve precoding, the data stream radiated from each antenna in the array is modulated by a
phase shift corresponding to its radiating direction. The goal of this precoding is to ensure these data
streams add in phase if the array is steered toward that direction. Precoding can be specified as
weights used at the radiator.
wR = steeringvec(gc.fc,[-tp.mobileAngle;0]);
Meanwhile, the array is also steered toward a given steering angle, so the total weights are a
combination of both precoding and the steering weights.
wT = steeringvec(gc.fc,[tp.steeringAngle;0]);
weight = wT.* wR;
txOFDM = radiator(txOFDM,repmat([tp.mobileAngle;0],1,numTx),conj(weight));
Note that the transmitted signal, txOFDM, is a matrix whose columns represent data streams
radiated from the corresponding elements in the transmit array.
Signal Propagation
Next, the signal propagates through a MIMO channel. In general, there are two propagation effects
on the received signal strength that are of interest: one of them is the spreading loss due to the
propagation distance, often termed as the free space path loss; and the other is the fading due to
multipath. This example models both effects.
[channel,interferenceTransmitter,toRxAng,spLoss] = ...
helperMIMOEnvSetup(gc,tp);
[sigFade, chPathG] = channel(txOFDM);
sigLoss = sigFade/sqrt(db2pow(spLoss(1)));
To simulate a more realistic mobile environment, the next section also inserts an interference
source. Note that in a wireless communication system, the interference is often another mobile user.
Signal Reception
The receiving antenna collects both the propagated signal as well as the interference and passes
them to the receiver to recover the original information embedded in the signal. Just like the transmit
end of the system, the receiver used in a MIMO-OFDM system also contains many stages, including
an OFDM demodulator, a QAM demodulator, a descrambler, an equalizer, and a Viterbi decoder.
[collector,receiver,demodulatorOFDM,descrambler,decoder] = ...
helperMIMORxSetup(gc,tp,numDataSymbols);
% OFDM Demodulation
rxOFDM = demodulatorOFDM(rxOFDM);
% Channel estimation
hD = helperIdealChannelEstimation(gc, numDataSymbols, chPathG);
% Equalization
rxEq = helperEqualizer(rxOFDM, hD, numTx);
rxBitsS = qamdemod(rxSymbs,gc.modMode,'UnitAveragePower',true,...
'OutputType','bit');
rxCoded = descrambler(rxBitsS);
rxDeCoded = decoder(rxCoded);
rxBits = rxDeCoded(1:frmSz);
A comparison of the decoded output with the original message stream suggests that the resulting
BER is too high for a communication system. The constellation diagram is also shown below.
ber = comm.ErrorRate;
measures = ber(txBits, rxBits);
fprintf('BER = %.2f%%; No. of Bits = %d; No. of errors = %d\n', ...
measures(1)*100,measures(3), measures(2));
The high BER is mainly due to the mobile being off the steering direction of the base station array. If
the mobile is aligned with the steering direction, the BER is greatly improved.
tp.steeringAngle = tp.mobileAngle;
reset(ber);
measures = ber(txBits, rxBits);
fprintf('BER = %.2f%%; No. of Bits = %d; No. of errors = %d\n', ...
measures(1)*100,measures(3), measures(2));
constdiag(rxSymbs);
Therefore, the system is very sensitive to steering errors. On the other hand, it is exactly this spatial
sensitivity that makes it possible for SDMA to distinguish multiple users in space.
The discussion so far assumes that the beam can be steered toward the exact desired direction. In
reality, however, this is often not true, especially when analog phase shifters are used. Analog phase
shifters have only limited precision and are characterized by the number of bits used to represent
phase shifts. For example, a 3-bit phase shifter can represent only 8 different angles within 360
degrees. Thus, if such quantization is included in the simulation, the system performance degrades,
which can be observed in the constellation plot.
% analog phase shifter with quantization effect
release(steeringvec);
steeringvec.NumPhaseShifterBits = 4;
wTq = steeringvec(gc.fc,[tp.steeringAngle;0]);
reset(ber);
measures = ber(txBits, rxBits);
fprintf('BER = %.2f%%; No. of Bits = %d; No. of errors = %d\n', ...
measures(1)*100,measures(3), measures(2));
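The quantization that an nBits phase shifter applies can be sketched independently of the toolbox objects. In the snippet below, nBits matches the 4-bit setting above, but the example phase value is an illustrative assumption:

```matlab
% Round a phase value to the nearest of 2^nBits levels over 2*pi
nBits = 4;
levels = 2^nBits;                % 16 representable phase values
step = 2*pi/levels;              % quantization step, radians
phase = 2.1;                     % desired phase shift (illustrative)
qphase = step*round(phase/step); % quantized phase actually applied
quantErr = phase - qphase        % residual error degrades the pattern
```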
Summary
This example shows a system level simulation of a point-to-point MIMO-OFDM system employing
beamforming. The simulation models many system components such as encoding, transmit
beamforming, precoding, multipath fading, channel estimation, equalization, and decoding.
Reference
[2] Theodore S. Rappaport et al., Millimeter Wave Wireless Communications, Prentice Hall, 2014.
See Also
phased.Radiator
First, we define the incoming signal. The signal's baseband representation is a simple rectangular
pulse as defined below:
For this example, we also assume that the signal's carrier frequency is 100 MHz.
carrierFreq = 100e6;
wavelength = physconst('LightSpeed')/carrierFreq;
We now define the uniform linear array (ULA) used to receive the signal. The array contains 10
isotropic antennas. The element spacing is half of the incoming wave's wavelength.
Conventional and Adaptive Beamformers
ula = phased.ULA('NumElements',10,'ElementSpacing',wavelength/2);
ula.Element.FrequencyRange = [90e6 110e6];
Then we use the collectPlaneWave method of the array object to simulate the received signal at the
array. Assuming the signal arrives at the array from 45 degrees in azimuth and 0 degrees in
elevation, the received signal can be modeled as
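The corresponding call, using the ula, s, and carrierFreq defined above (inputAngle is the name assumed for the [45; 0] direction referenced later in this example), is:

```matlab
inputAngle = [45; 0];    % [azimuth; elevation] of the incoming signal
x = collectPlaneWave(ula,s,inputAngle,carrierFreq);
```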
The received signal often includes some thermal noise. The noise can be modeled as complex,
Gaussian distributed random numbers. In this example, we assume that the noise power is 0.5 watts,
which corresponds to a 3 dB signal-to-noise ratio (SNR) at each antenna element.
% Create and reset a local random number generator so the result is the
% same every time.
rs = RandStream.create('mt19937ar','Seed',2008);
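The noise itself can then be drawn from this stream. The 0.5 watt value comes from the text above, and the generation mirrors the noise code used later in this example:

```matlab
noisePwr = 0.5;   % noise power, watts (3 dB SNR per element)
noise = sqrt(noisePwr/2)*(randn(rs,size(x)) + 1i*randn(rs,size(x)));
```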
The total return is the received signal plus the thermal noise.
rxSignal = x + noise;
The total return has ten columns, where each column corresponds to one antenna element. The plot
below shows the signal magnitude for the first two channels:
subplot(211);
plot(t,abs(rxSignal(:,1)));axis tight;
title('Pulse at Antenna 1');xlabel('Time (s)');ylabel('Magnitude (V)');
subplot(212);
plot(t,abs(rxSignal(:,2)));axis tight;
title('Pulse at Antenna 2');xlabel('Time (s)');ylabel('Magnitude (V)');
A beamformer can be considered a spatial filter that suppresses signals from all directions except
the desired one. A conventional beamformer simply delays the received signal at each antenna so
that the signals are aligned as if they arrived at all the antennas at the same time. In the narrowband
case, this is equivalent to multiplying the signal received at each antenna by a phase factor. To define
a phase shift beamformer pointing to the signal's incoming direction, we use
psbeamformer = phased.PhaseShiftBeamformer('SensorArray',ula,...
'OperatingFrequency',carrierFreq,'Direction',inputAngle,...
'WeightsOutputPort', true);
We can now obtain the output signal and weighting coefficients from the beamformer.
[yCbf,w] = psbeamformer(rxSignal);
% Plot the output
clf;
plot(t,abs(yCbf)); axis tight;
title('Output of Phase Shift Beamformer');
xlabel('Time (s)');ylabel('Magnitude (V)');
From the figure, we can see that the signal becomes much stronger compared to the noise. The
output SNR is approximately 10 times that of the received signal on a single antenna, because a
10-element array produces an array gain of 10.
To see the beam pattern of the beamformer, we plot the array response along 0 degrees elevation
with the weights applied. Since the array is a ULA with isotropic elements, it has ambiguity in front
and back of the array. Therefore, we only plot between -90 and 90 degrees in azimuth.
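A plot of this slice can be produced with the pattern function, mirroring the calls used later in this example; the exact argument list here is an assumption, with w being the weights returned by the beamformer above:

```matlab
pattern(ula,carrierFreq,-90:90,0,'Weights',w,'Type','powerdb',...
    'PropagationSpeed',physconst('LightSpeed'),'Normalize',false,...
    'CoordinateSystem','rectangular');
```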
You can see that the main beam of the beamformer is pointing in the desired direction (45 degrees),
as expected.
Next, we use the beamformer to enhance the received signal under interference conditions. In the
presence of strong interference, the target signal may be masked by the interference signal. For
example, interference from a nearby radio tower can blind the antenna array in that direction. If the
radio signal is strong enough, it can blind the radar in multiple directions, especially when the
desired signal is received by a sidelobe. Such scenarios are very challenging for a phase shift
beamformer, and therefore, adaptive beamformers are introduced to address this problem.
We model two interference signals arriving from 30 degrees and 50 degrees in azimuth. The
interference amplitudes are much higher than the desired signal shown in the previous scenario.
nSamp = length(t);
s1 = 10*randn(rs,nSamp,1);
s2 = 10*randn(rs,nSamp,1);
% interference at 30 degrees and 50 degrees
interference = collectPlaneWave(ula,[s1 s2],[30 50; 0 0],carrierFreq);
To illustrate the effect of interference, we reduce the noise to a minimal level. For the rest of the
example, let us assume a high SNR value of 50 dB at each antenna. We will see that, even though
there is almost no noise, the interference alone can make a phase shift beamformer fail.
noisePwr = 0.00001; % noise power, 50dB SNR
noise = sqrt(noisePwr/2)*(randn(rs,size(x))+1i*randn(rs,size(x)));
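The total received data then combines the target return, the interference, and the noise. Keeping the interference-plus-noise matrix separate is useful because it serves later as MVDR training data; the variable names below match those used in later code:

```matlab
rxInt = interference + noise;   % interference-plus-noise (training data)
rxSignal = x + rxInt;           % total received signal
```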
First, we'll try to apply the phase shift beamformer to retrieve the signal along the incoming
direction.
yCbf = psbeamformer(rxSignal);
From the figure, we can see that, because the interference signals are much stronger than the target
signal, we cannot extract the signal content.
MVDR Beamformer
To overcome the interference problem, we can use the MVDR beamformer, a popular adaptive
beamformer. The MVDR beamformer preserves the signal arriving along a desired direction, while
trying to suppress signals coming from other directions. In this case, the desired signal is at the
direction 45 degrees in azimuth.
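The MVDR beamformer construction was not preserved in this copy; a definition consistent with the self-nulling variant created later in this section is:

```matlab
mvdrbeamformer = phased.MVDRBeamformer('SensorArray',ula,...
    'Direction',inputAngle,'OperatingFrequency',carrierFreq,...
    'WeightsOutputPort',true);
```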
When we have access to target-free data, we can provide such information to the MVDR beamformer
by setting the TrainingInputPort property to true.
mvdrbeamformer.TrainingInputPort = true;
We apply the MVDR beamformer to the received signal. The plot shows the MVDR beamformer
output signal. You can see that the target signal can now be recovered.
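The corresponding call passes the training data as a second input; wMVDR is the weight vector used in the pattern plot that follows, and the plotting lines are an assumption about the original listing:

```matlab
[yMVDR,wMVDR] = mvdrbeamformer(rxSignal,rxInt);
plot(t,abs(yMVDR)); axis tight;
title('Output of MVDR Beamformer With Training Data');
xlabel('Time (s)'); ylabel('Magnitude (V)');
```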
Looking at the response pattern of the beamformer, we see two deep nulls along the interference
directions (30 and 50 degrees). The beamformer also has a gain of 0 dB along the target direction of
45 degrees. Thus, the MVDR beamformer preserves the target signal and suppresses the
interference signals.
pattern(ula,carrierFreq,-180:180,0,'Weights',wMVDR,'Type','powerdb',...
'PropagationSpeed',physconst('LightSpeed'),'Normalize',false,...
'CoordinateSystem','rectangular');
axis([-90 90 -80 20]);
hold on;
pattern(ula,carrierFreq,-180:180,0,'Weights',w,...
'PropagationSpeed',physconst('LightSpeed'),'Normalize',false,...
'Type','powerdb','CoordinateSystem','rectangular');
hold off;
legend('MVDR','PhaseShift')
Also shown in the figure is the response pattern from PhaseShift. We can see that the PhaseShift
pattern does not null out the interference at all.
On many occasions, we may not be able to separate the interference from the target signal, so the
MVDR beamformer has to calculate weights using data that includes the target signal. In this case, if
the target signal is received along a direction slightly different from the desired one, the MVDR
beamformer suppresses it. This occurs because the MVDR beamformer treats all signals, except the
one along the desired direction, as undesired interference. This effect is sometimes referred to as
"signal self nulling".
To illustrate this self nulling effect, we define an MVDR beamformer and set the TrainingInputPort
property to false.
mvdrbeamformer_selfnull = phased.MVDRBeamformer('SensorArray',ula,...
'Direction',inputAngle,'OperatingFrequency',carrierFreq,...
'WeightsOutputPort',true,'TrainingInputPort',false);
We then create a direction mismatch between the incoming signal direction and the desired direction.
Recall that the signal is impinging from 45 degrees in azimuth. If, with some a priori information, we
expect the signal to be arriving from 43 degrees in azimuth, then we use 43 degrees in azimuth as the
desired direction in the MVDR beamformer. However, since the real signal is arriving at 45 degrees in
azimuth, there is a slight mismatch in the signal direction.
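In code, the mismatched steering direction can be set as follows (expDir is an assumed variable name):

```matlab
expDir = [43; 0];                           % assumed arrival direction
mvdrbeamformer_selfnull.Direction = expDir; % desired steering direction
```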
When we apply the MVDR beamformer to the received signal, we see that the receiver cannot
differentiate the target signal and the interference.
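The weights produced here, wSn, are the ones used in the response-pattern plot that follows; the plotting lines are an assumption about the original listing:

```matlab
[ySn,wSn] = mvdrbeamformer_selfnull(rxSignal);
plot(t,abs(ySn)); axis tight;
title('Output of MVDR Beamformer With Signal Self Nulling');
xlabel('Time (s)'); ylabel('Magnitude (V)');
```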
When we look at the beamformer response pattern, we see that the MVDR beamformer tries to
suppress the signal arriving along 45 degrees because it is treated like an interference signal. The
MVDR beamformer is very sensitive to signal-steering vector mismatch, especially when we cannot
provide interference information.
pattern(ula,carrierFreq,-180:180,0,'Weights',wSn,'Type','powerdb',...
'PropagationSpeed',physconst('LightSpeed'),'Normalize',false,...
'CoordinateSystem','rectangular');
axis([-90 90 -40 25]);
LCMV Beamformer
To prevent signal self-nulling, we can use an LCMV beamformer, which allows us to put multiple
constraints along the target direction (steering vector). It reduces the chance that the target signal
will be suppressed when it arrives at a slightly different angle from the desired direction. First we
create an LCMV beamformer:
lcmvbeamformer = phased.LCMVBeamformer('WeightsOutputPort',true);
Now we need to create several constraints. To specify a constraint, we put corresponding entries in
both the constraint matrix, Constraint, and the desired response vector, DesiredResponse. Each
column in Constraint is a set of weights we can apply to the array and the corresponding entry in
DesiredResponse is the desired response we want to achieve when the weights are applied. For
example, to avoid self nulling in this example, we may want to add the following constraints to the
beamformer:
• Preserve the incoming signal from the expected direction (43 degrees in azimuth).
• To avoid self nulling, ensure that the response of the beamformer does not decline within +/- 2
degrees of the expected direction.
For all the constraints, the weights are given by the steering vectors that steer the array toward
those directions:
steeringvec = phased.SteeringVector('SensorArray',ula);
stv = steeringvec(carrierFreq,[43 41 45]);
The desired responses should be 1 for all three directions. The Constraint matrix and
DesiredResponse are given by:
lcmvbeamformer.Constraint = stv;
lcmvbeamformer.DesiredResponse = [1; 1; 1];
Then we apply the beamformer to the received signal. The plot below shows that the target signal
can be detected again even though there is a mismatch between the desired and the true signal
arrival directions.
[yLCMV,wLCMV] = lcmvbeamformer(rxSignal);
The LCMV response pattern shows that the beamformer puts the constraints along the specified
directions, while nulling the interference signals along 30 and 50 degrees. Here we only show the
pattern between 0 and 90 degrees in azimuth so that we can see the behavior of the response pattern
at the signal and interference directions better.
pattern(ula,carrierFreq,-180:180,0,'Weights',wLCMV,'Type','powerdb',...
'PropagationSpeed',physconst('LightSpeed'),'Normalize',false,...
'CoordinateSystem','rectangular');
axis([0 90 -40 35]);
pattern(ula,carrierFreq,-180:180,0,'Weights',wSn,...
'PropagationSpeed',physconst('LightSpeed'),'Normalize',false,...
'Type','powerdb','CoordinateSystem','rectangular');
hold off;
legend('LCMV','MVDR');
The effect of constraints can be better seen when comparing the LCMV beamformer's response
pattern to the MVDR beamformer's response pattern. Notice how the LCMV beamformer is able to
maintain a flat response region around 45 degrees in azimuth, while the MVDR beamformer
creates a null.
2D Array Beamforming
In this section, we illustrate the use of a beamformer with a uniform rectangular array (URA). The
beamformer can be applied to a URA in the same way as to the ULA. We only illustrate the MVDR
beamformer for the URA in this example. The usages of other algorithms are similar.
First, we define a URA consisting of 10 rows and 5 columns of isotropic antenna elements. The
spacing between rows is 0.4 wavelength, and the spacing between columns is 0.5 wavelength.
colSp = 0.5*wavelength;
rowSp = 0.4*wavelength;
ura = phased.URA('Size',[10 5],'ElementSpacing',[rowSp colSp]);
ura.Element.FrequencyRange = [90e5 110e6];
Consider the same source signal as was used in the previous sections. The source signal arrives at
the URA from 45 degrees in azimuth and 0 degrees in elevation. The received signal, including the
noise at the array, can be modeled as
x = collectPlaneWave(ura,s,inputAngle,carrierFreq);
noise = sqrt(noisePwr/2)*(randn(rs,size(x))+1i*randn(rs,size(x)));
Unlike a ULA, which can only differentiate angles in the azimuth direction, a URA can also
differentiate angles in the elevation direction. Therefore, we specify two interference signals arriving
along the directions [30;10] and [50;-5] degrees, respectively.
s1 = 10*randn(rs,nSamp,1);
s2 = 10*randn(rs,nSamp,1);
mvdrbeamformer = phased.MVDRBeamformer('SensorArray',ura,...
'Direction',inputAngle,'OperatingFrequency',carrierFreq,...
'TrainingInputPort',true,'WeightsOutputPort',true);
Finally, we apply the MVDR beamformer to the received signal and plot its output.
[yURA,w]= mvdrbeamformer(rxSignal,rxInt);
To see clearly that the beamformer puts nulls along the interference directions, we plot the
beamformer response pattern of the array at -5 degrees and 10 degrees in elevation. The figure
shows that the beamformer suppresses the interference signals along the [30;10] and [50;-5]
directions.
subplot(2,1,1);
pattern(ura,carrierFreq,-180:180,-5,'Weights',w,'Type','powerdb',...
'PropagationSpeed',physconst('LightSpeed'),'Normalize',false,...
'CoordinateSystem','rectangular');
title('Response Pattern at -5 Degrees Elevation');
axis([-90 90 -60 -5]);
subplot(2,1,2);
pattern(ura,carrierFreq,-180:180,10,'Weights',w,'Type','powerdb',...
'PropagationSpeed',physconst('LightSpeed'),'Normalize',false,...
'CoordinateSystem','rectangular');
title('Response Pattern at 10 Degrees Elevation');
axis([-90 90 -60 -5]);
Summary
In this example, we illustrated how to use a beamformer to retrieve the signal from a particular
direction using a ULA or a URA. The choice of the beamformer depends on the operating
environment. Adaptive beamformers provide superior interference rejection compared to
conventional beamformers. When the knowledge of the target direction is not accurate,
the LCMV beamformer is preferable because it prevents signal self-nulling.
Direction of Arrival Estimation with Beamscan, MVDR, and MUSIC
First, model a uniform linear array (ULA) containing 10 isotropic antennas spaced 0.5 meters apart.
ula = phased.ULA('NumElements',10,'ElementSpacing',0.5);
Assume that two narrowband signals impinge on the array. The first signal arrives from 40° in
azimuth and 0° in elevation, while the second signal arrives from -20° in azimuth and 0° in elevation.
The operating frequency of the system is 300 MHz.
c = physconst('LightSpeed');
fc = 300e6; % Operating frequency
lambda = c/fc;
pos = getElementPosition(ula)/lambda;
Nsamp = 1000;
Also, assume that the thermal noise power at each antenna is 0.01 Watts.
nPower = 0.01;
rs = rng(2007);
angs = [40 -20; 0 0]; % [azimuth; elevation] angles of the two signals
signal = sensorsig(pos,Nsamp,angs,nPower);
We want to estimate the two DOAs using the received signal. Because the signal is received by a ULA,
which is symmetric around its axis, we cannot obtain both azimuth and elevation at the same time.
Instead, we can estimate the broadside angle, which is measured from the broadside of the ULA. The
relationship between these angles is illustrated in the following figure:
broadsideAngle = az2broadside(angs(1,:),angs(2,:))
broadsideAngle =
40.0000 -20.0000
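The az2broadside conversion follows beta = arcsin(sin(az)*cos(el)), and the inverse mapping used later is az = arcsin(sin(beta)/cos(el)). A small Python/NumPy sketch of both mappings (an illustration of the geometry, not the toolbox code):

```python
import numpy as np

def az2broadside(az_deg, el_deg):
    """Broadside angle of a ULA: beta = asin(sin(az) * cos(el))."""
    az, el = np.deg2rad(az_deg), np.deg2rad(el_deg)
    return np.rad2deg(np.arcsin(np.sin(az) * np.cos(el)))

def broadside2az(bs_deg, el_deg):
    """Inverse mapping, valid when |sin(beta)| <= cos(el)."""
    bs, el = np.deg2rad(bs_deg), np.deg2rad(el_deg)
    return np.rad2deg(np.arcsin(np.sin(bs) / np.cos(el)))

print(az2broadside(40.0, 0.0))   # 40.0: equal when elevation is zero
print(az2broadside(40.0, 35.0))  # ~31.8: differs at nonzero elevation
print(broadside2az(az2broadside(40.0, 35.0), 35.0))  # ~40.0 round trip
```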
We can see that the two broadside angles are the same as the azimuth angles. In general, when the
elevation angle is zero and the azimuth angle is within [-90 90], the broadside angle is the same as
the azimuth angle. In the following we only perform the conversion when they are not equal.
The beamscan algorithm scans a conventional beam through a predefined scan region. Here we set
the scanning region to [-90 90] to cover all 180 degrees.
spatialspectrum = phased.BeamscanEstimator('SensorArray',ula,...
'OperatingFrequency',fc,'ScanAngles',-90:90);
By default, the beamscan estimator only produces a spatial spectrum across the scan region. Set the
DOAOutputPort property to true to obtain DOA estimates. Set the NumSignals property to 2 to find
the locations of the top two peaks.
spatialspectrum.DOAOutputPort = true;
spatialspectrum.NumSignals = 2;
We now obtain the spatial spectrum and the DOAs. The estimated DOAs show the correct values,
which are 40° and -20°.
[~,ang] = spatialspectrum(signal)
ang =
40 -20
plotSpectrum(spatialspectrum);
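For reference, the beamscan spectrum is simply the conventional beamformer output power P(theta) = a(theta)^H R a(theta) evaluated over the scan grid. The following Python/NumPy sketch reproduces the idea on a simulated 10-element half-wavelength ULA (the simulation parameters are illustrative assumptions, not the toolbox internals):

```python
import numpy as np

rng = np.random.default_rng(2007)
n_elem, n_samp = 10, 1000
n = np.arange(n_elem)

def steer(ang_deg):
    """Steering vectors of a half-wavelength ULA, angles from broadside."""
    return np.exp(1j * np.pi * n[:, None] * np.sin(np.deg2rad(np.atleast_1d(ang_deg))))

# Two uncorrelated sources at 40 and -20 degrees plus sensor noise (20 dB SNR)
A = steer([40.0, -20.0])
s = rng.standard_normal((2, n_samp)) + 1j * rng.standard_normal((2, n_samp))
x = A @ s + 0.1 * (rng.standard_normal((n_elem, n_samp)) + 1j * rng.standard_normal((n_elem, n_samp)))
R = x @ x.conj().T / n_samp

# Beamscan spectrum: conventional beam power a^H R a over the scan grid
scan = np.arange(-90, 91)
As = steer(scan)
P = np.real(np.sum(As.conj() * (R @ As), axis=0))

# Pick the two largest local maxima as the DOA estimates
locmax = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(scan[locmax[np.argsort(P[locmax])[-2:]]])
print(est)  # peaks near [-20, 40]
```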
The conventional beam cannot resolve two closely spaced signals. When two signals arrive from
directions separated by less than the beamwidth, beamscan fails to estimate their directions. To
illustrate this limitation, we simulate two received signals arriving from 30° and 40° in azimuth.
ang1 = [30; 0]; ang2 = [40; 0];
signal = sensorsig(pos,Nsamp,[ang1 ang2],nPower);
[~,ang] = spatialspectrum(signal)
ang =
35 71
The results differ from the true azimuth angles. Let's take a look at the output spectrum.
plotSpectrum(spatialspectrum);
The output spatial spectrum has only one dominant peak. Therefore, it cannot resolve these two
closely-spaced signals. When we try to estimate the DOA from the peaks of the beamscan output, we
get incorrect estimates. The beamscan object returns the two largest peaks as the estimated DOAs,
no matter how small the second peak is. In this case, beamscan returns the small peak at 71° as the
second estimate.
To resolve closely-spaced signals, we can use the minimum variance distortionless response (MVDR)
algorithm or the multiple signal classification (MUSIC) algorithm. First, we will examine the MVDR
estimator, which scans an MVDR beam over the specified region. Because an MVDR beam has a
smaller beamwidth, it has higher resolution.
mvdrspatialspect = phased.MVDREstimator('SensorArray',ula,...
'OperatingFrequency',fc,'ScanAngles',-90:90,...
'DOAOutputPort',true,'NumSignals',2);
[~,ang] = mvdrspatialspect(signal)
ang =
30 40
plotSpectrum(mvdrspatialspect);
The MVDR algorithm correctly estimates the DOAs that are unresolvable by beamscan. The improved
resolution comes with a price. The MVDR is more sensitive to sensor position errors. In
circumstances where sensor positions are inaccurate, MVDR could produce a worse spatial spectrum
than beamscan. Moreover, if we further reduce the difference of two signal directions to a level that
is smaller than the beamwidth of an MVDR beam, the MVDR estimator will also fail.
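The MVDR (Capon) spectrum replaces the conventional beam power with P(theta) = 1 / (a(theta)^H R^-1 a(theta)), which sharpens the peaks. A Python/NumPy sketch on the same kind of simulated data (illustrative assumptions, not the toolbox implementation):

```python
import numpy as np

rng = np.random.default_rng(2007)
n_elem, n_samp = 10, 1000
n = np.arange(n_elem)
steer = lambda a: np.exp(1j * np.pi * n[:, None] * np.sin(np.deg2rad(np.atleast_1d(a))))

# Two closely spaced sources at 30 and 40 degrees (20 dB SNR)
A = steer([30.0, 40.0])
s = rng.standard_normal((2, n_samp)) + 1j * rng.standard_normal((2, n_samp))
x = A @ s + 0.1 * (rng.standard_normal((n_elem, n_samp)) + 1j * rng.standard_normal((n_elem, n_samp)))
R = x @ x.conj().T / n_samp

# MVDR (Capon) spectrum: P(theta) = 1 / (a^H R^-1 a)
scan = np.arange(-90, 91)
As = steer(scan)
P = 1.0 / np.real(np.sum(As.conj() * np.linalg.solve(R, As), axis=0))

locmax = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(scan[locmax[np.argsort(P[locmax])[-2:]]])
print(est)  # peaks near [30, 40]
```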
The MUSIC algorithm can also be used to resolve these closely-spaced signals. Estimate the
directions of arrival of the two sources and compare the spatial spectrum of MVDR to the spatial
spectrum of MUSIC.
musicspatialspect = phased.MUSICEstimator('SensorArray',ula,...
'OperatingFrequency',fc,'ScanAngles',-90:90,...
'DOAOutputPort',true,'NumSignalsSource','Property','NumSignals',2);
[~,ang] = musicspatialspect(signal)
ang =
30 40
ymvdr = mvdrspatialspect(signal);
ymusic = musicspatialspect(signal);
helperPlotDOASpectra(mvdrspatialspect.ScanAngles,...
musicspatialspect.ScanAngles,ymvdr,ymusic,'ULA')
The directions of arrival using MUSIC are correct, and MUSIC provides even better spatial resolution
than MVDR. MUSIC, like MVDR, is sensitive to sensor position errors. In addition, the number of
sources must be known or accurately estimated. When the number of sources specified is incorrect,
MVDR and Beamscan may simply return insignificant peaks from the correct spatial spectrum. In
contrast, the MUSIC spatial spectrum itself may be inaccurate when the number of sources is not
specified correctly. In addition, the amplitudes of MUSIC spectral peaks cannot be interpreted as the
power of the sources.
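The MUSIC pseudospectrum is built from the noise subspace: P(theta) = 1 / ||E_n^H a(theta)||^2, where E_n collects the eigenvectors of R beyond the n_src largest. A Python/NumPy sketch of this computation (illustrative assumptions, not the toolbox implementation):

```python
import numpy as np

rng = np.random.default_rng(2007)
n_elem, n_samp, n_src = 10, 1000, 2
n = np.arange(n_elem)
steer = lambda a: np.exp(1j * np.pi * n[:, None] * np.sin(np.deg2rad(np.atleast_1d(a))))

A = steer([30.0, 40.0])
s = rng.standard_normal((n_src, n_samp)) + 1j * rng.standard_normal((n_src, n_samp))
x = A @ s + 0.1 * (rng.standard_normal((n_elem, n_samp)) + 1j * rng.standard_normal((n_elem, n_samp)))
R = x @ x.conj().T / n_samp

# Noise subspace: eigenvectors beyond the n_src largest eigenvalues of R
_, V = np.linalg.eigh(R)              # eigenvalues in ascending order
En = V[:, : n_elem - n_src]

# MUSIC pseudospectrum: 1 / ||En^H a(theta)||^2
scan = np.arange(-90, 91)
P = 1.0 / np.sum(np.abs(En.conj().T @ steer(scan)) ** 2, axis=0)

locmax = np.where((P[1:-1] > P[:-2]) & (P[1:-1] > P[2:]))[0] + 1
est = np.sort(scan[locmax[np.argsort(P[locmax])[-2:]]])
print(est)  # peaks near [30, 40]
```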
For a ULA, additional high resolution algorithms can further exploit the special geometry of the ULA.
See “High Resolution Direction of Arrival Estimation” on page 17-214.
Although we can only estimate broadside angles using a ULA, we can convert the estimated
broadside angles to azimuth angles if we know their incoming elevations. We now model two signals
coming from 35° in elevation and estimate their corresponding broadside angles.
ang1 = [40; 35]; ang2 = [15; 35];
signal = sensorsig(pos,Nsamp,[ang1 ang2],nPower);
[~,ang] = spatialspectrum(signal)
ang =
32 12
The resulting broadside angles are different from either the azimuth or elevation angles. We can
convert the broadside angles to the azimuth angles if we know the elevation.
ang = broadside2az(ang,35)
ang =
40.3092 14.7033
Next, we illustrate DOA estimation using a 10-by-5 uniform rectangular array (URA). A URA can
estimate both azimuth and elevation angles. The element spacing is 0.3 meters between each row
and 0.5 meters between each column.
ura = phased.URA('Size',[10 5],'ElementSpacing',[0.3 0.5]);
Assume that two signals impinge on the URA. The first signal arrives from 40° in azimuth and 45° in
elevation, while the second signal arrives from -20° in azimuth and 20° in elevation.
ang1 = [40; 45]; ang2 = [-20; 20];
signal = sensorsig(getElementPosition(ura)/lambda,Nsamp,[ang1 ang2],nPower);
Create a 2-D beamscan estimator object from the URA. This object uses the same algorithm as the 1-
D case except that it scans over both azimuth and elevation instead of broadside angles.
azelspectrum = phased.BeamscanEstimator2D('SensorArray',ura,...
'OperatingFrequency',fc,...
'AzimuthScanAngles',-45:45,'ElevationScanAngles',10:60,...
'DOAOutputPort',true,'NumSignals',2);
The DOA output is a 2-by-N matrix where N is the number of signal directions. The first row contains
azimuth angles while the second row contains elevation angles.
[~,ang] = azelspectrum(signal)
ang =
40 -20
45 20
plotSpectrum(azelspectrum);
Similar to the ULA case, we use a 2-D version of the MVDR algorithm. Since our knowledge of the
sensor positions is perfect, we expect the MVDR spectrum to have a better resolution than beamscan.
mvdrazelspectrum = phased.MVDREstimator2D('SensorArray',ura,...
'OperatingFrequency',fc,...
'AzimuthScanAngles',-45:45,'ElevationScanAngles',10:60,...
'DOAOutputPort',true,'NumSignals',2);
[~,ang] = mvdrazelspectrum(signal)
ang =
-20 40
20 45
plotSpectrum(mvdrazelspectrum);
We can also use the MUSIC algorithm to estimate the directions of arrival of the two sources.
musicazelspectrum = phased.MUSICEstimator2D('SensorArray',ura,...
'OperatingFrequency',fc,...
'AzimuthScanAngles',-45:45,'ElevationScanAngles',10:60,...
'DOAOutputPort',true,'NumSignalsSource','Property','NumSignals',2);
[~,ang] = musicazelspectrum(signal)
ang =
-20 40
20 45
plotSpectrum(musicazelspectrum);
To compare MVDR and MUSIC estimators, let's consider sources located even closer together. Using
MVDR and MUSIC, compute the spatial spectrum of two sources located at 10° in azimuth and
separated by 3° in elevation.
signal = sensorsig(getElementPosition(ura)/lambda,Nsamp,[10 10; 20 23],nPower);
[~,angmvdr] = mvdrazelspectrum(signal)
angmvdr =
10 -27
22 21
[~,angmusic] = musicazelspectrum(signal)
angmusic =
10 10
23 20
In this case, only MUSIC correctly estimates the directions of arrival for the two sources. To see why,
plot an elevation cut of each spatial spectrum at 10° azimuth.
ymvdr = mvdrazelspectrum(signal);
ymusic = musicazelspectrum(signal);
helperPlotDOASpectra(mvdrazelspectrum.ElevationScanAngles,...
musicazelspectrum.ElevationScanAngles,ymvdr(:,56),ymusic(:,56),'URA')
Since the MUSIC spectrum has better spatial resolution than MVDR, MUSIC correctly identifies the
sources while MVDR fails to do so.
Summary
In this example, we showed how to apply the beamscan, MVDR, and MUSIC techniques to the DOA
estimation problem. We used both techniques to estimate broadside angles for the signals received by
a ULA. The MVDR algorithm has better resolution than beamscan when there is no sensor position
error. MUSIC has even better resolution than MVDR, but the number of sources must be known. We
also illustrated how to convert between azimuth and broadside angles. Next, we applied beamscan,
MVDR, and MUSIC to estimate both azimuth and elevation angles using a URA. In all of these cases,
we plotted the output spatial spectrum, and found again that MUSIC had the best spatial resolution.
Beamscan, MVDR, and MUSIC are techniques that can be applied to any type of array, but for ULAs
and URAs there are additional high resolution techniques that can further exploit the array geometry.
Define a uniform linear array (ULA) composed of 10 isotropic antennas. The array element spacing is
0.5 meters.
N = 10;
ula = phased.ULA('NumElements',N,'ElementSpacing',0.5)
ula =
phased.ULA with properties:
Simulate the array output for two incident signals. Both signals are incident from 90° in azimuth.
Their elevation angles are 73° and 68° respectively. In this example, we assume that these two
directions are unknown and need to be estimated. Simulate the baseband received signal at the array
demodulated from an operating frequency of 300 MHz.
Because a ULA is symmetric around its axis, a DOA algorithm cannot uniquely determine azimuth and
elevation. Therefore, the results returned by these high-resolution DOA estimators are in the form of
broadside angles. An illustration of broadside angles can be found in the following figure.
High Resolution Direction of Arrival Estimation
ang_true = az2broadside([90 90],[73 68])
ang_true = 1×2
17.0000 22.0000
Assume that we know a priori that there are two sources. To estimate the DOA, use the root-MUSIC
technique. Construct a DOA estimator using the root-MUSIC algorithm.
rootmusicangle = phased.RootMUSICEstimator('SensorArray',ula,...
'OperatingFrequency',fc,...
'NumSignalsSource','Property','NumSignals',2)
rootmusicangle =
phased.RootMUSICEstimator with properties:
Because the array response vector of a ULA is conjugate symmetric, we can use forward-backward
(FB) averaging to perform computations with real matrices and reduce computational complexity. FB-
based estimators also have a lower variance and reduce the correlation between signals.
rootmusicangle.ForwardBackwardAveraging = true;
ang = rootmusicangle(signal)
ang = 1×2
16.9960 21.9964
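Forward-backward averaging replaces the sample covariance R with R_fb = (R + J conj(R) J)/2, where J is the exchange (flip) matrix; this enforces the persymmetry implied by the conjugate-symmetric ULA response. A short Python/NumPy sketch of the transformation (an illustration, not the toolbox code):

```python
import numpy as np

def fb_average(R):
    """Forward-backward averaged covariance: R_fb = (R + J conj(R) J) / 2."""
    J = np.eye(R.shape[0])[::-1]      # exchange (flip) matrix
    return 0.5 * (R + J @ R.conj() @ J)

# FB averaging makes any Hermitian sample covariance persymmetric
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 50)) + 1j * rng.standard_normal((6, 50))
R = X @ X.conj().T / 50
Rfb = fb_average(R)
J = np.eye(6)[::-1]
print(np.allclose(Rfb, J @ Rfb.conj() @ J))  # True: persymmetric
```

Persymmetric covariances can be mapped to real matrices by a unitary transform, which is the source of the computational savings mentioned above.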
We can also use an ESPRIT DOA estimator. As in the case of root-MUSIC, set the
ForwardBackwardAveraging property to true. This algorithm is also called unitary ESPRIT.
espritangle = phased.ESPRITEstimator('SensorArray',ula,...
'OperatingFrequency',fc,'ForwardBackwardAveraging',true,...
'NumSignalsSource','Property','NumSignals',2)
espritangle =
phased.ESPRITEstimator with properties:
ang = espritangle(signal)
ang = 1×2
21.9988 16.9748
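The core of least-squares ESPRIT can be sketched compactly: take the signal subspace E_s, form the two maximally overlapping subarray blocks, solve E_s1 Phi ≈ E_s2, and read the broadside angles from the phases of the eigenvalues of Phi. The Python/NumPy sketch below (simulation parameters are illustrative assumptions; the unitary and forward-backward refinements are omitted) recovers the 17° and 22° broadside angles of this example:

```python
import numpy as np

rng = np.random.default_rng(1)
n_elem, n_samp, n_src = 10, 1000, 2
n = np.arange(n_elem)
steer = lambda a: np.exp(1j * np.pi * n[:, None] * np.sin(np.deg2rad(np.atleast_1d(a))))

# Sources at the broadside angles of this example, 17 and 22 degrees (20 dB SNR)
A = steer([17.0, 22.0])
s = rng.standard_normal((n_src, n_samp)) + 1j * rng.standard_normal((n_src, n_samp))
x = A @ s + 0.1 * (rng.standard_normal((n_elem, n_samp)) + 1j * rng.standard_normal((n_elem, n_samp)))
R = x @ x.conj().T / n_samp

# Signal subspace: eigenvectors of the n_src largest eigenvalues
_, V = np.linalg.eigh(R)
Es = V[:, -n_src:]

# Shift invariance of the two overlapping subarrays: Es[1:] ~ Es[:-1] @ Phi
Phi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
phases = np.angle(np.linalg.eigvals(Phi))          # = pi * sin(theta_k)
est = np.sort(np.rad2deg(np.arcsin(phases / np.pi)))
print(est)  # approximately [17, 22]
```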
Finally, use the MUSIC DOA estimator. MUSIC also supports forward-backward averaging. Unlike
ESPRIT and root-MUSIC, MUSIC computes a spatial spectrum for specified broadside scan angles.
Directions of arrival correspond to peaks in the MUSIC spatial spectrum.
musicangle = phased.MUSICEstimator('SensorArray',ula,...
'OperatingFrequency',fc,'ForwardBackwardAveraging',true,...
'NumSignalsSource','Property','NumSignals',2,...
'DOAOutputPort',true)
musicangle =
phased.MUSICEstimator with properties:
[~,ang] = musicangle(signal)
ang = 1×2
17 22
plotSpectrum(musicangle)
Directions of arrival for MUSIC are limited to the scan angles in the ScanAngles property. Because
the true directions of arrival in this example coincide with the search angles in ScanAngles, MUSIC
provides precise DOA angle estimates. In practice, root-MUSIC provides superior resolution to
MUSIC. However, the MUSIC algorithm can also be used for DOA estimation in both azimuth and
elevation using a 2-D array. See “Direction of Arrival Estimation with Beamscan, MVDR, and MUSIC”
on page 17-203.
In practice, you generally do not know the number of signal sources, and need to estimate the
number of sources from the received signal. You can estimate the number of signal sources by
specifying 'Auto' for the NumSignalsSource property and choosing either 'AIC' or 'MDL' for the
NumSignalsMethod property. For AIC, the Akaike information criterion (AIC) is used, and for MDL,
the minimum description length (MDL) criterion is used.
Before you can set NumSignalsSource, you must release the DOA object because it is locked to
improve efficiency during processing.
release(espritangle);
espritangle.NumSignalsSource = 'Auto';
espritangle.NumSignalsMethod = 'AIC';
ang = espritangle(signal)
ang = 1×2
21.9988 16.9748
In addition to forward-backward averaging, other methods can reduce computational complexity. One
of these approaches is to solve an equivalent problem with reduced dimensions in beamspace. While
the ESPRIT algorithm performs the eigenvalue decomposition (EVD) of a 10x10 real matrix in our
example, the beamspace version can reduce the problem to the EVD of a 3x3 real matrix. This
technique uses a priori knowledge of the sector where the signals are located to position the center of
the beam fan. In this example, point the beam fan to 20° in azimuth.
bsespritangle = phased.BeamspaceESPRITEstimator('SensorArray',ula,...
'OperatingFrequency',fc,...
'NumBeamsSource','Property','NumBeams',3,...
'BeamFanCenter',20);
ang = bsespritangle(signal)
ang = 1×2
21.9875 16.9943
Another technique is the root-weighted subspace fitting (WSF) algorithm. This algorithm is iterative
and the most demanding in terms of computational complexity. You can set the maximum number of
iterations by specifying the MaximumIterationCount property to maintain the cost below a specific
limit.
rootwsfangle = phased.RootWSFEstimator('SensorArray',ula,...
'OperatingFrequency',fc,'MaximumIterationCount',2);
ang = rootwsfangle(signal)
ang = 1×2
21.9962 16.9961
Optimizing Performance
In addition to FB averaging, you can use row weighting to improve the statistical performance of the
element-space ESPRIT estimator. Row weighting is a technique that applies different weights to the
rows of the signal subspace matrix. The row weighting parameter determines the maximum weight.
In most cases, it is chosen to be as large as possible. However, its value can never be greater than
(N-1)/2, where N is the number of elements of the array.
release(espritangle);
espritangle.RowWeighting = 4
espritangle =
phased.ESPRITEstimator with properties:
ang = espritangle(signal)
ang = 1×2
21.9884 17.0003
If several sources are correlated or coherent (as in multipath environments), the spatial covariance
matrix becomes rank deficient and subspace-based DOA estimation methods may fail. To show this,
model a received signal composed of four narrowband components. Assume that the second and third
signals are multipath reflections of the first source, with magnitudes equal to 1/4 and 1/2 that of the
first source, respectively.
scov = eye(4);
magratio = [1;0.25;0.5];
scov(1:3,1:3) = magratio*magratio';
All signals are incident at 0° elevation, with azimuth incident angles of -23°, 0°, 12°, and 40°.
% Incident azimuth
az_ang = [-23 0 12 40];
% When the elevation is zero, the azimuth within [-90 90] is the same as
% the broadside angle.
el_ang = zeros(1,4);
Compare the performance of the DOA algorithm when sources are coherent. To simplify the example,
run only one trial per algorithm. Given the high SNR, the results will be a good indicator of
estimation accuracy.
First, verify that the AIC criterion underestimates the number of sources, causing the unitary ESPRIT
algorithm to give incorrect estimates. The AIC estimates the number of sources as two because three
sources are correlated.
release(espritangle);
espritangle.NumSignalsSource = 'Auto';
espritangle.NumSignalsMethod = 'AIC';
ang = espritangle(signal)
ang = 1×2
-15.3535 40.0024
The root-WSF algorithm is robust in the context of correlated signals. With the correct number of
sources as an input, the algorithm correctly estimates the directions of arrival.
release(rootwsfangle);
rootwsfangle.NumSignalsSource = 'Property';
rootwsfangle.NumSignals = 4;
ang = rootwsfangle(signal)
ang = 1×4
ESPRIT, root-MUSIC, and MUSIC, however, fail to estimate the correct directions of arrival, even if
we specify the number of sources and use the unitary implementations.
release(rootmusicangle);
rootmusicangle.NumSignalsSource = 'Property';
rootmusicangle.NumSignals = 4;
rootmusicangle.ForwardBackwardAveraging = true;
ang = rootmusicangle(signal)
ang = 1×4
You can apply spatial smoothing to estimate the DOAs of correlated signals. Using spatial smoothing,
however, decreases the effective aperture of the array. Therefore, the variance of the estimators
increases because the subarrays are smaller than the original array.
release(rootmusicangle);
Nr = 2; % Number of multipath reflections
rootmusicangle.SpatialSmoothing = Nr
rootmusicangle =
phased.RootMUSICEstimator with properties:
ang = rootmusicangle(signal)
ang = 1×4
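Spatial smoothing works by averaging the covariances of overlapping subarrays: the subarray-dependent phase progression decorrelates coherent sources and restores the rank of the source covariance, at the cost of a smaller effective aperture. A Python/NumPy sketch on a noiseless coherent pair (the -23° and 12° angles are reused from this example; everything else is an illustrative assumption):

```python
import numpy as np

def spatial_smooth(R, sub_len):
    """Average the covariances of all overlapping subarrays of length sub_len."""
    L = R.shape[0] - sub_len + 1
    return sum(R[l:l + sub_len, l:l + sub_len] for l in range(L)) / L

def num_sig_eigs(M, rel_tol=1e-6):
    ev = np.linalg.eigvalsh(M)        # ascending
    return int(np.sum(ev > rel_tol * ev[-1]))

# Two fully coherent sources (a multipath pair) on a 10-element ULA, noiseless
n = np.arange(10)
steer = lambda a: np.exp(1j * np.pi * n * np.sin(np.deg2rad(a)))
v = steer(-23.0) + 0.5 * steer(12.0)  # coherent sum arriving at the array
R = np.outer(v, v.conj())

print(num_sig_eigs(R))                      # 1: coherence collapses the rank
print(num_sig_eigs(spatial_smooth(R, 8)))   # 2: smoothing restores the rank
```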
In summary, ESPRIT, MUSIC, root-MUSIC, and root-WSF are important DOA algorithms that provide
good performance and reasonable computational complexity for ULAs. Unitary ESPRIT, unitary root-
MUSIC, and beamspace unitary ESPRIT provide ways to significantly reduce the computational cost
of the estimators, while also improving their performance. root-WSF is particularly attractive in the
context of correlated sources because, contrary to the other methods, it does not require spatial
smoothing to properly estimate the DOAs when the number of sources is known.
Introduction
There are two categories of multiple input multiple output (MIMO) radars. Multistatic radars form
the first category. They are often referred to as statistical MIMO radars. Coherent MIMO radars form
the second category and are the focus of this example. A benefit of coherent MIMO radar signal
processing is the ability to increase the angular resolution of the physical antenna array by forming a
virtual array.
Virtual Array
A virtual array can be created by quasi-monostatic MIMO radars, where the transmit and receive
arrays are closely located. To better understand the virtual array concept, first look at the two-way
pattern of a conventional phased array radar. The two-way pattern of a phased array radar is the
product of its transmit array pattern and receive array pattern. For example, consider a 77 GHz
millimeter wave radar with a 2-element transmit array and a 4-element receive array.
fc = 77e9;
c = 3e8;
lambda = c/fc;
Nt = 2;
Nr = 4;
If both arrays have half-wavelength spacing, which are sometimes referred to as full arrays, then the
two-way pattern is close to the receive array pattern.
dt = lambda/2;
dr = lambda/2;
txarray = phased.ULA(Nt,dt);
rxarray = phased.ULA(Nr,dr);
ang = -90:90;
pattx = pattern(txarray,fc,ang,0,'Type','powerdb');
patrx = pattern(rxarray,fc,ang,0,'Type','powerdb');
pat2way = pattx+patrx;
Increasing Angular Resolution with MIMO Radars
If the full transmit array is replaced with a thin array, meaning the element spacing is wider than half
wavelength, then the two-way pattern has a narrower beamwidth. Notice that even though the thin
transmit array has grating lobes, those grating lobes are not present in the two-way pattern.
dt = Nr*lambda/2;
txarray = phased.ULA(Nt,dt);
pattx = pattern(txarray,fc,ang,0,'Type','powerdb');
pat2way = pattx+patrx;
helperPlotMultipledBPattern(ang,[pat2way pattx patrx],[-30 0],...
{'Two-way Pattern','Tx Pattern','Rx Pattern'},...
'Patterns of thin/full arrays - 2Tx, 4Rx',...
{'-','--','-.'});
The two-way pattern of this system corresponds to the pattern of a virtual receive array with 2 x 4 =
8 elements. Thus, by carefully choosing the geometry of the transmit and the receive arrays, we can
increase the angular resolution of the system without adding more antennas to the arrays.
varray = phased.ULA(Nt*Nr,dr);
patv = pattern(varray,fc,ang,0,'Type','powerdb');
helperPlotMultipledBPattern(ang,[pat2way patv],[-30 0],...
{'Two-way Pattern','Virtual Array Pattern'},...
'Patterns of thin/full arrays and virtual array',...
{'-','--'},[1 2]);
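The virtual array geometry itself is easy to verify: the virtual element positions are the pairwise sums of the transmit and receive element positions. For the 2-element thin transmit array and 4-element full receive array above, a Python/NumPy sketch (an illustration, not the toolbox code) shows that this yields 8 uniformly spaced half-wavelength elements:

```python
import numpy as np

lam = 1.0                              # normalized wavelength
nt, nr = 2, 4
tx = np.arange(nt) * nr * lam / 2      # thin transmit array, spacing Nr*lambda/2
rx = np.arange(nr) * lam / 2           # full receive array, spacing lambda/2

# Virtual element positions: all pairwise sums of transmit and receive positions
virtual = np.sort((tx[:, None] + rx[None, :]).ravel())
print(virtual)  # [0. 0.5 1. 1.5 2. 2.5 3. 3.5]: an 8-element half-wavelength ULA
```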
In a coherent MIMO radar system, each antenna of the transmit array transmits an orthogonal
waveform. Because of this orthogonality, it is possible to recover the transmitted signals at the
receive array. The measurements at the physical receive array corresponding to each orthogonal
waveform can then be stacked to form the measurements of the virtual array.
Note that since each element in the transmit array radiates independently, there is no transmit
beamforming, so the transmit pattern is broad and covers a large field of view (FOV). This allows the
simultaneous illumination of all targets in the FOV. The receive array can then generate multiple
beams to process all target echoes. Compared to conventional phased array radars that need
successive scans to cover the entire FOV, this is another advantage of MIMO radars for applications
that require fast reaction time.
Time division multiplexing (TDM) is one way to achieve orthogonality among transmit channels. The
remainder of this example shows how to model and simulate a TDM-MIMO frequency-modulated
continuous wave (FMCW) automotive radar system. The waveform characteristics are adopted from
the “Automotive Adaptive Cruise Control Using FMCW Technology” on page 17-367 example.
waveform = helperDesignFMCWWaveform(c,lambda);
fs = waveform.SampleRate;
Imagine that there are two cars in the FOV with a separation of 20 degrees. As seen in the earlier
array pattern plots in this example, the 3 dB beamwidth of a 4-element receive array is around 30
degrees, so conventional processing would not be able to separate the two targets in the angular
domain. The radar sensor parameters are as follows:
transmitter = phased.Transmitter('PeakPower',0.001,'Gain',36);
receiver = phased.ReceiverPreamp('Gain',40,'NoiseFigure',4.5,'SampleRate',fs);
txradiator = phased.Radiator('Sensor',txarray,'OperatingFrequency',fc,...
'PropagationSpeed',c,'WeightsInputPort',true);
rxcollector = phased.Collector('Sensor',rxarray,'OperatingFrequency',fc,...
'PropagationSpeed',c);
Define the position and motion of the ego vehicle and the two cars in the FOV.
cars = phased.RadarTarget('MeanRCS',car_rcs,'PropagationSpeed',c,'OperatingFrequency',fc);
carmotion = phased.Platform('InitialPosition',car_pos,'Velocity',[car_speed;0 0;0 0]);
channel = phased.FreeSpace('PropagationSpeed',c,...
'OperatingFrequency',fc,'SampleRate',fs,'TwoWayPropagation',true);
The raw data cube received by the physical array of the TDM MIMO radar can then be simulated as
follows:
rng(2017);
Nsweep = 64;
Dn = 2; % Decimation factor
fs = fs/Dn;
xr = complex(zeros(fs*waveform.SweepTime,Nr,Nsweep));
for m = 1:Nsweep
% Update radar and target positions
[radar_pos,radar_vel] = radarmotion(waveform.SweepTime);
[tgt_pos,tgt_vel] = carmotion(waveform.SweepTime);
[~,tgt_ang] = rangeangle(tgt_pos,radar_pos);
sig = waveform();
txsig = transmitter(sig);
The data cube received by the physical array must be processed to form the virtual array data cube.
For the TDM-MIMO radar system used in this example, the measurements corresponding to the two
transmit antenna elements can be recovered from two consecutive sweeps by taking every other page
of the data cube.
Nvsweep = Nsweep/2;
xr1 = xr(:,:,1:2:end);
xr2 = xr(:,:,2:2:end);
Now the data cube in xr1 contains the return corresponding to the first transmit antenna element,
and the data cube in xr2 contains the return corresponding to the second transmit antenna element.
Hence, the data cube from the virtual array can be formed as:
xrv = cat(2,xr1,xr2);
Next, perform range-Doppler processing on the virtual data cube. Because the range-Doppler
processing is linear, the phase information is preserved. Therefore, the resulting response can be
used later to perform further spatial processing on the virtual aperture.
nfft_r = 2^nextpow2(size(xrv,1));
nfft_d = 2^nextpow2(size(xrv,3));
rngdop = phased.RangeDopplerResponse('PropagationSpeed',c,...
'DopplerOutput','Speed','OperatingFrequency',fc,'SampleRate',fs,...
'RangeMethod','FFT','PRFSource','Property',...
'RangeWindow','Hann','PRF',1/(Nt*waveform.SweepTime),...
'SweepSlope',waveform.SweepBandwidth/waveform.SweepTime,...
'RangeFFTLengthSource','Property','RangeFFTLength',nfft_r,...
'DopplerFFTLengthSource','Property','DopplerFFTLength',nfft_d,...
'DopplerWindow','Hann');
[resp,r,sp] = rngdop(xrv);
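The essence of this step, two linear FFTs over the cube (one along fast time for range via the beat frequency, one along slow time for Doppler), can be illustrated with a toy data cube in Python/NumPy (the normalized frequencies and cube sizes are arbitrary choices for the illustration):

```python
import numpy as np

# Toy virtual-array cube: fast-time samples x channels x sweeps
n_fast, n_chan, n_sweep = 64, 8, 16
fb, fd = 13 / 64, -2 / 16              # normalized beat (range) and Doppler frequencies
t = np.arange(n_fast)[:, None, None]   # fast time
m = np.arange(n_sweep)[None, None, :]  # slow time (sweep index)
cube = np.exp(2j * np.pi * (fb * t + fd * m)) * np.ones((1, n_chan, 1))

# Range FFT along fast time, Doppler FFT along slow time; both are linear,
# so the per-channel phase (the spatial information) is preserved.
rd = np.fft.fft(np.fft.fft(cube, axis=0), axis=2)
peak = np.unravel_index(np.argmax(np.abs(rd[:, 0, :])), (n_fast, n_sweep))
print(peak)  # (13, 14): range bin 13, Doppler bin -2 mod 16
```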
The resulting resp is a data cube containing the range-Doppler response for each element in the
virtual array. As an illustration, the range-Doppler map for the first element in the virtual array is
shown.
plotResponse(rngdop,squeeze(xrv(:,1,:)));
The detection can be performed on the range-Doppler map from each pair of transmit and receive
elements to identify the targets in the scene. In this example, a simple threshold-based detection is
performed on the map obtained between the first transmit element and the first receive element,
which corresponds to the measurement at the first element in the virtual array. Based on the range-
Doppler map shown in the previous figure, the threshold is set at 10 dB below the maximum peak.
respmap = squeeze(mag2db(abs(resp(:,1,:))));
ridx = helperRDDetection(respmap,-10);
Based on the detected range of the targets, the corresponding range cuts can be extracted from the
virtual array data cube to perform further spatial processing. To verify that the virtual array provides
a higher resolution compared to the physical array, the code below extracts the range cuts for both
targets and combines them into a single data matrix. The beamscan algorithm is then performed over
these virtual array measurements to estimate the directions of the targets.
xv = squeeze(sum(resp(ridx,:,:),1))';
doa = phased.BeamscanEstimator('SensorArray',varray,'PropagationSpeed',c,...
'OperatingFrequency',fc,'DOAOutputPort',true,'NumSignals',2,'ScanAngles',ang);
[Pdoav,target_az_est] = doa(xv);
fprintf('target_az_est = [%s]\n',num2str(target_az_est));
The two targets are successfully separated. The actual angles for the two cars are -10 and 10
degrees.
The next figure compares the spatial spectra from the virtual array and the physical receive array.
doarx = phased.BeamscanEstimator('SensorArray',rxarray,'PropagationSpeed',c,...
'OperatingFrequency',fc,'DOAOutputPort',true,'ScanAngles',ang);
Pdoarx = doarx(xr);
In this example, the detection is performed on the range-Doppler map without spatial processing of
the virtual array data cube. This works because the SNR is high. If the SNR is low, it is also possible
to process the virtual array blindly across the entire range-Doppler map to maximize the SNR before
the detection.
Although a TDM-MIMO radar's processing chain is relatively simple, it uses only one transmit
antenna at a time. Therefore, it does not take advantage of the full capacity of the transmit array. To
improve efficiency, other orthogonal waveforms can be used in a MIMO radar.
17 Featured Examples
Using the same configuration as in this example, one scheme to achieve orthogonality is to have one
element always transmit the same FMCW waveform, while the second transmit element reverses the
phase of the FMCW waveform for each sweep. This way both transmit elements are active in all
sweeps. For the first sweep, the two elements transmit the same waveform, and for the second
sweep, the two elements transmit the waveform with opposite phase, and so on. This is essentially
coding the consecutive sweeps from different elements with a Hadamard code. It is similar to the
Alamouti codes used in MIMO communication systems.
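The sweep-to-sweep coding can be sketched with toy numbers. The channel gains h1 and h2 below are hypothetical; the point is that a sum and a difference of two consecutive sweeps separate the contributions of the two transmitters:

```python
# Hypothetical channel gains from the two transmit elements to one receive
# element, and a toy 4-sample sweep.
h1, h2 = 0.8 + 0.2j, -0.3 + 0.5j
s = [1, 1j, -1, -1j]

# Sweep 1: both elements transmit s.  Sweep 2: element 2 negates s.
r1 = [(h1 + h2) * v for v in s]
r2 = [(h1 - h2) * v for v in s]

# Hadamard decoding recovers the per-transmitter components.
tx1 = [(a + b) / 2 for a, b in zip(r1, r2)]   # equals h1 * s
tx2 = [(a - b) / 2 for a, b in zip(r1, r2)]   # equals h2 * s
```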
MIMO radars can also adopt phase-coded waveforms. In this case, each radiating element can
transmit a uniquely coded waveform, and the receiver can then have a matched filter bank
corresponding to each of those phase-coded waveforms. The signals can then be recovered and
processed to form the virtual array.
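For reference, the virtual array geometry follows from adding transmit and receive element positions. The spacings below assume the common TDM-MIMO choice (also used in this example) of half-wavelength receive spacing with the transmit elements separated by the full receive aperture:

```python
half = 0.5                                  # receive spacing in wavelengths
rx = [n * half for n in range(4)]           # 4-element receive ULA
tx = [m * 4 * half for m in range(2)]       # 2 transmit elements, 2 wavelengths apart

# Each transmit/receive pair contributes one virtual element located at the
# sum of the two element positions.
virtual = sorted(t + r for t in tx for r in rx)
print(virtual)   # [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5]
```

The result is a filled 8-element ULA at half-wavelength spacing, which is the virtual array used above.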
Summary
In this example, we gave a brief introduction to coherent MIMO radar and the virtual array concept.
We simulated the return of a MIMO radar with a 2-element transmit array and a 4-element receive
array and performed direction of arrival estimation of the simulated echoes of two closely spaced
targets using an 8-element virtual array.
References
[1] Frank Robey, et al. MIMO Radar Theory and Experimental Results, Conference Record of the
Thirty Eighth Asilomar Conference on Signals, Systems and Computers, California, pp. 300-304,
2004.
[2] Eli Brookner, MIMO Radars and Their Conventional Equivalents, IEEE Radar Conference, 2015.
[3] Sandeep Rao, MIMO Radar, Texas Instruments Application Report SWRA554, May 2017.
[4] Jian Li and Peter Stoica, MIMO Radar Signal Processing, John Wiley & Sons, 2009.
Introduction to Space-Time Adaptive Processing
Introduction
In a ground moving target indicator (GMTI) system, an airborne radar collects the returned echo
from the moving target on the ground. However, the received signal contains not only the reflected
echo from the target, but also the returns from the illuminated ground surface. The return from the
ground is generally referred to as clutter.
The clutter return comes from all the areas illuminated by the radar beam, so it occupies all range
bins and all directions. The total clutter return is often much stronger than the returned signal echo,
which poses a great challenge to target detection. Clutter filtering, therefore, is a critical part of a
GMTI system.
In traditional MTI systems, clutter filtering often takes advantage of the fact that the ground does not
move. Thus, the clutter occupies the zero Doppler bin in the Doppler spectrum. This principle leads to
many Doppler-based clutter filtering techniques, such as the pulse canceller. Interested readers can refer
to “Ground Clutter Mitigation with Moving Target Indication (MTI) Radar” on page 17-461 for a
detailed example of the pulse canceller. When the radar platform itself is also moving, such as in a
plane, the Doppler component from the ground return is no longer zero. In addition, the Doppler
components of clutter returns are angle dependent. In this case, the clutter return is likely to have
energy across the Doppler spectrum. Therefore, the clutter cannot be filtered only with respect to
Doppler frequency.
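The zero-Doppler argument for a stationary platform can be sketched numerically. A two-pulse canceller subtracts consecutive pulses, which removes any return whose phase does not change from pulse to pulse but passes a Doppler-shifted target (plain Python, toy numbers):

```python
import cmath, math

M = 8                     # pulses in the toy CPI
fd = 0.25                 # target Doppler as a fraction of the PRF (toy value)

clutter = [1.0 + 0j] * M                                 # stationary: constant phase
target = [cmath.exp(2j * math.pi * fd * m) for m in range(M)]
x = [c + t for c, t in zip(clutter, target)]

# Two-pulse canceller: subtract consecutive pulses.
y = [x[m] - x[m - 1] for m in range(1, M)]

# The clutter part cancels exactly; the Doppler-shifted target survives.
clutter_residue = [clutter[m] - clutter[m - 1] for m in range(1, M)]
```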
Jamming is another significant interference source that is often present in the received signal. The
simplest form of jamming is a barrage jammer, which is strong, continuous white noise directed
toward the radar receiver so that the receiver cannot easily detect the target return. The jammer is
usually at a specific location, and the jamming signal is therefore associated with a specific direction.
However, because of the white noise nature of the jammer, the received jamming signal occupies the
entire Doppler band.
STAP techniques filter the signal in both the angular and Doppler domains (thus, the name "space-
time adaptive processing") to suppress the clutter and jammer returns. In the following sections, we
simulate returns from target, clutter, and jammer and illustrate how STAP techniques filter the
interference from the received signal.
System Setup
We first define a radar system, starting from the system built in the example “Designing a Basic
Monostatic Pulse Radar” on page 17-449.
Antenna Definition
Assume that the antenna element has an isotropic response in the front hemisphere and all zeros in
the back hemisphere. The operating frequency range is set to 8 to 12 GHz to match the 10 GHz
operating frequency of the system.
antenna = phased.IsotropicAntennaElement...
('FrequencyRange',[8e9 12e9],'BackBaffled',true); % Baffled Isotropic
Define a 6-element ULA with a custom element pattern. The element spacing is assumed to be one
half the wavelength of the waveform.
fc = radiator.OperatingFrequency;
c = radiator.PropagationSpeed;
lambda = c/fc;
ula = phased.ULA('Element',antenna,'NumElements',6,...
'ElementSpacing', lambda/2);
pattern(ula,fc,'PropagationSpeed',c,'Type','powerdb')
title('6-element Baffled ULA Response Pattern')
view(60,50)
Radar Setup
Next, mount the antenna array on the radiator/collector. Then, define the radar motion. The radar
system is mounted on a plane that flies 1000 meters above the ground. The plane is flying along the
array axis of the ULA at a speed such that it travels a half element spacing of the array during one
pulse interval. (An explanation of such a setting is provided in the DPCA technique section that
follows.)
radiator.Sensor = ula;
collector.Sensor = ula;
sensormotion = phased.Platform('InitialPosition',[0; 0; 1000]);
Target
Next, define a nonfluctuating target with a radar cross section of 1 square meter moving on the
ground.
Jammer
The target returns the desired signal; however, several interferences are also present in the received
signal. This section focuses on the jammer. Define a simple barrage jammer with an effective radiated
power of 100 watts.
jammer = phased.BarrageJammer('ERP',100);
Fs = waveform.SampleRate;
rngbin = c/2*(0:1/Fs:1/prf-1/Fs).';
jammer.SamplesPerFrame = numel(rngbin);
jammermotion = phased.Platform('InitialPosition',[1000; 1732; 1000]);
Clutter
In this example we simulate the clutter using the constant gamma model with a gamma value of -15
dB. Literature shows that such a gamma value can be used to model terrain covered by woods. For
each range, the clutter return can be thought of as a combination of the returns from many small
clutter patches on that range ring. Since the antenna is back baffled, the clutter contribution is only
from the front. To simplify the computation, use an azimuth width of 10 degrees for each patch.
clutter = phased.ConstantGammaClutter('Sensor',ula,'SampleRate',Fs,...
'Gamma',-15,'PlatformHeight',1000,...
'OperatingFrequency',fc,...
'PropagationSpeed',c,...
'PRF',prf,...
'TransmitERP',transmitter.PeakPower*db2pow(transmitter.Gain),...
'PlatformSpeed',norm(sensormotion.Velocity),...
'PlatformDirection',[90;0],...
'BroadsideDepressionAngle',0,...
'MaximumRange',5000,...
'AzimuthCoverage',180,...
'PatchAzimuthWidth',10,...
'OutputFormat','Pulses');
Propagation Paths
Finally, create a free space environment to represent the target and jammer paths. Because we are
using a monostatic radar system, the target channel is set to simulate two-way propagation delays.
The jammer path computes only one-way propagation delays.
tgtchannel = phased.FreeSpace('TwoWayPropagation',true,'SampleRate',Fs,...
'OperatingFrequency', fc);
jammerchannel = phased.FreeSpace('TwoWayPropagation',false,...
'SampleRate',Fs,'OperatingFrequency', fc);
Simulation Loop
We are now ready to simulate the returns. Collect 10 pulses before processing. The seed of the
random number generator from the jammer model is set to a constant to get reproducible results.
jammer.SeedSource = 'Property';
jammer.Seed = 5;
clutter.SeedSource = 'Property';
clutter.Seed = 5;
for m = 1:numpulse
The target azimuth angle is 45 degrees, and the elevation angle is about -35.27 degrees.
tgtLocation = global2localcoord(tgtpos,'rs',sensorpos);
tgtAzAngle = tgtLocation(1)
tgtAzAngle = 44.9981
tgtElAngle = tgtLocation(2)
tgtElAngle = -35.2651
tgtRng = tgtLocation(3)
tgtRng = 1.7320e+03
The total received signal contains returns from the target, clutter and jammer combined. The signal
is a data cube with three dimensions (range bins x number of elements x number of pulses). Notice
that the clutter return dominates the total return and masks the target return. We cannot detect the
target (blue vertical line) without further processing at this stage.
ReceivePulse = tjcsig;
plot([tgtRng tgtRng],[0 0.01],rngbin,abs(ReceivePulse(:,:,1)));
xlabel('Range (m)'), ylabel('Magnitude');
title('Signals collected by the ULA within the first pulse interval')
Now, examine the returns in 2-D angle Doppler (or space-time) domain. In general, the response is
generated by scanning all ranges and azimuth angles for a given elevation angle. Because we know
exactly where the target is, we can calculate its range and elevation angle with respect to the
antenna array.
tgtCellIdx = val2ind(tgtRng,c/(2*Fs));
snapshot = shiftdim(ReceivePulse(tgtCellIdx,:,:)); % Remove singleton dim
angdopresp = phased.AngleDopplerResponse('SensorArray',ula,...
'OperatingFrequency',fc, 'PropagationSpeed',c,...
'PRF',prf, 'ElevationAngle',tgtElAngle);
plotResponse(angdopresp,snapshot,'NormalizeDoppler',true);
text(tgtAzAngle,tgtDp/prf,'+ Target')
The angle Doppler response is dominated by the clutter return, which occupies not only the zero
Doppler bin but also the other Doppler bins. Because the Doppler of the clutter return is a function of
the angle, the clutter return appears as a diagonal line across the entire angle Doppler space. Such a
line is often referred to as the clutter ridge. The received jammer signal is white noise, which spreads
over the entire Doppler spectrum at a particular angle, around 60 degrees.
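The diagonal shape of the clutter ridge follows from a little arithmetic. With half-wavelength spacing and the platform traveling half an element spacing per pulse (the setup used in this example), a clutter patch at azimuth az has a normalized two-way Doppler equal to its normalized spatial frequency:

```python
import math

d = 0.5                      # element spacing in wavelengths
v_per_pri = d / 2            # platform displacement per pulse, in wavelengths

ridge = []
for az in [-60, -30, 0, 30, 45, 60]:
    s = math.sin(math.radians(az))
    spatial_freq = d * s                 # cycles per element (angle axis)
    norm_doppler = 2 * v_per_pri * s     # two-way, cycles per pulse (Doppler axis)
    ridge.append((spatial_freq, norm_doppler))

# The two quantities are equal for every patch, so the clutter energy falls
# on a 45-degree diagonal in the normalized angle-Doppler plane.
```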
The displaced phase center antenna (DPCA) algorithm is often considered to be the first STAP
algorithm. This algorithm uses the shifted aperture to compensate for the platform motion so that the
clutter return does not change from pulse to pulse. Thus, this algorithm can remove the clutter via a
simple subtraction of two consecutive pulses.
A DPCA canceller is often used on ULAs but requires special platform motion conditions. The
platform must move along the antenna's array axis and at a speed such that during one pulse interval,
the platform travels exactly half of the element spacing. The system used here is set up, as described
in earlier sections, to meet these conditions.
Assume that N is the number of ULA elements. The clutter return received at antenna 1 through
antenna N-1 during the first pulse is the same as the clutter return received at antenna 2 through
antenna N during the second pulse. By subtracting the pulses received at these two subarrays during
the two pulse intervals, the clutter can be cancelled out. The cost of this method is an aperture that is
one element smaller than the original array.
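The cancellation can be checked with a toy plain-Python model in which each pulse is a sum of clutter plane waves and the platform motion shifts the phase centers by one element spacing between pulses (the patch angles below are hypothetical):

```python
import cmath, math

N, d = 6, 0.5                       # ULA elements and spacing (wavelengths)
patches = [-75, -40, -10, 25, 60]   # toy clutter patch azimuths (degrees)

def clutter_pulse(shift):
    # Sum of clutter-patch plane waves across the array; `shift` is the
    # platform displacement (in element spacings) accumulated by this pulse.
    sig = [0j] * N
    for az in patches:
        for n in range(N):
            sig[n] += cmath.exp(2j * math.pi * d * (n + shift)
                                * math.sin(math.radians(az)))
    return sig

p1 = clutter_pulse(0)    # first pulse
p2 = clutter_pulse(-1)   # second pulse: phase centers moved by one spacing

# Antennas 1..N-1 on the first pulse match antennas 2..N on the second
# pulse, so the subtraction cancels the clutter.
residue = [p1[n] - p2[n + 1] for n in range(N - 1)]
```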
Now, define a DPCA canceller. The algorithm may need to search through all combinations of angle
and Doppler to locate a target, but for the example, because we know exactly where the target is, we
can direct our processor to that point.
stapdpca =
phased.DPCACanceller with properties:
First, apply the DPCA canceller to both the target return and the clutter return.
ReceivePulse = tcsig;
[y,w] = stapdpca(ReceivePulse,tgtCellIdx);
The processed data combines all information in space and across the pulses to become a single pulse.
Next, examine the processed signal in the time domain.
The signal now is clearly distinguishable from the noise and the clutter has been filtered out. From
the angle Doppler response of the DPCA processor weights below, we can also see that the weights
produce a deep null along the clutter ridge.
angdopresp.ElevationAngle = 0;
plotResponse(angdopresp,w,'NormalizeDoppler',true);
title('DPCA Weights Angle Doppler Response at 0 degrees Elevation')
Although the results obtained by DPCA are very good, the radar platform has to satisfy very strict
requirements in its movement to use this technique. Also, the DPCA technique cannot suppress the
jammer interference.
Applying DPCA processing to the total signal produces the result shown in the following figure. We
can see that DPCA cannot filter the jammer from the signal. The resulting angle Doppler pattern of
the weights is the same as before. Thus, the processor cannot adapt to the newly added jammer
interference.
ReceivePulse = tjcsig;
[y,w] = stapdpca(ReceivePulse,tgtCellIdx);
plot([tgtRng tgtRng],[0 8e-4],rngbin,abs(y));
xlabel('Range (m)'), ylabel('Magnitude');
title('DPCA Canceller Output (with Jammer)')
plotResponse(angdopresp,w,'NormalizeDoppler',true);
title('DPCA Weights Angle Doppler Response at 0 degrees Elevation')
To suppress the clutter and jammer simultaneously, we need a more sophisticated algorithm. The
optimum receiver weights, when the interference is Gaussian-distributed, are given by [1]
w = kR⁻¹s
where k is a scalar factor, R is the space-time covariance matrix of the interference signal, and s is
the desired space-time steering vector. The exact information of R is often unavailable, so we will use
the sample matrix inversion (SMI) algorithm. The algorithm estimates R from training-cell samples
and then uses it in the aforementioned equation.
Now, define an SMI beamformer and apply it to the signal. In addition to the information needed in
DPCA, the SMI beamformer needs to know the number of guard cells and the number of training
cells. The algorithm uses the samples in the training cells to estimate the interference. Thus, we
should not use the cells that are close to the target cell for the estimates because they may contain
some target information, i.e., we should define guard cells. The number of guard cells must be an
even number to be split equally in front of and behind the target cell. The number of training cells
also must be an even number and split equally in front of and behind the target. Normally, the larger
the number of training cells, the better the interference estimate.
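The SMI computation itself is only a covariance estimate followed by a linear solve. The plain-Python sketch below (toy 4-dimensional space-time snapshots, hypothetical jammer and target signatures, not the toolbox implementation) forms the estimate of R from training snapshots and computes w ∝ R⁻¹s:

```python
import cmath, random

def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting (complex, single RHS).
    n = len(A)
    M = [A[i][:] + [b[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [a - fac * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

random.seed(0)
N = 4                                                  # toy space-time dimension
jam = [cmath.exp(1j * 0.9 * n) for n in range(N)]      # interference signature
tgt = [cmath.exp(1j * 0.2 * n) for n in range(N)]      # desired steering vector

# Estimate the interference covariance from K training snapshots of a
# strong jammer plus weak white noise (no target in the training cells).
K = 200
R = [[0j] * N for _ in range(N)]
for _ in range(K):
    a = 10 * complex(random.gauss(0, 1), random.gauss(0, 1))
    snap = [a * j + 0.1 * complex(random.gauss(0, 1), random.gauss(0, 1))
            for j in jam]
    for p in range(N):
        for q in range(N):
            R[p][q] += snap[p] * snap[q].conjugate() / K

w = solve(R, tgt)                                      # SMI weights, w ∝ R⁻¹s

gain_tgt = abs(sum(wi.conjugate() * si for wi, si in zip(w, tgt)))
gain_jam = abs(sum(wi.conjugate() * ji for wi, ji in zip(w, jam)))
# gain_tgt >> gain_jam: the adapted weights null the jammer subspace.
```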
'WeightsOutputPort', true,...
'NumGuardCells', 4, 'NumTrainingCells', 100)
stapsmi =
phased.STAPSMIBeamformer with properties:
[y,w] = stapsmi(ReceivePulse,tgtCellIdx);
plot([tgtRng tgtRng],[0 2e-6],rngbin,abs(y));
xlabel('Range (m)'), ylabel('Magnitude');
title('SMI Beamformer Output (with Jammer)')
plotResponse(angdopresp,w,'NormalizeDoppler',true);
title('SMI Weights Angle Doppler Response at 0 degrees Elevation')
The result shows that an SMI beamformer can distinguish signals from both the clutter and the
jammer. The angle Doppler pattern of the SMI weights shows a deep null along the jammer direction.
SMI provides the maximum degrees of freedom, and hence, the maximum gain among all STAP
algorithms. It is often used as a baseline for comparing different STAP algorithms.
Although SMI is the optimum STAP algorithm, it has several innate drawbacks, including a high
computation cost because it uses the full dimension data of each cell. More importantly, SMI requires
a stationary environment across many pulses. This kind of environment is not often found in real
applications. Therefore, many reduced dimension STAP algorithms have been proposed.
An adaptive DPCA (ADPCA) canceller filters out the clutter in the same manner as DPCA, but it also
has the capability to suppress the jammer as it estimates the interference covariance matrix using
two consecutive pulses. Because there are only two pulses involved, the computation is greatly
reduced. In addition, because the algorithm is adapted to the interference, it can also tolerate some
motion disturbance.
Now, define an ADPCA canceller, and then apply it to the received signal.
stapadpca =
phased.ADPCACanceller with properties:
[y,w] = stapadpca(ReceivePulse,tgtCellIdx);
plot([tgtRng tgtRng],[0 2e-6],rngbin,abs(y));
xlabel('Range (m)'), ylabel('Magnitude');
title('ADPCA Canceller Output (with Jammer)')
plotResponse(angdopresp,w,'NormalizeDoppler',true);
title('ADPCA Weights Angle Doppler Response at 0 degrees Elevation')
The time domain plot shows that the signal is successfully recovered. The angle Doppler response of
the ADPCA weights is similar to the one produced by the SMI weights.
Summary
This example presented a brief introduction to space-time adaptive processing and illustrated how to
use different STAP algorithms, namely, SMI, DPCA, and ADPCA, to suppress clutter and jammer
interference in the received pulses.
Reference
[1] J. R. Guerci, Space-Time Adaptive Processing for Radar, Artech House, 2003
Source Localization Using Generalized Cross Correlation
Introduction
Source localization differs from direction-of-arrival (DOA) estimation. DOA estimation seeks to
determine only the direction of a source from a sensor. Source localization determines its position. In
this example, source localization consists of two steps, the first of which is DOA estimation.
1 Estimate the direction of the source from each sensor array using a DOA estimation algorithm.
For wideband signals, many well-known direction of arrival estimation algorithms, such as
Capon's method or MUSIC, cannot be applied because they employ the phase difference between
elements, making them suitable only for narrowband signals. In the wideband case, instead of
phase information, you can use the difference in the signal's time-of-arrival among elements. To
compute the time-of-arrival differences, this example uses the generalized cross-correlation with
phase transformation (GCC-PHAT) algorithm. From the differences in time-of-arrival, you can
compute the DOA. (For another example of narrowband DOA estimation algorithms, see “High
Resolution Direction of Arrival Estimation” on page 17-214).
2 Calculate the source position by triangulation. First, draw straight lines from the arrays along
the directions-of-arrival. Then, compute the intersection of these two lines. This is the source
location. Source localization requires knowledge of the position and orientation of the receiving
sensors or sensor arrays.
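Before reaching for the toolbox objects, the core of GCC-PHAT can be sketched in plain Python with a toy delayed pulse: whiten the cross-spectrum to unit magnitude, inverse transform, and pick the correlation peak. The signals and delay below are hypothetical.

```python
import cmath, math

def dft(x, sign):
    # Naive DFT (sign = -1) / unscaled inverse DFT (sign = +1).
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

# Toy signals: sensor 2 hears the same pulse as sensor 1, delayed by 3
# samples (a stand-in for the inter-sensor time difference of arrival).
pulse = [1.0, 0.7, -0.4, 0.2]
x1 = [0.0] * 32
x2 = [0.0] * 32
for i, v in enumerate(pulse):
    x1[8 + i] = v
    x2[11 + i] = v          # 3-sample delay

X1, X2 = dft(x1, -1), dft(x2, -1)
# PHAT weighting keeps only the phase of the cross-spectrum, which sharpens
# the correlation peak for wideband signals.
G = []
for a, b in zip(X2, X1):
    cr = a * b.conjugate()
    G.append(cr / max(abs(cr), 1e-12))
cc = dft(G, +1)             # inverse DFT = generalized cross-correlation
lag = max(range(32), key=lambda k: abs(cc[k]))
print(lag)   # 3
```

The recovered lag (in samples), together with the element spacing and the propagation speed, gives the DOA through the usual d·sin(θ)/c relation.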
Triangulation Formula
The triangulation algorithm is based on simple trigonometric formulas. Assume that the sensor arrays
are located at the 2-D coordinates (0,0) and (L,0) and the unknown source location is (x,y). From
knowledge of the sensor arrays positions and the two directions-of-arrival at the arrays, θ1 and θ2,
you can compute the (x,y) coordinates from
L = y tan θ1 + y tan θ2
y = L/(tan θ1 + tan θ2)
x = y tan θ1
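Plugging in the scenario used below (arrays 50 m apart, source at (30, 100) m) confirms the formulas; the angles here are measured from the +y axis (broadside):

```python
import math

L = 50.0                    # array separation (m)
x_true, y_true = 30.0, 100.0

# Broadside DOAs each array would measure in this noise-free check.
theta1 = math.atan2(x_true, y_true)            # from array 1 at (0, 0)
theta2 = math.atan2(L - x_true, y_true)        # from array 2 at (L, 0)

y = L / (math.tan(theta1) + math.tan(theta2))
x = y * math.tan(theta1)
print(round(x, 6), round(y, 6))   # 30.0 100.0
```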
The remainder of this example shows how you can use the functions and System objects of the
Phased Array System Toolbox™ to compute source position.
Set up two receiving 4-element ULAs aligned along the x-axis of the global coordinate system and
spaced 50 meters apart. The phase center of the first array is (0,0,0) . The phase center of the second
array is (50,0,0) . The source is located at (30,100) meters. As indicated in the figure, the receiving
array gains point in the +y direction. The source transmits in the -y direction.
Create a 4-element receiver ULA of omnidirectional microphones. You can use the same phased.ULA
System object™ for the phased.WidebandCollector and phased.GCCEstimator System objects
for both arrays.
N = 4;
rxULA = phased.ULA('Element',phased.OmnidirectionalMicrophoneElement,...
'NumElements',N);
Specify the position and orientation of the first sensor array. When you create a ULA, the array
elements are automatically spaced along the y-axis. You must rotate the local axes of the array by 90°
to align the elements along the x-axis of the global coordinate system.
rxpos1 = [0;0;0];
rxvel1 = [0;0;0];
rxax1 = azelaxes(90,0);
Specify the position and orientation of the second sensor array. Choose the local axes of the second
array to align with the local axes of the first array.
rxpos2 = [L;0;0];
rxvel2 = [0;0;0];
rxax2 = rxax1;
srcax = azelaxes(-90,0);
srcULA = phased.OmnidirectionalMicrophoneElement;
Define Waveform
Choose the source signal to be a wideband LFM waveform. Assume the operating frequency of the
system is 300 kHz and set the bandwidth of the signal to 100 kHz. Assume a maximum operating
range of 150 m. Then, you can set the pulse repetition interval (PRI) and the pulse repetition
frequency (PRF). Assume a 10% duty cycle and set the pulse width. Finally, use a speed of sound in an
underwater channel of 1500 m/s.
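The numbers implied by these assumptions work out as follows:

```python
# Timing implied by the stated assumptions: 150 m maximum range, 1500 m/s
# sound speed, 10% duty cycle.
c = 1500.0           # underwater sound speed (m/s)
max_range = 150.0    # m

pri = 2 * max_range / c        # 0.2 s: round trip to the farthest target
prf = 1 / pri                  # 5 Hz
pulse_width = 0.1 * pri        # 0.02 s at a 10% duty cycle
print(pri, prf, round(pulse_width, 6))   # 0.2 5.0 0.02
```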
Set the LFM waveform parameters and create the phased.LinearFMWaveform System object™.
signal = waveform();
Modeling the radiation and propagation for wideband systems is more complicated than modeling
narrowband systems. For example, the attenuation depends on frequency. The Doppler shift, as well
as the phase shifts among elements due to the signal incoming direction, also varies with frequency.
Thus, it is critical to model those behaviors when dealing with wideband signals. This
example uses a subband approach.
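The essence of the subband approach is that each subband gets its own frequency-dependent phase term. The plain-Python sketch below (toy two-tone signal, not toolbox code) delays a wideband signal by applying a per-bin phase shift, something a single narrowband phase shift cannot do for both tones at once:

```python
import cmath, math

def dft(x, sign):
    # Naive DFT (sign = -1) / unscaled inverse DFT (sign = +1).
    N = len(x)
    return [sum(x[n] * cmath.exp(sign * 2j * math.pi * k * n / N)
                for n in range(N)) for k in range(N)]

N = 32
# A wideband test signal: two tones at different frequencies.
x = [math.sin(2 * math.pi * 3 * n / N) + 0.5 * math.sin(2 * math.pi * 7 * n / N)
     for n in range(N)]

# Delay by tau samples: multiply bin k (signed frequency f) by
# exp(-2*pi*i*f*tau/N), a different phase shift per subband.
tau = 2
X = dft(x, -1)
freqs = [k if k <= N // 2 else k - N for k in range(N)]
Y = [Xk * cmath.exp(-2j * math.pi * f * tau / N) for Xk, f in zip(X, freqs)]
y = [v.real / N for v in dft(Y, +1)]

# y equals x circularly shifted by tau samples: an exact wideband delay.
err = max(abs(y[(n + tau) % N] - x[n]) for n in range(N))
```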
nfft = 128;
radiator = phased.WidebandRadiator('Sensor',srcULA,...
'PropagationSpeed',c,'SampleRate',fs,...
'CarrierFrequency',fc,'NumSubbands',nfft);
collector1 = phased.WidebandCollector('Sensor',rxULA,...
'PropagationSpeed',c,'SampleRate',fs,...
'CarrierFrequency',fc,'NumSubbands',nfft);
collector2 = phased.WidebandCollector('Sensor',rxULA,...
'PropagationSpeed',c,'SampleRate',fs,...
'CarrierFrequency',fc,'NumSubbands',nfft);
Create the wideband signal propagators for the paths from the source to the two sensor arrays.
channel1 = phased.WidebandFreeSpace('PropagationSpeed',c,...
'SampleRate',fs,'OperatingFrequency',fc,'NumSubbands',nfft);
channel2 = phased.WidebandFreeSpace('PropagationSpeed',c,...
'SampleRate',fs,'OperatingFrequency',fc,'NumSubbands',nfft);
Determine the propagation directions from the source to the sensor arrays. Propagation directions
are with respect to the local coordinate system of the source.
[~,ang1t] = rangeangle(rxpos1,srcpos,srcax);
[~,ang2t] = rangeangle(rxpos2,srcpos,srcax);
Radiate the signal from the source in the directions of the sensor arrays.
sigp1 = channel1(sigt(:,1),srcpos,rxpos1,srcvel,rxvel1);
sigp2 = channel2(sigt(:,2),srcpos,rxpos2,srcvel,rxvel2);
Compute the arrival directions of the propagated signal at the sensor arrays. Because the collector
response is a function of the directions of arrival in the sensor array local coordinate system, pass the
local coordinate axes matrices to the rangeangle function.
[~,ang1r] = rangeangle(srcpos,rxpos1,rxax1);
[~,ang2r] = rangeangle(srcpos,rxpos2,rxax2);
sigr1 = collector1(sigp1,ang1r);
sigr2 = collector2(sigp2,ang2r);
doa1 = phased.GCCEstimator('SensorArray',rxULA,'SampleRate',fs,...
'PropagationSpeed',c);
doa2 = phased.GCCEstimator('SensorArray',rxULA,'SampleRate',fs,...
'PropagationSpeed',c);
angest1 = doa1(sigr1);
angest2 = doa2(sigr2);
Triangulate the source position using the formulas established previously. Because the scenario is
confined to the x-y plane, set the z-coordinate to zero.
srcpos_est = 3×1
29.9881
100.5743
0
The estimated source location matches the true location to within 30 cm.
Summary
This example showed how to perform source localization using triangulation. In particular, the
example showed how to simulate, propagate, and process wideband signals. The GCC-PHAT
algorithm is used to estimate the direction of arrival of a wideband signal.
The model simulates the reception of three audio signals from different directions on a 10-element
uniform linear microphone array (ULA). After the addition of thermal noise at the receiver,
beamforming is applied and the result is played on a sound device.
The model consists of two stages: simulate the received audio signals and beamform the result. The
blocks that correspond to each stage of the model are:
• Audio Sources - Subsystem reads the audio files and specifies their direction.
Acoustic Beamforming Using Microphone Arrays
• From Multimedia File - Part of the Audio Sources subsystem, each block reads audio from a
different WAV file, 1000 samples at a time. The three blocks labelled source1, source2, and source3
correspond to the three sources.
• Concatenate - Concatenates the output of the three From Multimedia File blocks into a
three column matrix, one column per audio signal.
• source angles - Constant block specifies the incident directions of the audio sources to the
Wideband Rx Array block. The block outputs a 2x3 matrix. The two rows correspond to the
azimuth and elevation angles in degrees of each source, and the three columns correspond to the
three audio signals.
• Wideband Rx Array - Simulates the audio signals received at the ULA. The first input port to
this block is a 1000x3 matrix. Each column corresponds to the received samples of each audio
signal. The second input port (Ang) specifies the incident direction of the pulses. The first row of
Ang specifies the azimuth angle in degrees for each signal, and the second row specifies the
elevation angle in degrees for each signal. The second row is optional; if it is not specified, the
elevation angles are assumed to be 0 degrees. The output of this block is a 1000x10 matrix. Each
column corresponds to the audio recorded at each element of the microphone array. The
microphone array's configuration is specified in the Sensor Array tab of the block dialog panel.
This configuration should match the configuration specified on the block dialog panel of the Frost
Beamformer. See the “Conventional and Adaptive Beamformers” on page 17-256 Simulink®
example to learn how to use sensor array configuration variables for conveniently sharing the
same configuration across several blocks.
Beamforming
• Select beamform angle - Constant block controls the Multi-Port Switch output and
specifies which of the three source directions to beamform toward.
• Frost Beamformer - Performs Frost beamforming on the matrix passed via the input port X
along the direction specified via the input port Ang.
• 2-D Selector - Selects the received signal at one of the microphone elements.
• Manual switch - Switches between the non-beamformed and the beamformed audio stream sent
to the audio device.
Click on the Manual switch while running the simulation to toggle between playing the non-
beamformed audio stream and the beamformed stream. Setting a value of 1, 2, or 3 in the Select
beamform angle block while running the simulation beamforms along the direction of one of the
three audio signals. You will notice that the non-beamformed audio sounds garbled, while you can
clearly hear any one of the selected audio streams after beamforming.
The first model simulates the reception of a rectangular pulse with a delay offset on a 10-element
uniform linear antenna array (ULA). The source of the pulse is located at an azimuth of 45 degrees
and an elevation of 0 degrees. Noise with power of 0.5 watts is added to the signal at each element of
the array. A phase-shift beamformer is then applied. The example compares the output of the phase-
shift beamformer with the signal received at one of the antenna elements.
The model consists of a Signal Simulation stage and a Signal Processing stage. The blocks that
correspond to each stage of the model are:
Signal simulation
Conventional and Adaptive Beamformers
with a carrier frequency equal to the operating frequency specified in the block's dialog panel.
The second input (Ang) specifies the incident direction of the pulses. The antenna array's
configuration is created by a helper script as a variable in the MATLAB® workspace. This variable
is referenced by the Sensor Array tab of the block's dialog panel. Using a variable makes it
easier to share the antenna array's configuration across several blocks. Each column of the output
corresponds to the signal received at each element of the antenna array.
• Receiver Preamp - Adds thermal noise to the received signal.
Signal processing
• Angle to beamform - Constant block specifies to the Phase Shift Beamformer the
beamforming direction.
• Phase Shift Beamformer - Performs narrowband delay-and-sum beamforming on the matrix
passed via the input port X along the direction specified via the input port Ang.
• 2-D Selector - Selects the received signal at one of the antenna elements.
The displays below show the output of a single element (not beamformed) compared to the reference
pulse and the output of the beamformer compared to the reference pulse. When the received signal is
not beamformed, the pulse cannot be detected due to the noise. The display of the beamformer's
output shows that the beamformed signal is much larger than the noise. The output SNR is
approximately 10 times larger than that of the received signal on a single antenna, because a 10-
element array produces an array gain of 10.
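The factor-of-10 figure can be checked with a toy Monte Carlo sketch (plain Python, hypothetical numbers): summing N phase-aligned elements grows the signal amplitude by N while the noise power grows only by N, giving an array gain of N.

```python
import math, random

random.seed(1)
N = 10                    # array elements
trials = 2000
sig = 1.0                 # phase-aligned signal amplitude at each element
noise_var = 0.5           # noise power per element (as in the model)

single_p, beam_p = 0.0, 0.0
for _ in range(trials):
    noise = [random.gauss(0, math.sqrt(noise_var)) for _ in range(N)]
    beam = sum(sig + v for v in noise) / N      # normalized delay-and-sum
    single_p += noise[0] ** 2 / trials          # noise power, single element
    beam_p += (beam - sig) ** 2 / trials        # residual noise power, beamformed

# single_p / beam_p comes out close to N = 10: a 10 dB array gain.
```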
The second model illustrates beamforming in the presence of two interference signals arriving from
30 degrees and 50 degrees in azimuth. The interference amplitudes are much larger than the pulse
amplitude. The noise level is set to -50 dBW to highlight only the effect of interference. Phase shift,
MVDR, and LCMV beamformers are applied to the received signal and their results are compared.
Several new blocks are added to the blocks used in the previous model:
• Random Source - Two blocks generate Gaussian vectors to simulate the interference signals
(labeled Interference1 and Interference2)
• Concatenate - Concatenates the outputs of the Random Source and the Rectangular blocks
into a 3 column matrix.
• Signal direction - Constant block specifies the incident directions of the pulses and
interference signals to the Narrowband Rx Array block.
• MVDR Beamformer - Performs MVDR beamforming along the specified direction.
• LCMV Beamformer - Performs LCMV beamforming with the specified constraint matrix and
desired response.
The helper function used for this example is helperslexBeamformerParam. To open the function from
the model, click on Modify Simulation Parameters block. The pulse, interference signal and
beamforming directions can also be changed at runtime by changing the angles on the Signal
directions and the Angle to beamform blocks without stopping the simulation.
The figure below shows the output of the phase-shift beamformer. It is not able to detect the pulses
because the interference signals are much stronger than the pulse signal.
The next figure shows the output of the MVDR beamformer. An MVDR beamformer preserves the
signal arriving along a desired direction, while trying to suppress signals coming from other
directions. In this example, both interference signals were suppressed and the pulse at 45 degrees
azimuth was preserved.
The MVDR beamformer is, however, very sensitive to the beamforming direction. If the target signal
is received along a direction slightly different from the desired direction, the MVDR beamformer
suppresses it. This occurs because the MVDR beamformer treats all the signals, except the one along
the desired direction, as undesired interferences. This effect is sometimes referred to as "signal self-
nulling". The following display shows what happens if we change the target signal's direction in the
Signal directions block to 43 instead of 45. Notice how the received pulses have been
suppressed as compared to the reference pulse.
You can use an LCMV beamformer to prevent signal self-nulling by broadening the region
surrounding the signal direction where you want to preserve the signal. In this example, three
separate but closely-spaced constraints are imposed that preserve the response in directions
corresponding to 43, 45, and 47 degrees in azimuth. The desired responses in these directions are all
set to one. As shown in the figure below, the pulse is preserved.
Direction of Arrival with Beamscan and MVDR
This example simulates the reception of two narrowband incident signals on a 10-element uniform
linear antenna array (ULA). Both signal sources are located at 0 degrees elevation. One signal source
moves from 30 degrees azimuth to 50 degrees and back. The other signal source, with 3 dB less
power, moves in the opposite direction. After simulating the reception of the signals and adding noise,
the beamscan and MVDR spectra are calculated. Because a ULA is symmetric around its axis, a DOA
algorithm cannot uniquely determine azimuth and elevation. Therefore, the results returned by these
DOA estimators are in the form of broadside angles. In this example because the elevation of the
sources is at 0 degrees and the scanning region is between -90 to 90 degrees, the broadside and
azimuth angles are the same.
The model consists of signal simulation followed by DOA processing. The blocks used in the model
are:
Signal simulation
• Random Source - The blocks labeled Signal1 and Signal2 generate Gaussian vectors to
simulate the transmitted power of the narrowband plane waves. The signals are buffered at 300
samples per frame.
• Concatenate - Concatenates the outputs of the Random Source blocks into a 2-column matrix.
• Signal directions - Signal From Workspace block reads the arrival directions, in degrees, of
each signal from the workspace. The block outputs a vector of 2 angles, once per frame.
• Narrowband Rx Array - Simulates the signals received at the ULA. The first input to this block
is a matrix with 2 columns. Each column corresponds to one of the received plane waves. The
second input (Ang) is a 2-element vector that specifies the incident direction at the antenna array
of the corresponding plane waves. The antenna array's configuration is contained in a MATLAB®
workspace variable created by a helper script. This variable is used in the Sensor Array tab of
the dialog. Using a variable makes it easier to share the antenna array's configuration across
several blocks.
DOA processing
• ULA MVDR Spectrum - Calculates the spatial spectrum of the incoming narrowband signals using
the MVDR algorithm. This block also calculates the directions of arrival of the incoming signals.
• ULA Beamscan Spectrum - Calculates the spatial spectrum of the incoming narrowband signals
by scanning a region using a narrowband conventional beamformer. This block also calculates the
directions of arrival of the incoming signals.
Several dialog parameters of the model are calculated by the helper function
helperslexBeamscanMVDRDOAParam. To open the function from the model, click the Modify
Simulation Parameters block. This function is executed once when the model is loaded. It exports
to the workspace a structure whose fields are referenced by the dialogs. To modify any parameters,
either change the values in the structure at the command prompt or edit the helper function and
rerun it to update the parameter structure.
The beamscan spectrum is updated as the sources move towards each other. The spectrum shows two
wide peaks with different magnitudes moving in opposite directions.
When the sources are approximately 10 degrees apart, the peaks merge and the DOAs of the signals
are not clearly distinguished. The calculated DOAs start drifting from the actual values, as shown
in the displays. When two signals arrive from directions separated by less than the beamwidth, their
DOAs cannot be resolved accurately using the beamscan method.
The MVDR spectrum, on the other hand, has a higher resolution. The peaks in the spectrum are
narrower and can be distinguished even when the sources are very close to each other. The MVDR
algorithm is very sensitive to the sources' locations. It tries to filter out signals that are not located
precisely at one of the scan angles specified on the ULA MVDR Spectrum block. The peaks are
greatest when the sources are located at one of the specified scan angles. They will pulsate as the
sources move from one of the specified scan angles to another.
This example replaces the ULA configuration of the previous example with a 10-by-5 uniform
rectangular antenna array (URA). One signal source moves from 30 degrees azimuth, 10 degrees
elevation to 50 degrees azimuth, -5 degrees elevation. The other signal source, with 3 dB less power,
moves in the opposite direction. Rectangular arrays allow the DOA estimators to determine both the
azimuth and elevation. Matrix viewers are used instead of vector scopes to visualize the two-
dimensional spatial spectrum. Everything else is similar to the previous example.
The helper function used for this example is helperslex2DBeamscanMVDRDOAParam. To open the
function from the model, click the Modify Simulation Parameters block.
The results are similar to the previous example. The beamscan spectrum is updated as the sources
move towards each other. The spectrum shows two wide peaks with different magnitudes moving in
opposite directions.
When the sources are approximately 10 degrees apart, the peaks merge and the DOAs of the signals
are not clearly distinguished.
Clutter and Jammer Mitigation with STAP
This example models a monostatic radar with a moving target and a stationary barrage jammer. The
jammer transmits interfering signals through free space to the radar. A 6-element uniform linear
antenna array (ULA) with back baffled elements then receives the reflected pulse from the target as
well as the jammer's interference. A clutter simulator's output is also added to the received signal
before being processed. After adding noise, the signal is buffered into a data cube. In this example
the cube is processed by the ADPCA Canceller at the target's estimated range, azimuth angle, and
Doppler shift. In practice, the ADPCA Canceller would scan several ranges, azimuth angles, and
Doppler shifts because the speed and position of the target are unknown.
Several blocks in this example need to share the same sensor array configuration. This is done by
assigning a sensor array configuration object to a MATLAB variable and sharing this variable in the
Sensor Array tab of the block's dialog as will be shown later.
In addition to the blocks listed in the “End-to-End Monostatic Radar” on page 17-498 example, there
are:
• FreeSpace - Performs two-way propagation of the signal when two-way propagation is selected
on the block's dialog panel. This mode allows the use of one block instead of two to model the
transmitted and reflected propagation paths of the signal.
• Jammer - Generates a barrage jamming signal. This subsystem also includes a Platform to model
the speed and position of the jammer which are needed by the Freespace blocks. The position is
also needed to calculate the angle between the target and the jammer.
• Selector - Selects the target's angle from the Range Angle block. This angle is used by the
Narrowband Tx Array block.
• Constant Gamma Clutter - Generates clutter with a gamma value of -15 dB. Such a gamma
value can be used to model terrain covered by woods.
• Radar Platform - Updates the position and velocity of the radar.
STAP
• Angle Doppler Slicer - Slices the data cube along the dimension specified by the dialog
parameter. This example examines the angle-Doppler slice of the cube at the estimated range.
• Visualization - This subsystem displays the clutter interference in the time domain, the angle-
Doppler response of the received data, and the output of the ADPCA Canceller as well as its weights.
Several dialog parameters of the model are calculated by the helper function helperslexSTAPParam.
To open the function from the model, click the Modify Simulation Parameters block. This
function is executed once when the model is loaded. It exports to the workspace a structure whose
fields are referenced by the dialogs. To modify any parameters, either change the values in the
structure at the command prompt or edit the helper function and rerun it to update the parameter
structure.
Displays from different stages of the simulation are shown below. The first figure below shows how
the signal received at the antenna array is dominated by the clutter return. Because the radar is
located 1000 meters above the surface, the clutter returns from the ground start at 1000 meters.
The figure below shows the angle-Doppler response of the return for the estimated range bin. It
presents the clutter as a function of angle and Doppler. The clutter return looks like a diagonal line in
angle-Doppler space. Such a line is often referred to as the clutter ridge. The received jammer signal is
white noise, spread over the entire Doppler spectrum at approximately 60 degrees.
As you can see in the next figure, the weights of the ADPCA Canceller produce a deep null along the
clutter ridge and also in the direction of the jammer.
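The diagonal shape of the clutter ridge follows from geometry: for a side-looking radar on a platform moving at speed vp, clutter from azimuth θ returns a two-way Doppler shift of fd = 2·vp·sin(θ)/λ, which is linear in sin θ. A quick sketch (Python; the carrier frequency and platform speed here are illustrative assumptions, not values from the model):

```python
import numpy as np

# Illustrative values (assumptions, not taken from the model):
c, fc, vp = 3e8, 10e9, 100.0   # speed of light, carrier frequency, platform speed
lam = c / fc

# For a side-looking radar, clutter from azimuth theta arrives with two-way
# Doppler fd = 2 * vp * sin(theta) / lambda: linear in sin(theta), which is
# exactly the diagonal "clutter ridge" in angle-Doppler space.
theta = np.array([-90.0, -45.0, 0.0, 45.0, 90.0])
fd = 2 * vp * np.sin(np.radians(theta)) / lam
for t, f in zip(theta, fd):
    print(f"{t:6.1f} deg -> {f:9.1f} Hz")
```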
The figure below displays the return at the output of the ADPCA Canceller, clearly showing the
target's range at 1750 meters. The barrage jammer and clutter have been filtered out.
Constant False Alarm Rate (CFAR) Detection
Introduction
One important task a radar system performs is target detection. The detection itself is fairly
straightforward. It compares the signal to a threshold. Therefore, the real work on detection is
coming up with an appropriate threshold. In general, the threshold is a function of both the
probability of detection and the probability of false alarm.
In many phased array systems, because of the cost associated with a false detection, it is desirable to
have a detection threshold that not only maximizes the probability of detection but also keeps the
probability of false alarm below a preset level.
There is extensive literature on how to determine the detection threshold. Readers might be
interested in the “Signal Detection in White Gaussian Noise” on page 17-307 and “Signal Detection
Using Multiple Samples” on page 17-314 examples for some well known results. However, all these
classical results are based on theoretical probabilities and are limited to white Gaussian noise with
known variance (power). In real applications, the noise is often colored and its power is unknown.
CFAR technology addresses these issues. In CFAR, when detection is needed for a given cell,
often termed the cell under test (CUT), the noise power is estimated from neighboring cells. Then
the detection threshold, T, is given by
T = αPn
where Pn is the noise power estimate and α is a scaling factor called the threshold factor.
From the equation, it is clear that the threshold adapts to the data. It can be shown that with the
appropriate threshold factor, α, the resulting probability of false alarm can be kept at a constant,
hence the name CFAR.
The cell averaging CFAR detector is probably the most widely used CFAR detector. It is also used as a
baseline comparison for other CFAR techniques. In a cell averaging CFAR detector, noise samples are
extracted from both leading and lagging cells (called training cells) around the CUT. The noise
estimate can be computed as [1]
Pn = (1/N) ∑_{m=1}^{N} xm
where N is the number of training cells and xm is the sample in each training cell. If xm happens to be
the output of a square law detector, then Pn represents the estimated noise power. In general, the
numbers of leading and lagging training cells are the same. Guard cells are placed adjacent to the
CUT, both leading and lagging it. The purpose of these guard cells is to prevent signal components
from leaking into the training cells, which could adversely affect the noise estimate.
The following figure shows the relation among these cells for the 1-D case.
With the above cell averaging CFAR detector, assuming the data passed into the detector is from a
single pulse, i.e., no pulse integration involved, the threshold factor can be written as [1]
α = N(Pfa^(−1/N) − 1)
In the rest of this example, we show how to use the Phased Array System Toolbox to perform cell
averaging CFAR detection. For simplicity and without loss of generality, we still assume that the
noise is white Gaussian. This enables a comparison between CFAR and classical detection theory.
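As a numerical sanity check of the threshold-factor formula, the following sketch (Python/NumPy, independent of the toolbox) builds the same 23-cell scenario used in this example (20 training cells, 2 guard cells, square-law-detected white Gaussian noise) and verifies by Monte Carlo that α = N(Pfa^(−1/N) − 1) holds the false alarm rate near the design value:

```python
import numpy as np

def ca_cfar_threshold_factor(n_train, pfa):
    """Single-pulse, square-law CA-CFAR: alpha = N * (Pfa**(-1/N) - 1)."""
    return n_train * (pfa ** (-1.0 / n_train) - 1.0)

n_train, n_guard, pfa = 20, 2, 1e-3
alpha = ca_cfar_threshold_factor(n_train, pfa)

# Square-law-detected white Gaussian noise: 23 cells, CUT at index 11
# (0-based), 10 training and 1 guard cell on each side, 100000 trials.
rng = np.random.default_rng(2010)
n_trials, n_cells, cut = 100_000, 23, 11
npower = 10 ** (-10 / 10)          # -10 dBW noise power
x = npower / 2 * (rng.standard_normal((n_trials, n_cells)) ** 2 +
                  rng.standard_normal((n_trials, n_cells)) ** 2)

ht, hg = n_train // 2, n_guard // 2
train = np.concatenate([x[:, cut - hg - ht:cut - hg],
                        x[:, cut + hg + 1:cut + hg + 1 + ht]], axis=1)
pn = train.mean(axis=1)                    # per-trial noise power estimate
pfa_hat = np.mean(x[:, cut] > alpha * pn)  # empirical false alarm rate
print(pfa_hat)                             # close to the designed 1e-3
```

For exponentially distributed square-law samples the resulting false alarm probability is (1 + α/N)^(−N), which the chosen α makes exactly equal to the design Pfa, independent of the (unknown) noise power.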
In this detector we use 20 training cells and 2 guard cells in total. This means that there are 10
training cells and 1 guard cell on each side of the CUT. As mentioned above, if we assume that the
signal is from a square law detector with no pulse integration, the threshold can be calculated based
on the number of training cells and the desired probability of false alarm. Assuming the desired false
alarm rate is 0.001, we can configure the CFAR detector as follows so that this calculation can be
carried out.
exp_pfa = 1e-3;
cfar = phased.CFARDetector('NumTrainingCells',20,'NumGuardCells',2);
cfar.ThresholdFactor = 'Auto';
cfar.ProbabilityFalseAlarm = exp_pfa;
cfar =
phased.CFARDetector with properties:
Method: 'CA'
NumGuardCells: 2
NumTrainingCells: 20
ThresholdFactor: 'Auto'
ProbabilityFalseAlarm: 1.0000e-03
OutputFormat: 'CUT result'
ThresholdOutputPort: false
NoisePowerOutputPort: false
We now simulate the input data. Since the focus is to show that the CFAR detector can keep the false
alarm rate under a certain value, we just simulate the noise samples in those cells. Here are the
settings:
• The data sequence is 23 samples long, and the CUT is cell 12. This leaves 10 training cells and 1
guard cell on each side of the CUT.
• The false alarm rate is calculated using 100,000 Monte Carlo trials.
rs = RandStream('mt19937ar','Seed',2010);
npower = db2pow(-10); % assume 10 dB SNR (noise power of -10 dBW)
Ntrials = 1e5;
Ncells = 23;
CUTIdx = 12;
% noise samples after a square law detector
rsamp = randn(rs,Ncells,Ntrials)+1i*randn(rs,Ncells,Ntrials);
x = abs(sqrt(npower/2)*rsamp).^2;
To perform the detection, pass the data through the detector. In this example, there is only one CUT,
so the output is a logical vector containing the detection result for all the trials. If the result is true, it
means that a target is present in the corresponding trial. In our example, all detections are false
alarms because we are only passing in noise. The resulting false alarm rate can then be calculated
based on the number of false alarms and the number of trials.
x_detected = cfar(x,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
act_pfa = 9.4000e-04
The result shows that the resulting probability of false alarm is below 0.001, just as we specified.
As explained in the earlier part of this example, there are only a few cases in which the CFAR
detector can automatically compute the appropriate threshold factor. For example, using the previous
scenario, if we employ 10-pulse noncoherent integration before the data goes into the detector, the
automatic threshold can no longer provide the desired false alarm rate.
act_pfa = 0
One may be puzzled why we think a resulting false alarm rate of 0 is worse than a false alarm rate of
0.001. After all, isn't a false alarm rate of 0 a great thing? The answer to this question lies in the fact
that when the probability of false alarm is decreased, so is the probability of detection. In this case,
because the true false alarm rate is far below the allowed value, the detection threshold is set too
high. The same probability of detection can be achieved with our desired probability of false alarm at
lower cost; for example, with lower transmitter power.
In most cases, the threshold factor needs to be estimated based on the specific environment and
system configuration. We can configure the CFAR detector to use a custom threshold factor, as shown
below.
release(cfar);
cfar.ThresholdFactor = 'Custom';
Continuing with the pulse integration example and using empirical data, we found that we can use a
custom threshold factor of 2.35 to achieve the desired false alarm rate. Using this threshold, we see
that the resulting false alarm rate matches the expected value.
cfar.CustomThresholdFactor = 2.35;
x_detected = cfar(xn,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
act_pfa = 9.6000e-04
A CFAR detection occurs when the input signal level in a cell exceeds the threshold level. The
threshold level for each cell depends on the threshold factor and the noise power estimated from that
cell's training cells. To maintain a constant false alarm rate, the detection threshold increases or
decreases in proportion to the noise power in the training cells. Configure the CFAR detector to output
the threshold used for each detection using the ThresholdOutputPort property. Use an automatic
threshold factor and 200 training cells.
release(cfar);
cfar.ThresholdOutputPort = true;
cfar.ThresholdFactor = 'Auto';
cfar.NumTrainingCells = 200;
rs = RandStream('mt19937ar','Seed',2010);
Npoints = 1e4;
rsamp = randn(rs,Npoints,1)+1i*randn(rs,Npoints,1);
ramp = linspace(1,10,Npoints)';
xRamp = abs(sqrt(npower*ramp./2).*rsamp).^2;
[x_detected,th] = cfar(xRamp,1:length(xRamp));
plot(1:length(xRamp),xRamp,1:length(xRamp),th,...
find(x_detected),xRamp(x_detected),'o')
legend('Signal','Threshold','Detections','Location','Northwest')
xlabel('Time Index')
ylabel('Level')
Here, the threshold increases with the noise power of the signal to maintain the constant false alarm
rate. Detections occur where the signal level exceeds the threshold.
In this section, we compare the performance of a CFAR detector with the classical detection theory
using the Neyman-Pearson principle. Returning to the first example and assuming the true noise
power is known, the theoretical threshold can be calculated as
T_ideal = npower*db2pow(npwgnthresh(exp_pfa));
The false alarm rate of this classical Neyman-Pearson detector can be calculated using this
theoretical threshold.
act_Pfa_np = sum(x(CUTIdx,:)>T_ideal)/Ntrials
act_Pfa_np = 9.5000e-04
Because we know the noise power, classical detection theory also produces the desired false alarm
rate. The false alarm rate achieved by the CFAR detector is similar.
release(cfar);
cfar.ThresholdOutputPort = false;
cfar.NumTrainingCells = 20;
x_detected = cfar(x,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
act_pfa = 9.4000e-04
Next, assume that both detectors are deployed to the field and that the noise power is 1 dB more than
expected. In this case, if we use the theoretical threshold, the resulting probability of false alarm is
four times more than what we desire.
act_Pfa_np = 0.0041
x_detected = cfar(x,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
act_pfa = 0.0011
Hence, the CFAR detector is robust to noise power uncertainty and better suited to field applications.
Finally, use CFAR detection in the presence of colored noise. We first apply the classical detection
threshold to the data.
npower = db2pow(-10);
fcoeff = maxflat(10,'sym',0.2);
x = abs(sqrt(npower/2)*filter(fcoeff,1,rsamp)).^2; % colored noise
act_Pfa_np = sum(x(CUTIdx,:)>T_ideal)/Ntrials
act_Pfa_np = 0
Note that the resulting false alarm rate cannot meet the requirement. However, using the CFAR
detector with a custom threshold factor, we can obtain the desired false alarm rate.
release(cfar);
cfar.ThresholdFactor = 'Custom';
cfar.CustomThresholdFactor = 12.85;
x_detected = cfar(x,CUTIdx);
act_pfa = sum(x_detected)/Ntrials
act_pfa = 0.0010
In the previous sections, the noise estimate was computed from training cells leading and lagging the
CUT in a single dimension. We can also perform CFAR detection on images. Cells correspond to pixels
in the images, and guard cells and training cells are placed in bands around the CUT. The detection
threshold is computed from cells in the rectangular training band around the CUT.
In the figure above, the guard band size is [2 2] and the training band size is [4 3]. The size indices
refer to the number of cells on each side of the CUT in the row and column dimensions, respectively.
The guard band size can also be given as the scalar 2, because the size is the same along the row and
column dimensions.
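The band sizes determine how many cells contribute to the noise estimate. A small helper (Python; a sketch, not toolbox code) counts the training cells as those in the outer rectangle minus the guard-plus-CUT rectangle:

```python
def training_cell_count(guard, train):
    """Number of cells in the rectangular training band around the CUT.

    guard, train: (rows, cols) band sizes on each side of the CUT."""
    gr, gc = guard
    tr, tc = train
    outer = (2 * (gr + tr) + 1) * (2 * (gc + tc) + 1)  # full region incl. CUT
    inner = (2 * gr + 1) * (2 * gc + 1)                # guard band plus CUT
    return outer - inner

print(training_cell_count((2, 2), (4, 3)))  # 118 for guard [2 2], training [4 3]
```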
Next, create a two-dimensional CFAR detector. Use a probability of false alarm of 1e-5 and specify a
guard band size of 5 cells and a training band size of 10 cells.
cfar2D = phased.CFARDetector2D('GuardBandSize',5,'TrainingBandSize',10,...
'ProbabilityFalseAlarm',1e-5);
Next, load and plot a range-Doppler image. The image includes returns from two stationary targets
and one target moving away from the radar.
[resp,rngGrid,dopGrid] = helperRangeDoppler;
Use CFAR to search the range-Doppler space for objects, and plot a map of the detections. Search
from -10 to 10 kHz and from 1000 to 4000 m. First, define the cells under test for this region.
Compute a detection result for each cell under test; each pixel in the search region is a cell in this
example. Then plot a map of the detection results for the range-Doppler image.
detections = cfar2D(resp,CUTIdx);
helperDetectionsMap(resp,rngGrid,dopGrid,rangeIndx,dopplerIndx,detections)
The three objects are detected. A data cube of range-Doppler images over time can likewise be
provided as the input signal to cfar2D, and detections will be calculated in a single step.
Summary
In this example, we presented the basic concepts behind CFAR detectors. In particular, we explored
how to use the Phased Array System Toolbox to perform cell averaging CFAR detection on signals and
range-Doppler images. The comparison between the performance offered by a cell averaging CFAR
detector and a detector equipped with the theoretically calculated threshold shows clearly that the
CFAR detector is more suitable for real field applications.
Reference
[1] Mark Richards, Fundamentals of Radar Signal Processing, McGraw Hill, 2005
Detector Performance Analysis Using ROC Curves
ROC curves are often used to assess the performance of a radar or sonar detector. ROC curves are
plots of the probability of detection (Pd) vs. the probability of false alarm (Pfa) for a given signal-to-
noise ratio (SNR).
Introduction
The probability of detection (Pd) is the probability of saying that "1" is true given that event "1"
occurred. The probability of false alarm (Pfa) is the probability of saying that "1" is true given that the
"0" event occurred. In applications such as sonar and radar, the "1" event indicates that a target is
present, and the "0" event indicates that a target is not present.
A detector's performance is measured by its ability to achieve a certain probability of detection and
probability of false alarm for a given SNR. Examining a detector's ROC curves provides insight into
its performance. We can use the rocsnr function to calculate and plot ROC curves.
Given an SNR value, you can calculate the Pd and Pfa values that a linear or square-law detector can
achieve using a single pulse. Assuming we have an SNR value of 8 dB and our requirements dictate a
Pfa value of at most 1%, what value of Pd can the detector achieve? We can use the rocsnr function to
calculate the Pd and Pfa values and then determine what value of Pd corresponds to Pfa = 0.01. Note
that by default the rocsnr function assumes coherent detection.
[Pd,Pfa] = rocsnr(8);
idx = find(Pfa==1e-2); % find the index for Pfa = 0.01
Using the index determined above, we can find the Pd value that corresponds to Pfa = 0.01.
Pd(idx)
ans = 0.8899
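For the coherent case this Pd/Pfa/SNR relationship can be checked against the closed form Pd = Q(Q⁻¹(Pfa) − √(2·SNR)) for a known signal in complex white Gaussian noise. The sketch below (Python, standard library only) states the textbook formula as a cross-check, not the toolbox's internal implementation:

```python
from math import sqrt
from statistics import NormalDist

def coherent_roc_pd(snr_db, pfa):
    """Pd of a coherent (matched-filter) detector for a known signal in
    complex white Gaussian noise: Pd = Q(Q^-1(Pfa) - sqrt(2*SNR))."""
    nd = NormalDist()
    snr = 10 ** (snr_db / 10)          # dB -> linear
    return 1 - nd.cdf(nd.inv_cdf(1 - pfa) - sqrt(2 * snr))

print(round(coherent_roc_pd(8, 0.01), 4))  # ~0.8899, matching the result above
```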
One feature of the rocsnr function is that you can specify a vector of SNR values and rocsnr
calculates the ROC curve for each of these SNR values. Instead of individually calculating Pd and Pfa
values for a given SNR, we can view the results in a plot of ROC curves. The rocsnr function plots the
ROC curves by default if no output arguments are specified. Calling the rocsnr function with an input
vector of four SNR values and no output arguments produces a plot of the ROC curves.
SNRvals = [2 4 8 9.4];
rocsnr(SNRvals);
In the plot we can select the data cursor button in the toolbar (or in the Tools menu) and then select
the SNR = 8 dB curve at the point where Pd = 0.9 to verify that Pfa is approximately 0.01.
One way to improve a detector's performance is to average over several pulses. This is particularly
useful in cases where the signal of interest is known and occurs in additive complex white noise.
Although this still applies to both linear and square-law detectors, the result for square-law detectors
could be off by about 0.2 dB. Let's continue our example by assuming an SNR of 8 dB and averaging
over two pulses.
rocsnr(8,'NumPulses',2);
By inspecting the plot we can see that averaging over two pulses resulted in a higher probability of
detection for a given false alarm rate. With an SNR of 8 dB and averaging over two pulses, you can
constrain the probability of false alarm to be at most 0.0001 and achieve a probability of detection of
0.9. Recall that for a single pulse, we had to allow the probability of false alarm to be as much as 1%
to achieve the same probability of detection.
Noncoherent Detector
To this point, we have assumed we were dealing with a known signal in complex white Gaussian
noise. The rocsnr function by default assumes a coherent detector. To analyze the performance of a
detector for the case where the signal is known except for the phase, you can specify a noncoherent
detector. Using the same SNR values as before, let's analyze the performance of a noncoherent
detector.
rocsnr(SNRvals,'SignalType','NonfluctuatingNoncoherent');
Focus on the ROC curve corresponding to an SNR of 8 dB. By inspecting the graph with the data
cursor, you can see that to achieve a probability of detection of 0.9, you must tolerate a false-alarm
probability of up to 0.05. Without using phase information, we need a higher SNR to achieve the
same Pd for a given Pfa. For noncoherent linear detectors, we can use Albersheim's equation to
determine what value of SNR will achieve our desired Pd and Pfa.
SNR_valdB = albersheim(0.9,0.01)
SNR_valdB = 9.5027
Plotting the ROC curve for the SNR value approximated by Albersheim's equation, we can see that
the detector will achieve Pd = 0.9 and Pfa = 0.01. Note that Albersheim's technique applies only
to noncoherent detectors.
rocsnr(SNR_valdB,'SignalType','NonfluctuatingNoncoherent');
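Albersheim's equation itself is simple enough to sketch directly (Python; the constants are the standard published ones). For Pd = 0.9 and Pfa = 0.01 with a single pulse it reproduces the 9.5027 dB figure quoted above:

```python
from math import log, log10, sqrt

def albersheim(pd, pfa, n=1):
    """Albersheim's approximation for the single-sample SNR (dB) that a
    noncoherent linear detector needs with n integrated pulses."""
    a = log(0.62 / pfa)
    b = log(pd / (1 - pd))
    return (-5 * log10(n)
            + (6.2 + 4.54 / sqrt(n + 0.44)) * log10(a + 0.12 * a * b + 1.7 * b))

print(round(albersheim(0.9, 0.01), 4))  # ~9.5027 dB, matching the text
```

Integrating more pulses lowers the per-pulse SNR requirement, which the −5·log10(n) term and the shrinking coefficient both capture.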
All the discussions above assume that the target is nonfluctuating, which means that the target's
statistical characteristics do not change over time. However, in real scenarios, targets can accelerate
and decelerate as well as roll and pitch. These factors cause the target's radar cross section (RCS) to
vary over time. A set of statistical models called Swerling models are often used to describe the
random variation in target RCS.
There are four Swerling models, namely Swerling 1-4. The nonfluctuating target is often termed
either Swerling 0 or Swerling 5. Each Swerling model describes how a target's RCS varies over time
and the probability distribution of the variation.
Because the target RCS is varying, the ROC curves for fluctuating targets are not the same as the
nonfluctuating ones. In addition, because Swerling targets add random phase into the received
signal, it is harder to use a coherent detector for a Swerling target. Therefore, noncoherent detection
techniques are often used for Swerling targets.
Let us now compare the ROC curves for a nonfluctuating target and a Swerling 1 target. In particular,
we want to explore what the SNR requirements are for both situations if we want to achieve the same
Pd and Pfa. For such a comparison, it is often convenient to plot the ROC curve as Pd against SNR for
a given Pfa. We can use the rocpfa function to plot ROC curves in this form.
Let us assume that we are doing noncoherent detection with 10 integrated pulses, with the desired
Pfa being at most 1e-8. We first plot the ROC curve for a nonfluctuating target.
rocpfa(1e-8,'NumPulses',10,'SignalType','NonfluctuatingNoncoherent')
We then plot the ROC curve for a Swerling 1 target for comparison.
rocpfa(1e-8,'NumPulses',10,'SignalType','Swerling1')
From the figures, we can see that for a Pd of 0.9, we require an SNR of about 6 dB if the target is
nonfluctuating. However, if the target is a Swerling case 1 model, the required SNR jumps to more
than 14 dB, an 8 dB difference. This will greatly impact the design of the system.
As in the case of nonfluctuating targets, we have approximation equations to help determine the
required SNR without having to plot all the curves. The equation used for fluctuating targets is
Shnidman's equation. For the scenario we used to plot the ROC curves, the SNR requirements can be
derived using the shnidman function.
snr_sw1_db = shnidman(0.9,1e-8,10,1) % Swerling case 1

snr_sw1_db = 14.7131
The calculated SNR requirement matches the value derived from the curve.
Summary
ROC curves are useful for analyzing detector performance, both for coherent and noncoherent
systems. We used the rocsnr function to analyze the effectiveness of a linear detector for various SNR
values. We also reviewed the improvement in detector performance achieved by averaging multiple
samples. Lastly we showed how we can use the rocsnr and rocpfa functions to analyze detector
performance when using a noncoherent detector for both nonfluctuating and fluctuating targets.
Doppler Estimation
This example shows a monostatic pulse radar detecting the radial velocity of moving targets at
specific ranges. The speed is derived from the Doppler shift caused by the moving targets. We first
identify the existence of a target at a given range and then use Doppler processing to determine the
radial velocity of the target at that range.
First, we define a radar system. Since the focus of this example is on Doppler processing, we use the
radar system built in the example “Designing a Basic Monostatic Pulse Radar” on page 17-449.
Readers are encouraged to explore the details of radar system design through that example.
load BasicMonostaticRadarExampleData;
System Simulation
Targets
Doppler processing exploits the Doppler shift caused by the moving target. We now define three
targets by specifying their positions, radar cross sections (RCS), and velocities.
tgtpos = [[1200; 1600; 0],[3543.63; 0; 0],[1600; 0; 1200]];
tgtvel = [[60; 80; 0],[0;0;0],[0; 100; 0]];
tgtmotion = phased.Platform('InitialPosition',tgtpos,'Velocity',tgtvel);
Note that the first and third targets are both located at a range of 2000 m and are both traveling at a
speed of 100 m/s. The difference is that the first target is moving along the radial direction, while the
third target is moving in the tangential direction. The second target is not moving.
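As a quick cross-check of this geometry, the radial speed of each target is the projection of its velocity onto the line of sight from the radar. A short Python/NumPy sketch (mirroring the MATLAB setup above; not part of the toolbox):

```python
import numpy as np

# Targets as columns, radar at the origin (values copied from the example).
tgtpos = np.array([[1200.0, 3543.63, 1600.0],
                   [1600.0, 0.0,     0.0],
                   [0.0,    0.0,     1200.0]])
tgtvel = np.array([[60.0, 0.0, 0.0],
                   [80.0, 0.0, 100.0],
                   [0.0,  0.0, 0.0]])

rng = np.linalg.norm(tgtpos, axis=0)        # ranges: 2000, 3543.63, 2000 m
los = tgtpos/rng                            # unit line-of-sight vectors
radial_speed = np.sum(los*tgtvel, axis=0)   # velocity projected onto LOS
print(rng, radial_speed)                    # radial speeds: 100, 0, 0 m/s
```

Only the first target has a nonzero radial component, which is what Doppler processing will see.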
Environment
We also need to set up the propagation environment for each target. Since we are using a monostatic
radar, we use the two-way propagation model.
fs = waveform.SampleRate;
channel = phased.FreeSpace(...
'SampleRate',fs,...
'TwoWayPropagation',true,...
'OperatingFrequency',fc);
Signal Synthesis
With the radar system, the environment, and the targets defined, we can now simulate the received
signal as echoes reflected from the targets. The simulated data is a data matrix with the fast time
(time within each pulse) along each column and the slow time (time between pulses) along each row.
We need to set the seed for noise generation at the receiver so that we can reproduce the same
results.
receiver.SeedSource = 'Property';
receiver.Seed = 2009;
prf = waveform.PRF;
num_pulse_int = 10;
fast_time_grid = unigrid(0,1/fs,1/prf,'[)');
slow_time_grid = (0:num_pulse_int-1)/prf;
for m = 1:num_pulse_int
    % ... (loop body elided in this excerpt: each iteration transmits one
    % pulse, propagates it to the targets and back, and stores the received
    % pulse in rxpulses)
end
Range Detection
To be able to estimate the Doppler shift of the targets, we first need to locate the targets through
range detection. Because the Doppler shift spreads the signal power into both I and Q channels, we
need to rely on the signal energy to do the detection. This means that we use noncoherent detection
schemes.
The detection process is described in detail in the aforementioned example, so here we simply perform
the necessary steps to estimate the target ranges.
% calculate initial threshold
pfa = 1e-6;
% in loaded system, noise bandwidth is half of the sample rate
noise_bw = receiver.SampleRate/2;
npower = noisepow(noise_bw,...
receiver.NoiseFigure,receiver.ReferenceTemperature);
threshold = npower * db2pow(npwgnthresh(pfa,num_pulse_int,'noncoherent'));
tvg = phased.TimeVaryingGain(...
'RangeLoss',2*fspl(range_gates,lambda),...
'ReferenceLoss',2*fspl(prop_speed/(prf*2),lambda));
rxpulses = tvg(rxpulses);
range_estimates = 1×2
2000 3550
These estimates suggest the presence of targets at ranges of 2000 m and 3550 m.
Doppler Spectrum
Once we successfully estimated the ranges of the targets, we can then estimate the Doppler
information for each target.
Doppler estimation is essentially a spectrum estimation process. Therefore, the first step in Doppler
processing is to generate the Doppler spectrum from the received signal.
The received signal after the matched filter is a matrix whose columns correspond to received pulses.
Unlike range estimation, Doppler processing processes the data across the pulses (slow time), which
is along the rows of the data matrix. Since we are using 10 pulses, there are 10 samples available for
Doppler processing. Because there is one sample from each pulse, the sampling frequency for the
Doppler samples is the pulse repetition frequency (PRF).
As predicted by the Fourier theory, the maximum unambiguous Doppler shift a pulse radar system
can detect is half of its PRF. This also translates to the maximum unambiguous speed a radar system
can detect. In addition, the number of pulses determines the resolution in the Doppler spectrum,
which determines the resolution of the speed estimates.
max_speed = dop2speed(prf/2,lambda)/2
max_speed = 224.6888
speed_res = 2*max_speed/num_pulse_int
speed_res = 44.9378
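Both quantities follow directly from the PRF, the wavelength, and the number of pulses. A Python sketch using assumed parameters for the loaded radar (10 GHz carrier and a PRF set for a 5 km maximum unambiguous range; these are assumptions, not values stated in this excerpt):

```python
# Maximum unambiguous speed and speed resolution from assumed radar parameters.
c = 299792458.0
fc = 10e9                              # assumed operating frequency
lam = c/fc                             # wavelength
prf = c/(2*5000)                       # assumed PRF for 5 km unambiguous range
num_pulse_int = 10

max_speed = (prf/2)*lam/2              # dop2speed(prf/2, lambda)/2
speed_res = 2*max_speed/num_pulse_int  # resolution set by number of pulses
print(max_speed, speed_res)            # ≈ 224.69 m/s and ≈ 44.94 m/s
```

The factor of 2 in both expressions accounts for the two-way (round-trip) Doppler shift.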
As shown in the calculation above, in this example, the maximum detectable speed is 225 m/s, either
approaching (-225) or departing (+225). The resulting Doppler resolution is about 45 m/s, which
means that the two speeds must be at least 45 m/s apart to be separable in the Doppler spectrum. To
improve the ability to discriminate between different target speeds, more pulses are needed.
However, the number of pulses available is also limited by the radial velocity of the target. Since the
Doppler processing is limited to a given range, all pulses used in the processing have to be collected
before the target moves from one range bin to the next.
Because the number of Doppler samples is in general limited, it is common to zero pad the
sequence to interpolate the resulting spectrum. Zero padding does not improve the resolution of the
resulting spectrum, but it can improve the estimation of the locations of the peaks in the spectrum.
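The effect of zero padding can be illustrated with a toy Python sketch (all values are illustrative assumptions): a 10-sample tone at 23 Hz falls into the 20 Hz bin without padding, while a 256-point zero-padded FFT locates the peak much more precisely.

```python
import numpy as np

# A short tone whose frequency falls between coarse FFT bins.
fs, n = 100.0, 10
t = np.arange(n)/fs
x = np.exp(2j*np.pi*23.0*t)              # tone at 23 Hz

# Peak location without and with zero padding to 256 points.
pk_coarse = np.fft.fftfreq(n, 1/fs)[np.argmax(np.abs(np.fft.fft(x)))]
pk_fine = np.fft.fftfreq(256, 1/fs)[np.argmax(np.abs(np.fft.fft(x, 256)))]
print(pk_coarse, pk_fine)                # 20.0 Hz vs ≈ 23.0 Hz
```

The padded spectrum interpolates the same underlying 10-sample transform; two tones closer than one native bin would still not be separable.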
The Doppler spectrum can be generated using a periodogram. We zero pad the slow time sequence to
256 points.
num_range_detected = numel(range_estimates);
[p1, f1] = periodogram(rxpulses(range_detect(1),:).',[],256,prf, ...
'power','centered');
[p2, f2] = periodogram(rxpulses(range_detect(2),:).',[],256,prf, ...
'power','centered');
The speed corresponding to each sample in the spectrum can then be calculated. Note that we need
to take the round-trip effect into consideration.
speed_vec = dop2speed(f1,lambda)/2;
Doppler Estimation
To estimate the Doppler shift associated with each target, we need to find the locations of the peaks
in each Doppler spectrum. In this example, the targets are present at two different ranges, so the
estimation process needs to be repeated for each range.
Let's first plot the Doppler spectrum corresponding to the range of 2000 meters.
periodogram(rxpulses(range_detect(1),:).',[],256,prf,'power','centered');
Note that we are only interested in detecting the peaks, so the spectrum values themselves are not
critical. From the plot of the Doppler spectrum, we notice that 5 dB below the maximum peak is a good
threshold. Therefore, we use -5 dB as our threshold on the normalized Doppler spectrum.
spectrum_data = p1/max(p1);
[~,dop_detect1] = findpeaks(pow2db(spectrum_data),'MinPeakHeight',-5);
sp1 = speed_vec(dop_detect1)
sp1 = 2×1
-103.5675
3.5108
The results show that there are two targets at the 2000 m range: one with a velocity of 3.5 m/s and
the other with -104 m/s. The value -104 m/s can be easily associated with the first target, since the
first target is departing at a radial velocity of 100 m/s, which, given the Doppler resolution of this
example, is very close to the estimated value. The value 3.5 m/s requires more explanation. Since the
third target is moving along the tangential direction, there is no velocity component in the radial
direction. Therefore, the radar cannot detect the Doppler shift of the third target. The true radial
velocity of the third target, hence, is 0 m/s and the estimate of 3.5 m/s is very close to the true value.
Note that these two targets cannot be discerned using only range estimation because their range
values are the same.
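The same normalize-then-threshold peak search can be sketched in Python with SciPy (toy two-tone data standing in for the slow-time samples; all values here are illustrative assumptions, not the example's data):

```python
import numpy as np
from scipy.signal import find_peaks, periodogram

# Two complex tones standing in for two Doppler returns.
fs = 1000.0
t = np.arange(256)/fs
x = np.exp(2j*np.pi*100*t) + 0.7*np.exp(-2j*np.pi*250*t)

# Centered periodogram, normalized to its maximum, in dB.
f, p = periodogram(x, fs, nfft=1024, return_onesided=False)
f, p = np.fft.fftshift(f), np.fft.fftshift(p)   # 'centered', as in MATLAB
p_db = 10*np.log10(p/np.max(p))

# Keep peaks within 5 dB of the maximum (the -5 dB threshold above).
idx, _ = find_peaks(p_db, height=-5)
print(f[idx])   # two peaks, near -250 Hz and 100 Hz
```

Sidelobes sit well below the -5 dB threshold, so only the two genuine peaks survive.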
The same operations are then applied to the data corresponding to the range of 3550 meters.
periodogram(rxpulses(range_detect(2),:).',[],256,prf,'power','centered');
spectrum_data = p2/max(p2);
[~,dop_detect2] = findpeaks(pow2db(spectrum_data),'MinPeakHeight',-5);
sp2 = speed_vec(dop_detect2)
sp2 = 0
This result shows an estimated speed of 0 m/s, which matches the fact that the target at this range is
not moving.
Summary
This example showed a simple way to estimate the radial speed of moving targets using a pulse radar
system. We generated the Doppler spectrum from the received signal and estimated the peak
locations from the spectrum. These peak locations correspond to the targets' radial speeds. The
limitations of Doppler processing are also discussed in the example.
Range Estimation Using Stretch Processing
Introduction
The linear FM waveform is a popular choice in modern radar systems because it can achieve high range
resolution by sweeping through a wide bandwidth. However, when the bandwidth is on the order of
hundreds of megahertz, or even gigahertz, it becomes difficult to perform matched filtering or pulse
compression in the digital domain because high-quality A/D converters are hard to find at such data
rates.
Stretch processing, sometimes also referred to as deramp, is a technique that can be used in such
situations. Stretch processing is performed in the analog domain.
The received signal is first mixed with a replica of the transmitted pulse. Note that the replica
matches the return from the reference range. Once mixed, the resulting signal contains a frequency
component that corresponds to the range offset measured from this reference range. Hence, the
exact range can be estimated by performing a spectral analysis on the signal at the output of the
mixer.
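The deramp idea can be sketched numerically in Python (the waveform parameters below are assumed for illustration and the sign conventions are simplified; this is not the toolbox implementation):

```python
import numpy as np

# Assumed LFM parameters: 3 MHz sweep over a 20 us pulse.
c = 3e8
bw, pw = 3e6, 20e-6
slope = bw/pw                          # sweep slope (Hz/s)
fs = 2*bw
t = np.arange(int(round(pw*fs)))/fs

ref_rng, tgt_rng = 6700.0, 6850.0
tau = 2*(tgt_rng - ref_rng)/c          # delay relative to the reference range

ref = np.exp(1j*np.pi*slope*t**2)             # reference chirp
echo = np.exp(1j*np.pi*slope*(t - tau)**2)    # echo from the target
beat = echo*np.conj(ref)               # deramped: a tone at about -slope*tau

# Locate the beat tone and map it back to range.
fb = np.fft.fftfreq(4096, 1/fs)[np.argmax(np.abs(np.fft.fft(beat, 4096)))]
est_rng = ref_rng - fb*c/(2*slope)
print(est_rng)   # ≈ 6850 m
```

After mixing, the range offset of 150 m appears as a constant beat frequency, which is exactly what the spectral analysis step recovers.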
In addition, instead of processing the entire range span covered by the pulse, the processing focuses
on a small window around a predefined reference range. Because of the limited range span, the
output data of the stretch processor can be sampled at a lower rate, relaxing the bandwidth
requirement for A/D converters.
The following sections show an example of range estimation using stretch processing.
Simulation Setup
The radar system in this example uses a linear FM waveform with a 3 MHz sweeping bandwidth. The
waveform can be used to achieve a range resolution of 50 m and a maximum unambiguous range of 8
km. The sample rate is set to 6 MHz, i.e., twice the sweeping bandwidth. For more information about
the radar system, see “Waveform Design to Improve Performance of an Existing Radar System” on
page 17-165.
Three targets are located at 2000.66, 6532.63, and 6845.04 meters from the radar, respectively. Ten
pulses are simulated at the receiver. These pulses contain echoes from the targets.
A time frequency plot of the received pulse is shown below. A coherent pulse integration is done
before the plot to improve the signal-to-noise ratio (SNR). In the figure, the return from the first
target can be clearly seen between 14 and 21 ms while the return from the second and third targets
are much weaker, appearing after 45 ms.
helperStretchSignalSpectrogram(pulsint(rx_pulses,'coherent'),fs,...
8,4,'Received Signal');
Stretch Processing
To perform stretch processing, first determine a reference range. In this example, the goal is to
search targets around 6700 m away from the radar, in a 500-meter window. A stretch processor can
be formed using the waveform, the desired reference range and the range span.
refrng = 6700;
rngspan = 500;
prop_speed = physconst('lightspeed');
stretchproc = getStretchProcessor(waveform,refrng,rngspan,prop_speed)
stretchproc =
phased.StretchProcessor with properties:
SampleRate: 5.9958e+06
PulseWidth: 6.6713e-06
PRFSource: 'Property'
PRF: 1.8737e+04
SweepSlope: 4.4938e+11
SweepInterval: 'Positive'
PropagationSpeed: 299792458
ReferenceRange: 6700
RangeSpan: 500
% apply stretch processing to the received pulses, then integrate
y_stretch = stretchproc(rx_pulses);
y = pulsint(y_stretch,'coherent');
The spectrogram of the signal after stretch processing is shown below. Note that the second and third
target echoes no longer appear as a ramp in the plot. Instead, their time-frequency signatures appear
at constant frequencies, around 0.5 and -0.5 MHz. Hence, the signal is deramped. In addition, there
is no return present from the first target. In fact, any signal outside the ranges of interest has been
suppressed. This is because the stretch processor only allows target returns within the range window
to pass. This process is often referred to as range gating in a real system.
helperStretchSignalSpectrogram(y,fs,16,12,'Deramped Signal');
Range Estimation
periodogram(y,[],2048,stretchproc.SampleRate,'centered');
From the figure, it is clear that there are two dominant frequency components in the deramped
signal, which correspond to two targets. The frequencies of these peaks can be used to determine the
true range values of these targets.
[p, f] = periodogram(y,[],2048,stretchproc.SampleRate,'centered');
[~,rngidx] = findpeaks(pow2db(p/max(p)),'MinPeakHeight',-5);
rngfreq = f(rngidx);
re = stretchfreq2rng(rngfreq,...
stretchproc.SweepSlope,stretchproc.ReferenceRange,prop_speed)
re = 2×1
10^3 ×
    6.8514
    6.5174
The estimated ranges are 6518 and 6852 meters, matching the true ranges of 6533 and 6845 meters.
As mentioned in the introduction section, an attractive feature of stretch processing is that it reduces
the bandwidth requirement for successive processing stages. In this example, the range span of
interest is 500 meters. The required bandwidth for the successive processing stages can be computed
as
rngspan_bw = ...
2*rngspan/prop_speed*waveform.SweepBandwidth/waveform.PulseWidth
rngspan_bw = 1.4990e+06
Following the same design rule as in the original system, where twice the bandwidth is used as the
sampling frequency, the new required sampling frequency becomes
fs_required = 2*rngspan_bw
fs_required = 2.9979e+06
dec_factor = round(fs/fs_required)
dec_factor = 2
The resulting decimation factor is 2. This means that after performing stretch processing in the analog
domain, the signal can be sampled at only half of the sampling frequency required when stretch
processing is not used. Thus, the requirement on the A/D converter has been relaxed.
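The same arithmetic, using the values reported by the stretch processor above, can be sketched as:

```python
# Bandwidth needed to cover the range span after deramping.
c = 299792458.0
rngspan = 500.0
sweep_slope = 4.4938e11               # Hz/s, SweepSlope from stretchproc
fs = 5.9958e6                         # original sample rate

rngspan_bw = 2*rngspan/c*sweep_slope  # beat-frequency span for the window
fs_required = 2*rngspan_bw            # same twice-the-bandwidth design rule
dec_factor = round(fs/fs_required)
print(rngspan_bw, dec_factor)         # ≈ 1.499e6 Hz and 2
```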
To verify this benefit in simulation, the next section shows that the same ranges can be estimated
with the signal decimated after stretch processing.
% Decimate
y_stretch = decimator(y_stretch);
y = pulsint(y_stretch,'coherent');
[p, f] = periodogram(y,[],2048,fs_required,'centered');
rng_bin = stretchfreq2rng(f,...
stretchproc.SweepSlope,stretchproc.ReferenceRange,prop_speed);
plot(rng_bin,pow2db(p));
xlabel('Range (m)'); ylabel('Power/frequency (dB/Hz)'); grid on;
title('Periodogram Power Spectral Density Estimate');
[~,rngidx] = findpeaks(pow2db(p/max(p)),'MinPeakHeight',-5);
re = rng_bin(rngidx)
re = 2×1
10^3 ×
    6.8504
    6.5232
The true range values are 6533 and 6845 meters. Without decimation, the range estimates are 6518
and 6852 meters. With decimation, the range estimates are 6523 and 6851 meters. Therefore, the
range estimation yields the same result with roughly only half of the computations compared to the
nondecimated case.
Summary
This example shows how to use stretch processing to estimate the target range when a linear FM
waveform is used. It also shows that stretch processing reduces the bandwidth requirement.
Signal Detection in White Gaussian Noise
Overview
There are many different kinds of detectors available for use in different applications. A few of the
most popular ones are the Bayesian detector, maximum likelihood (ML) detector and Neyman-
Pearson (NP) detector. In radar and sonar applications, NP is the most popular choice since it can
ensure the probability of false alarm (Pfa) to be at a certain level.
In this example, we limit our discussion to the scenario where the signal is deterministic and the
noise is white and Gaussian distributed. Both signal and noise are complex.
The example discusses the following topics and their interrelations: coherent detection, noncoherent
detection, matched filtering and receiver operating characteristic (ROC) curves.
The received signal can be written as

x(t) = s(t) + n(t),

where s(t) is the signal and n(t) is the noise. Without loss of generality, we assume that the signal
power is equal to 1 watt and the noise power is determined accordingly based on the signal to noise
ratio (SNR). For example, for an SNR of 10 dB, the noise power, i.e., the noise variance, will be 0.1 watt.
Matched Filter
A matched filter is often used at the receiver front end to enhance the SNR. From the discrete signal
point of view, matched filter coefficients are simply given by the complex conjugated reversed signal
samples.
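This conjugate-and-reverse construction can be checked with a short Python sketch (the signal s below is an arbitrary illustrative sequence, not the example's waveform):

```python
import numpy as np

# Matched filter taps: complex-conjugated, time-reversed signal samples.
s = np.array([1+1j, 2-1j, 0.5j])
mf = np.conj(s[::-1])            # matched filter coefficients

# Filtering the signal itself: the output peaks at the signal energy.
y = np.convolve(s, mf)
peak = np.max(np.abs(y))
energy = np.sum(np.abs(s)**2)
print(peak, energy)              # both 7.25
```

By the Cauchy-Schwarz inequality no other filter of the same norm can produce a larger peak, which is why the matched filter maximizes output SNR.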
When dealing with complex signals and noise, there are two types of receivers. The first kind is a
coherent receiver, which assumes that both the amplitude and phase of the received signal are
known. This results in a perfect match between the matched filter coefficients and the signal s.
Therefore, the matched filter coefficients can be considered as the conjugate of s. The matched filter
operation can then be modeled as
y = s*x = s*(s + n) = |s|² + s*n .

Note that although the general output y is still a complex quantity, the signal is completely
characterized by |s|², which is a real number and contained in the real part of y. Hence, the detector
following the matched filter in a coherent receiver normally uses only the real part of the received
signal. Such a receiver can normally provide the best performance. However, the coherent receiver is
vulnerable to phase errors. In addition, a coherent receiver also requires additional hardware to
perform the phase detection. For a noncoherent receiver, the received signal is modeled as a copy of
the original signal with a random phase error. With a noncoherent received signal, the detection after
the matched filter is normally based on the power or magnitude of the signal since you need both real
and imaginary parts to completely define the signal.
Detector
The objective function of the NP decision rule is

J = Pd + g(Pfa − a),
i.e., to maximize the probability of the detection, Pd, while limiting the probability of false alarm, Pfa
at a specified level a. The variable g in the equation is the Lagrange multiplier. The NP detector can
be formed as a likelihood ratio test (LRT) as follows:
py(y|H1) / py(y|H0) ≷ Th ,

where hypothesis H1 is chosen when the ratio exceeds the threshold Th and H0 is chosen otherwise.
In this particular NP situation, since the false alarm is caused by the noise alone, the threshold Th is
determined by the noise to ensure the fixed Pfa. The general form of the LRT shown above is often
difficult to evaluate. In real applications, we often use an easy to compute quantity from the signal,
i.e., sufficient statistic, to replace the ratio of two probability density functions. For example, the
sufficient statistic, z, may be as simple as

z = |y| ,

and the test becomes

z ≷ T ,

where H1 is chosen when z > T and H0 otherwise.
T is the threshold to the sufficient statistic z, acting just like the threshold Th to the LRT. Therefore,
the threshold is not only related to the probability distributions, but also depends on the choice of
sufficient statistic.
We will first explore an example of detecting a signal in noise using just one sample.
Assume the signal is a unit power sample and the SNR is 3 dB. Using a 100000-trial Monte-Carlo
simulation, we generate the signal and noise as
x = s + n;
The matched filter in this case is trivial, since the signal itself is a unit sample.
mf = 1;
In this case, the matched filter gain is 1, therefore, there is no SNR gain.
Now we perform the detection and examine the performance of the detector. For a coherent receiver, the
received signal after the matched filter is given by

y = mf'*x;
The sufficient statistic, i.e., the value used to compare to the detection threshold, for a coherent
detector is the real part of the received signal after the matched filter, i.e.,
z = real(y);
Let's assume that we want to fix the Pfa at 1e-3. Given the sufficient statistic, z, the decision rule
becomes
z ≷ T ,

where H1 is chosen when z > T and H0 otherwise. This threshold is related to the Pfa by

Pfa = (1/2)(1 − erf(T/√(NM))) .
In the equation, N is the noise power and M is the matched filter gain. Note that T is the threshold of
the signal after the matched filter and NM represents the noise power after the matched filter, so
T/√(NM) can be considered as the ratio between the signal and noise magnitudes, i.e., it is related to
the signal to noise ratio, SNR. Since SNR is normally referred to as the ratio between the signal and
noise power, considering the units of each quantity in this expression, we can see that

T/√(NM) = √SNR .
Since N and M are fixed once the noise and signal waveform are chosen, there is a correspondence
between T and SNR. Given that T is a threshold on the signal, SNR can be considered as a threshold on
the signal to noise ratio. Therefore, the threshold equation can be rewritten in the form

Pfa = (1/2)(1 − erf(√SNR)) .
The required SNR threshold given a complex, white Gaussian noise for the NP detector can be
calculated using the npwgnthresh function as follows:
Pfa = 1e-3;
snrthreshold = db2pow(npwgnthresh(Pfa, 1,'coherent'));
Note that this threshold, although also in the form of an SNR value, is different from the SNR of the
received signal. The threshold SNR is a calculated value based on the desired detection performance,
in this case the Pfa, while the received signal SNR is a physical characteristic of the signal
determined by the propagation environment, the waveform, the transmit power, and so on.
The true threshold T can then be derived from this SNR threshold as

T = √(N · M · SNR) .
mfgain = mf'*mf;
% To match the equation in the text above
% npower - N
% mfgain - M
% snrthreshold - SNR
threshold = sqrt(npower*mfgain*snrthreshold);
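The same derivation can be sketched in Python by inverting the Pfa expression directly (a sketch mirroring what npwgnthresh computes for the coherent case, not the toolbox implementation itself):

```python
import numpy as np
from scipy.special import erf, erfinv

# Invert Pfa = (1/2)(1 - erf(sqrt(SNR))) for the SNR threshold, then scale
# by the noise power N and matched filter gain M to get the true threshold.
Pfa = 1e-3
snr_thresh = erfinv(1 - 2*Pfa)**2        # linear-scale SNR threshold
npower = 1/10**(3/10)                    # N: noise power for unit signal, 3 dB SNR
mfgain = 1.0                             # M: single-sample matched filter
T = np.sqrt(npower*mfgain*snr_thresh)    # T = sqrt(N*M*SNR)
print(snr_thresh, T)
# sanity check: plugging T back recovers the requested Pfa (≈ 1e-3)
print(0.5*(1 - erf(T/np.sqrt(npower*mfgain))))
```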
The detection is performed by comparing the signal to the threshold. Since the original signal, s, is
present in the received signal, a successful detection occurs when the received signal passes the
threshold, i.e., z > T. The capability of the detector to detect a target is often measured by the Pd. In a
Monte-Carlo simulation, Pd can be calculated as the ratio between the number of times the signal
passes the threshold and the number of total trials.
Pd = sum(z>threshold)/Ntrial
Pd = 0.1390
On the other hand, a false alarm occurs when the detection shows that there is a target but there
actually isn't one, i.e., the received signal passes the threshold when there is only noise present. The
error probability of the detector to detect a target when there isn't one is given by Pfa.
x = n;
y = mf'*x;
z = real(y);
Pfa = sum(z>threshold)/Ntrial
Pfa = 9.0000e-04
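The whole coherent Monte-Carlo experiment can be sketched compactly in Python (an assumed setup mirroring the text: unit signal sample, 3 dB SNR, Pfa fixed at 1e-3; the random seed is arbitrary):

```python
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(2009)
Ntrial = 100_000
snr = 10**(3/10)
npower = 1/snr
sigma = np.sqrt(npower/2)               # noise std per I/Q channel
T = np.sqrt(npower)*erfinv(1 - 2e-3)    # threshold for Pfa = 1e-3, mfgain = 1

# Target present: z = real part of signal plus noise.
n = sigma*(rng.standard_normal(Ntrial) + 1j*rng.standard_normal(Ntrial))
Pd = np.mean(np.real(1.0 + n) > T)

# Target absent: noise only.
n0 = sigma*(rng.standard_normal(Ntrial) + 1j*rng.standard_normal(Ntrial))
Pfa = np.mean(np.real(n0) > T)
print(Pd, Pfa)   # close to 0.14 and 1e-3, as in the text
```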
To see the relation among SNR, Pd, and Pfa in a graph, we can plot the theoretical ROC curve using
the rocsnr function for an SNR value of 3 dB as
rocsnr(snrdb,'SignalType','NonfluctuatingCoherent','MinPfa',1e-4);
It can be seen from the figure that the measured Pd=0.1390 and Pfa=0.0009 obtained above for the
SNR value of 3 dB match a theoretical point on the ROC curve.
A noncoherent receiver does not know the phase of the received signal; therefore, for the target-present
case, the signal x contains a random phase term and is defined as
% simulate the signal
x = s.*exp(1i*2*pi*rand(rstream,1,Ntrial)) + n;
y = mf'*x;
When the noncoherent receiver is used, the quantity used to compare with the threshold is the power
(or magnitude) of the received signal after the matched filter. In this simulation, we choose the
magnitude as the sufficient statistic.
z = abs(y);
Given our choice of the sufficient statistic z, the threshold is related to Pfa by the equation
Pfa = exp(−T²/(NM)) = exp(−SNR) .
The signal to noise ratio threshold SNR for an NP detector can be calculated using npwgnthresh as
follows:
snrthreshold = db2pow(npwgnthresh(Pfa, 1,'noncoherent'));
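A Python sketch of the noncoherent single-sample case (an assumed setup mirroring the text; the seed is arbitrary) reproduces both the threshold and the inferior Pd:

```python
import numpy as np

rng = np.random.default_rng(2009)
Ntrial = 100_000
Pfa = 1e-3
snr_thresh = -np.log(Pfa)               # from Pfa = exp(-SNR); ≈ 6.9078
snr = 10**(3/10)
npower = 1/snr
T = np.sqrt(npower*snr_thresh)          # mfgain = 1

# Target present with unknown phase.
n = np.sqrt(npower/2)*(rng.standard_normal(Ntrial) + 1j*rng.standard_normal(Ntrial))
x = np.exp(1j*2*np.pi*rng.random(Ntrial)) + n
Pd = np.mean(np.abs(x) > T)

# Target absent.
n2 = np.sqrt(npower/2)*(rng.standard_normal(Ntrial) + 1j*rng.standard_normal(Ntrial))
Pfa_hat = np.mean(np.abs(n2) > T)
print(Pd, Pfa_hat)   # near 0.06 and 1e-3, as in the text
```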
threshold = sqrt(npower*mfgain*snrthreshold);
Pd = sum(z>threshold)/Ntrial
Pd = 0.0583
Note that this resulting Pd is inferior to the performance we get from a coherent receiver.
For the target absent case, the received signal contains only noise. We can calculate the Pfa using
Monte-Carlo simulation as
x = n;
y = mf'*x;
z = abs(y);
Pfa = sum(z>threshold)/Ntrial
Pfa = 9.5000e-04
We can see that the performance of the noncoherent receiver detector is inferior to that of the
coherent receiver.
Summary
This example shows how to simulate and perform different detection techniques using MATLAB®.
The example illustrates the relationship among several frequently encountered variables in signal
detection, namely, probability of detection (Pd), probability of false alarm (Pfa) and signal to noise
ratio (SNR). In particular, the example calculates the performance of the detector using Monte-Carlo
simulations and verifies the results of the metrics with the receiver operating characteristic (ROC)
curves.
There are two SNR values we encounter in detecting a signal. The first one is the SNR of a single
data sample. This is the SNR value that appears in an ROC curve plot. A point on the ROC curve gives
the required single-sample SNR necessary to achieve the corresponding Pd and Pfa. However, it is NOT the SNR
threshold used for detection. Using the Neyman-Pearson decision rule, the SNR threshold, the second
SNR value we see in the detection, is determined by the noise distribution and the desired Pfa level.
Therefore, such an SNR threshold indeed corresponds to the Pfa axis in a ROC curve. If we fix the
SNR of a single sample, as depicted in the above ROC curve plots, each point on the curve will
correspond to a Pfa value, which in turn translates to an SNR threshold value. Using this particular
SNR threshold to perform the detection will then result in the corresponding Pd.
Note that an SNR threshold may not be the threshold used directly in the actual detector. The actual
detector normally uses an easy to compute sufficient statistic quantity to perform the detection. Thus,
the true threshold has to be derived from the aforementioned SNR threshold accordingly so that it is
consistent with the choice of sufficient statistics.
This example performs the detection using only one received signal sample. Hence, the resulting Pd
is fairly low and there is no processing gain achieved by the matched filter. To improve Pd and to take
advantage of the processing gain of the matched filter, we can use multiple samples, or even multiple
pulses, of the received signal. For more information about how to detect a signal using multiple
samples or pulses, please refer to the example “Signal Detection Using Multiple Samples” on page
17-314.
Signal Detection Using Multiple Samples
Introduction
The example, “Signal Detection in White Gaussian Noise” on page 17-307, introduces a basic signal
detection problem. In that example, only one sample of the received signal is used to perform the
detection. This example involves more samples in the detection process to improve the detection
performance.
As in the previous example, assume that the signal power is 1 and the single sample signal to noise
ratio (SNR) is 3 dB. The number of Monte Carlo trials is 100000. The desired probability of false
alarm (Pfa) level is 0.001.
Ntrial = 1e5; % number of Monte Carlo trials
Pfa = 1e-3; % Pfa
snrdb = 3; % SNR in dB
snr = db2pow(snrdb); % SNR in linear scale
npower = 1/snr; % noise power
namp = sqrt(npower/2); % noise amplitude in each channel
As discussed in the previous example, the threshold is determined based on Pfa. Therefore, as long as
the threshold is chosen, the Pfa is fixed, and vice versa. Meanwhile, one certainly prefers to have a
higher probability of detection (Pd). One way to achieve that is to use multiple samples to perform the
detection. For example, in the previous case, the SNR at a single sample is 3 dB. If one can use
multiple samples, then the matched filter can produce an extra gain in SNR and thus improve the
performance. In practice, one can use a longer waveform to achieve this gain. In the case of discrete
time signal processing, multiple samples can also be obtained by increasing the sampling frequency.
For a coherent receiver, the signal, noise and threshold are given by
% fix the random number generator
rstream = RandStream.create('mt19937ar','seed',2009);
s = wf*ones(1,Ntrial);
n = namp*(randn(rstream,Nsamp,Ntrial)+1i*randn(rstream,Nsamp,Ntrial));
snrthreshold = db2pow(npwgnthresh(Pfa, 1,'coherent'));
mfgain = mf'*mf;
threshold = sqrt(npower*mfgain*snrthreshold); % Final threshold T
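A Python sketch of the multi-sample coherent case (Nsamp = 2 is an assumption consistent with the 6 dB figure below; the seed is arbitrary) shows the improved Pd at the same Pfa:

```python
import numpy as np
from scipy.special import erfinv

rng = np.random.default_rng(2009)
Ntrial, Nsamp = 100_000, 2
snr = 10**(3/10)
npower = 1/snr
mf = np.ones(Nsamp)                     # matched to a constant unit waveform
mfgain = mf @ mf                        # = Nsamp

snr_thresh = erfinv(1 - 2e-3)**2        # single-sample SNR threshold, unchanged
T = np.sqrt(npower*mfgain*snr_thresh)   # final threshold grows with mfgain

n = np.sqrt(npower/2)*(rng.standard_normal((Nsamp, Ntrial))
                       + 1j*rng.standard_normal((Nsamp, Ntrial)))
z = np.real(mf @ (np.ones((Nsamp, Ntrial)) + n))
Pd = np.mean(z > T)
print(Pd)   # roughly 0.39, up from ~0.14 with one sample
```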
Pd = 0.3947
Pfa = 0.0011
snrdb_new = 6.0103
One can see from the figure that the point given by Pfa and Pd falls right on the curve. Therefore, the
SNR corresponding to the ROC curve is the SNR of a single sample at the output of the matched
filter. This shows that, although one can use multiple samples to perform the detection, the single
sample threshold in SNR (snrthreshold in the program) does not change compared to the single
sample case. There is no change because the threshold value is essentially determined by Pfa.
However, the final threshold, T, does change because of the extra matched filter gain. The resulting
Pfa remains the same compared to the case where only one sample is used to do the detection.
However, the extra matched filter gain improved the Pd from 0.1390 to 0.3947.
One can run similar cases for the noncoherent receiver to verify the relation among Pd, Pfa and SNR.
Radar and sonar applications frequently use pulse integration to further improve the detection
performance. If the receiver is coherent, the pulse integration is just adding real parts of the matched
filtered pulses. Thus, the SNR improvement is linear when one uses the coherent receiver. If one
integrates 10 pulses, then the SNR is improved 10 times. For a noncoherent receiver, the relationship
is not that simple. The following example shows the use of pulse integration with a noncoherent
receiver.
Assume an integration of 2 pulses. Then, construct the received signal and apply the matched filter to
it.
PulseIntNum = 2;
Ntotal = PulseIntNum*Ntrial;
s = wf*exp(1i*2*pi*rand(rstream,1,Ntotal)); % noncoherent
n = sqrt(npower/2)*...
(randn(rstream,Nsamp,Ntotal)+1i*randn(rstream,Nsamp,Ntotal));
One can integrate the pulses using either of two possible approaches. Both approaches are related to
the approximation of the modified Bessel function of the first kind, which is encountered in modeling
the likelihood ratio test (LRT) of the noncoherent detection process using multiple pulses. The first
approach is to sum abs(y)^2 across the pulses, which is often referred to as a square law detector.
The second approach is to sum together abs(y) from all pulses, which is often referred to as a linear
detector. For small SNR, the square law detector is preferred, while for large SNR, the linear detector
is advantageous. However, the difference in performance between the two kinds of detectors is
normally within 0.2 dB.
For this example, choose the square law detector, which is more popular than the linear detector. To
perform square law detection, one can use the pulsint function. The function treats each column of
the input data matrix as an individual pulse. The pulsint function performs the operation of
y = √( |x1|² + ⋯ + |xn|² ) .
z = pulsint(y,'noncoherent');
The relation between the threshold T and the Pfa, given this new sufficient statistic, z, is given by

Pfa = 1 − I( (T²/(NM))/√L , L−1 ) = 1 − I( SNR/√L , L−1 ) ,
where
I(u, K) = (1/K!) ∫₀^(u√(K+1)) e^(−τ) τ^K dτ
is Pearson's form of the incomplete gamma function and L is the number of pulses used for pulse
integration. Using a square law detector, one can calculate the SNR threshold involving the pulse
integration using the npwgnthresh function as before.
snrthreshold = db2pow(npwgnthresh(Pfa,PulseIntNum,'noncoherent'));
mfgain = mf'*mf;
threshold = sqrt(npower*mfgain*snrthreshold);
Pd = sum(z>threshold)/Ntrial
Pd = 0.5343
Then, calculate the Pfa when the received signal is noise only using the noncoherent detector with 2
pulses integrated.
x = n;
y = mf'*x;
y = reshape(y,Ntrial,PulseIntNum);
z = pulsint(y,'noncoherent');
Pfa = sum(z>threshold)/Ntrial
Pfa = 0.0011
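A Python sketch of the two-pulse square-law experiment (an assumed setup mirroring the text; the threshold is obtained by inverting the incomplete gamma expression with SciPy's gammainccinv, since under H0 the statistic z²/(NM) is Gamma(L, 1) distributed):

```python
import numpy as np
from scipy.special import gammainccinv

rng = np.random.default_rng(2009)
Ntrial, L = 100_000, 2
snr = 10**(3/10)
npower = 1/snr
mfgain = 2                              # matched filter gain for Nsamp = 2
NM = npower*mfgain                      # noise power after the matched filter
T = np.sqrt(NM*gammainccinv(L, 1e-3))   # invert Pfa = Q(L, T^2/(NM))

# Matched-filter outputs: random phase per pulse, signal magnitude = mfgain.
sig = mfgain*np.exp(1j*2*np.pi*rng.random((L, Ntrial)))
n = np.sqrt(NM/2)*(rng.standard_normal((L, Ntrial))
                   + 1j*rng.standard_normal((L, Ntrial)))

z = np.sqrt(np.sum(np.abs(sig + n)**2, axis=0))   # pulsint-style statistic
Pd = np.mean(z > T)
Pfa_hat = np.mean(np.sqrt(np.sum(np.abs(n)**2, axis=0)) > T)
print(Pd, Pfa_hat)   # near 0.53 and 1e-3, as in the text
```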
To plot the ROC curve with pulse integration, one has to specify the number of pulses used in
integration in rocsnr function
rocsnr(snrdb_new,'SignalType','NonfluctuatingNoncoherent',...
'MinPfa',1e-4,'NumPulses',PulseIntNum);
Again, the point given by Pfa and Pd falls on the curve. Thus, the SNR in the ROC curve specifies the
SNR of a single sample used for the detection from one pulse.
Such an SNR value can also be obtained from Pd and Pfa using Albersheim's equation. The result
obtained from Albersheim's equation is just an approximation, but it is fairly accurate over the
frequently used ranges of Pfa, Pd, and number of integrated pulses.
Note: Albersheim's equation involves several assumptions: the target is nonfluctuating (Swerling
case 0 or 5), the noise is complex white Gaussian, the receiver is noncoherent, and a linear detector
is used (a square law detector for a nonfluctuating target is also acceptable).
To calculate the necessary single sample SNR to achieve a certain Pd and Pfa, use the albersheim
function as
snr_required = albersheim(Pd,Pfa,PulseIntNum)
snr_required = 6.0009
This calculated required SNR value matches the new SNR value of 6 dB.
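Albersheim's equation is simple enough to evaluate directly. This Python sketch implements its standard empirical form; feeding in the simulated Pd = 0.5343 and Pfa = 0.0011 with 2 integrated pulses reproduces the approximately 6 dB requirement above.

```python
import math

def albersheim_snr_db(pd, pfa, n_pulses):
    """Approximate single-pulse SNR (dB) required by a noncoherent linear
    detector, per Albersheim's empirical equation."""
    a = math.log(0.62 / pfa)
    b = math.log(pd / (1 - pd))
    term = math.log10(a + 0.12 * a * b + 1.7 * b)
    return -5 * math.log10(n_pulses) + (6.2 + 4.54 / math.sqrt(n_pulses + 0.44)) * term

snr_db = albersheim_snr_db(0.5343, 0.0011, 2)
print(f"{snr_db:.4f} dB")  # approximately 6.00 dB, matching the example
```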
To see the improvement achieved in Pd by pulse integration, plot the ROC curve when there is no
pulse integration used.
rocsnr(snrdb_new,'SignalType','NonfluctuatingNoncoherent',...
'MinPfa',1e-4,'NumPulses',1);
From the figure, one can see that without pulse integration, Pd can only be around 0.24 with Pfa at
1e-3. With 2-pulse integration, as illustrated in the above Monte Carlo simulation, for the same Pfa,
the Pd is around 0.53.
Summary
This example showed how using multiple signal samples in detection can improve the probability of detection while maintaining a desired probability of false alarm level. In particular, it showed how either a longer waveform or pulse integration can improve Pd. The example also illustrated the relation among Pd, Pfa, the ROC curve, and Albersheim's equation. The performance is calculated using Monte Carlo simulations.
Modeling the Propagation of RF Signals

Introduction
To properly evaluate the performance of radar and wireless communication systems, it is critical to
understand the propagation environment. Using radar as an example, the received signal power of a
monostatic radar is given by the radar range equation:

Pr = ( Pt G^2 λ^2 σ ) / ( (4π)^3 R^4 L )

where Pt is the transmitted power, G is the antenna gain, σ is the target radar cross section (RCS), λ is the wavelength, and R is the propagation distance. All propagation losses other than free space path loss are included in the L term. The rest of the example shows how to estimate this L term in different scenarios.
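As a quick numerical check of the radar equation's inverse fourth-power range dependence, the following Python sketch (with arbitrary illustrative parameter values) confirms that doubling the range reduces the received power by about 12 dB:

```python
import math

def radar_rx_power(pt, g, sigma, lam, r, loss=1.0):
    """Monostatic radar range equation: Pr = Pt*G^2*lambda^2*sigma / ((4*pi)^3 * R^4 * L)."""
    return pt * g**2 * lam**2 * sigma / ((4 * math.pi)**3 * r**4 * loss)

# Illustrative values: 1 kW transmit power, 30 dB gain, 1 m^2 RCS, 3 cm wavelength
p1 = radar_rx_power(1e3, 1e3, 1.0, 0.03, 1e3)   # target at 1 km
p2 = radar_rx_power(1e3, 1e3, 1.0, 0.03, 2e3)   # target at 2 km
print(10 * math.log10(p1 / p2))  # 12.04 dB: doubling the range costs ~12 dB
```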
First, the free space path loss is computed as a function of propagation distance and frequency. In free space, RF signals propagate at the constant speed of light in all directions. At a far enough distance, the radiating source looks like a point in space and the wavefront forms a sphere whose radius is equal to R. The power density at the wavefront is inversely proportional to R^2:

S = Pt / (4π R^2)

where Pt is the transmitted signal power. For a monostatic radar where the signal has to travel both directions (from the source to the target and back), the dependency is actually inversely proportional to R^4, as shown previously in the radar equation. The loss related to this propagation mechanism is referred to as free space path loss, sometimes also called the spreading loss. Quantitatively, free space path loss is also a function of frequency, given by [5]

Lfs = 20·log10( 4π R f / c )   (dB)
As a convention, propagation losses are often expressed in dB. This convention makes it much easier
to derive the two-way free space path loss by simply doubling the one-way free space loss.
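The dB convention can be checked numerically. This Python sketch evaluates the one-way loss 20·log10(4πRf/c) and doubles it for the monostatic round trip:

```python
import math

C = 299792458.0  # speed of light in m/s

def fspl_db(r_m, f_hz):
    """One-way free space path loss in dB."""
    return 20 * math.log10(4 * math.pi * r_m * f_hz / C)

one_way = fspl_db(1e3, 10e9)   # 1 km range at 10 GHz
two_way = 2 * one_way          # monostatic round trip: simply double the dB value
print(round(one_way, 2), round(two_way, 2))  # about 112.45 dB and 224.89 dB
```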
The following figure plots how the free space path loss changes over frequencies from 10 GHz to 1000 GHz for several ranges.
c = physconst('lightspeed');
R0 = [100 1e3 10e3];
freq = (10:1000).'*1e9;
apathloss = fspl(R0,c./freq);
loglog(freq/1e9,apathloss);
grid on; ylim([90 200])
The figure illustrates that the propagation loss increases with range and frequency.
In reality, signals don't travel in a vacuum, so free space path loss describes only part of the signal attenuation. Signals interact with particles in the air and lose energy along the propagation path. The loss varies with factors such as pressure, temperature, and water density.
Rain can be a major limiting factor for radar systems, especially when operating above 5 GHz. In the ITU model in [2], rain is characterized by the rain rate (in mm/h). According to [6], the rain rate can range from less than 0.25 mm/h for very light rain to over 50 mm/h for extreme rain. In addition, because of the raindrop's shape and its size relative to the RF signal wavelength, the propagation loss due to rain is also a function of signal polarization.
The following plot shows how the loss due to rain varies with frequency. The plot assumes the polarization to be horizontal, so the tilt angle is 0. In addition, assume that the signal propagates parallel to the ground, so the elevation angle is 0. In general, horizontal polarization represents the worst case for propagation loss due to rain.
R0 = 1e3;               % 1 km range
rainrate = [1 4 16 50]; % rain rate in mm/h
el = 0;                 % 0 degree elevation
tau = 0;                % polarization tilt angle (horizontal polarization)
for m = 1:numel(rainrate)
rainloss(:,m) = rainpl(R0,freq,rainrate(m),el,tau)';
end
loglog(freq/1e9,rainloss); grid on;
legend('Light rain','Moderate rain','Heavy rain','Extreme rain', ...
'Location','SouthEast');
xlabel('Frequency (GHz)');
ylabel('Rain Attenuation (dB/km)')
title('Rain Attenuation for Horizontal Polarization');
Similar to rainfall, snow can also have a significant impact on the propagation of RF signals. However,
there is no specific model to compute the propagation loss due to snow. The common practice is to
treat it as rainfall and compute the propagation loss based on the rain model, even though this
approach tends to overestimate the loss a bit.
Fog and clouds are also formed from water droplets, although much smaller than rain drops. The size of fog droplets is generally less than 0.01 cm. Fog is often characterized by the liquid water density. A medium fog, with a visibility of roughly 300 meters, has a liquid water density of 0.05 g/m^3. For heavy fog, where the visibility drops to 50 meters, the liquid water density is about 0.5 g/m^3. The atmosphere temperature (in Celsius) is also present in the ITU model for propagation loss due to fog and cloud [3].
The next plot shows how the propagation loss due to fog varies with frequency.
Even when there is no fog or rain, the atmosphere is full of gases that still affect signal propagation. The ITU model [4] describes atmospheric gas attenuation as a function of both dry air pressure (which accounts for absorption by oxygen and other dry-air constituents), measured in hPa, and water vapour density, measured in g/m^3.
The plot below shows how the propagation loss due to atmospheric gases varies with frequency. Assume a dry air pressure of 1013 hPa at 15 degrees Celsius, and a water vapour density of 7.5 g/m^3.
xlabel('Frequency (GHz)');
ylabel('Atmospheric Gas Attenuation (dB/km)')
title('Atmospheric Gas Attenuation');
The plot suggests that there is a strong absorption due to atmospheric gases at around 60 GHz.
The next figure compares all weather related losses for a 77 GHz automotive radar. The horizontal
axis is the target distance from the radar. The maximum distance of interest is about 200 meters.
R = (1:200).';
fc77 = 77e9;
T = 15;      % temperature in degrees Celsius
P = 101300;  % dry air pressure in Pa
ROU = 7.5;   % water vapour density in g/m^3
apathloss = fspl(R,c/fc77);
agasloss = gaspl(R,fc77,T,P,ROU);
grid on;
xlabel('Propagation Distance (m)');
ylabel('Path Loss (dB)');
legend('Free space','Rain','Fog','Gas','Location','Best')
title('Path Loss for 77 GHz Radar');
The plot suggests that for a 77 GHz automotive radar, free space path loss is the dominant loss. Losses from fog and atmospheric gases are negligible, accounting for less than 0.5 dB. The loss from rain can get close to 3 dB at 180 m.
The functions mentioned above for computing propagation losses are useful for establishing link budgets. To simulate the propagation of arbitrary signals, we also need to apply range-dependent time delays, gains, and phase shifts.

First, define the transmitted signal. A rectangular waveform is used in this case.
waveform = phased.RectangularWaveform;
wav = waveform();
Assume the radar is at the origin and the target is at a 5 km range, in the direction of 45 degrees azimuth and 10 degrees elevation. In addition, assume that the propagation is along the line of sight (LOS), with a heavy rain rate of 16 mm/h and no fog.

fc = 24e9; % operating frequency in Hz
rr = 16;   % rain rate in mm/h
Rt = 5e3;
az = 45;
el = 10;
pos_tx = [0;0;0];
pos_rx = [Rt*cosd(el)*cosd(az);Rt*cosd(el)*sind(az);Rt*sind(el)];
vel_tx = [0;0;0];
vel_rx = [0;0;0];
loschannel = phased.LOSChannel(...
'PropagationSpeed',c,...
'OperatingFrequency',fc,...
'SpecifyAtmosphere',true,...
'Temperature',T,...
'DryAirPressure',P,...
'WaterVapourDensity',ROU,...
'LiquidWaterDensity',0,... % No fog
'RainRate',rr,...
'TwoWayPropagation', true)
loschannel =
PropagationSpeed: 299792458
OperatingFrequency: 2.4000e+10
SpecifyAtmosphere: true
Temperature: 15
DryAirPressure: 101300
WaterVapourDensity: 7.5000
LiquidWaterDensity: 0
RainRate: 16
TwoWayPropagation: true
SampleRate: 1000000
MaximumDistanceSource: 'Auto'
The total loss through this channel can be obtained by propagating the signal and comparing the power of the output to the power of the input:

y = loschannel(wav,pos_tx,pos_rx,vel_tx,vel_rx);
L_total = pow2db(bandpower(wav))-pow2db(bandpower(y))

L_total =

  289.3914
To verify the power loss obtained from the simulation, compare it with the result from the analysis
below and make sure they match.
Lfs = 2*fspl(Rt,c/fc);
Lr = 2*rainpl(Rt,fc,rr,el,tau);
Lg = 2*gaspl(Rt,fc,T,P,ROU);
L_analysis = Lfs+Lr+Lg
L_analysis =
289.3514
Multipath Propagation
Signals may not always propagate along the line of sight. Instead, some signals can arrive at the
destination via different paths through reflections and may add up either constructively or
destructively. This multipath effect can cause significant fluctuations in the received signal.
Ground reflection is a common phenomenon for many radar or wireless communication systems. For
example, when a base station sends a signal to a mobile unit, the signal not only propagates directly
to the mobile unit but is also reflected from the ground.
Assuming an operating frequency of 1900 MHz, as used in LTE, such a channel can be modeled as
fc = 1900e6;
tworaychannel = phased.TwoRayChannel('PropagationSpeed',c,...
'OperatingFrequency',fc);
Assume the mobile unit is 1.6 meters above the ground and the base station is 100 meters above the ground at a distance of 500 meters. Simulate the signal received by the mobile unit.
pos_base = [0;0;100];
pos_mobile = [500;0;1.6];
vel_base = [0;0;0];
vel_mobile = [0;0;0];
y2ray = tworaychannel(wav,pos_base,pos_mobile,vel_base,vel_mobile);
L_2ray = pow2db(bandpower(wav))-pow2db(bandpower(y2ray))
L_2ray =
109.1524
L_ref = fspl(norm(pos_mobile-pos_base),c/fc)
L_ref =
92.1673
The result suggests that in this configuration, the channel introduces an extra 17 dB loss to the
received signal compared to the free space case. Now assume the mobile user is a bit taller and holds
the mobile unit at 1.8 meters above the ground. Repeating the simulation above suggests that this
time the ground reflection actually provides a 6 dB gain! Although free space path loss is essentially
the same in the two scenarios, a 20 cm move caused a 23 dB fluctuation in signal power.
pos_mobile = [500;0;1.8];
y2ray = tworaychannel(wav,pos_base,pos_mobile,vel_base,vel_mobile);
L_2ray = pow2db(bandpower(wav))-pow2db(bandpower(y2ray))
L_ref = fspl(norm(pos_mobile-pos_base),c/fc)
L_2ray =
86.2165
L_ref =
92.1666
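The fade and the gain can be reproduced with a simple image-source model. This Python sketch is an idealized two-ray sum with a ground reflection coefficient of -1 and no antenna patterns, so it only approximates the TwoRayChannel results; it shows the same deep fade at 1.6 m and gain at 1.8 m.

```python
import math, cmath

C = 299792458.0

def two_ray_extra_loss_db(h_tx, h_rx, d, f, gamma=-1.0):
    """Extra loss (dB) of an idealized two-ray channel relative to free space.
    Direct ray plus a ground bounce with reflection coefficient gamma;
    antenna patterns and divergence are ignored (illustrative only)."""
    lam = C / f
    d_direct = math.hypot(d, h_tx - h_rx)
    d_bounce = math.hypot(d, h_tx + h_rx)   # image-source path length
    e = (cmath.exp(-2j*math.pi*d_direct/lam)/d_direct
         + gamma*cmath.exp(-2j*math.pi*d_bounce/lam)/d_bounce)
    e_fs = 1.0/d_direct                      # free-space field magnitude
    return -20*math.log10(abs(e)/e_fs)

print(round(two_ray_extra_loss_db(100, 1.6, 500, 1900e6), 1))  # about 17 dB extra loss
print(round(two_ray_extra_loss_db(100, 1.8, 500, 1900e6), 1))  # about -6 dB, i.e. a gain
```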
Increasing a system's bandwidth increases the capacity of its channel. This enables higher data rates
in communication systems and finer range resolutions for radar systems. The increased bandwidth
can also improve robustness to multipath fading for both systems.
Typically, wideband systems operate with a bandwidth of greater than 5% of their center frequency.
In contrast, narrowband systems operate with a bandwidth of 1% or less of the system's center
frequency.
The narrowband channel in the preceding section was shown to be very sensitive to multipath fading.
Slight changes in the mobile unit's height resulted in considerable signal losses. The channel's fading
characteristics can be plotted by varying the mobile unit's height across a span of operational heights
for this wireless communication system. A span of heights from 10 cm to 3 m is chosen to cover a likely range for mobile unit usage.

% Simulate the signal fading at mobile unit for heights from 10 cm to 3 m
hMobile = linspace(0.1,3);
pos_mobile = repmat([500;0;1.6],[1 numel(hMobile)]);
pos_mobile(3,:) = hMobile;
vel_mobile = repmat([0;0;0],[1 numel(hMobile)]);
release(tworaychannel);
y2ray = tworaychannel(repmat(wav,[1 numel(hMobile)]),...
pos_base,pos_mobile,vel_base,vel_mobile);
The signal loss observed at the mobile unit for the narrowband system can now be plotted.
L2ray = pow2db(bandpower(wav))-pow2db(bandpower(y2ray));
plot(hMobile,L2ray);
xlabel('Mobile Unit''s Height (m)');
ylabel('Channel Loss (dB)');
title('Multipath Fading Observed at Mobile Unit');
grid on;
The sensitivity of the channel loss to the mobile unit's height for this narrowband system is clear.
Deep signal fades occur at heights that are likely to be occupied by the system's users.
Increasing the channel's bandwidth can improve the communication link's robustness to these
multipath fades. To do this, a wideband waveform is defined with a bandwidth of 10% of the link's
center frequency.
bw = 0.10*fc;
pulse_width = 1/bw;
fs = 2*bw;
waveform = phased.RectangularWaveform('SampleRate',fs,...
'PulseWidth',pulse_width);
wav = waveform();
A wideband two-ray channel model is also required to simulate the multipath reflections of this
wideband signal off of the ground between the base station and the mobile unit and to compute the
corresponding channel loss.
widebandTwoRayChannel = ...
phased.WidebandTwoRayChannel('PropagationSpeed',c,...
'OperatingFrequency',fc,'SampleRate',fs);
The received signal at the mobile unit for various operational heights can now be simulated for this
wideband system.
y2ray_wb = widebandTwoRayChannel(repmat(wav,[1 numel(hMobile)]),...
pos_base,pos_mobile,vel_base,vel_mobile);
L2ray_wb = pow2db(bandpower(wav))-pow2db(bandpower(y2ray_wb));
hold on;
plot(hMobile,L2ray_wb);
hold off;
legend('Narrowband','Wideband');
As expected, the wideband channel provides much better performance across a wide range of heights
for the mobile unit. In fact, as the height of the mobile unit increases, the impact of multipath fading
almost completely disappears. This is because the difference in propagation delay between the direct
and bounce path signals is increasing, reducing the amount of coherence between the two signals
when received at the mobile unit.
Conclusion
This example provides a brief overview of RF propagation losses due to atmospheric and weather effects. It also introduces multipath signal fluctuations due to ground reflections, and highlights the functions and objects used to calculate attenuation losses and to simulate range-dependent time delays and Doppler shifts.
References
Modeling Target Radar Cross Section
Introduction
A radar system relies on target reflection or scattering to detect and identify targets. The more
strongly a target reflects, the greater the returned echo at the radar receiver, resulting in a higher
signal-to-noise ratio (SNR) and likelier detection. In radar systems, the amount of energy reflected
from a target is determined by the radar cross section (RCS), defined in the far field as

σ = 4π R^2 |Es|^2 / |Ei|^2

where σ represents the RCS, R is the distance between the radar and the target, Es is the field strength of the signal reflected from the target, and Ei is the field strength of the signal incident on the target. In general, targets scatter energy in all directions, and the RCS is a function of the incident angle, the scattering angle, and the signal frequency. RCS depends on the shape of the target and the materials from which it is constructed. Common units used for RCS include square meters and dBsm.
This example focuses on narrowband monostatic radar systems, in which the transmitter and receiver are co-located. In this case the incident and scattered angles are equal, and the RCS is a function only of the incident angle. This is the backscattered case. For a narrowband radar, the signal bandwidth is small compared to the operating frequency, so the RCS can be considered constant over the band.
The simplest target model is an isotropic scatterer. An example of an isotropic scatterer is a metallic
sphere of uniform density. In this case, the reflected energy is independent of the incident angle. An
isotropic scatterer can often serve as a first order approximation of a more complex point target that
is distant from the radar. For example, a pedestrian can be approximated by an isotropic scatterer
with a 1 square meter RCS.
c = 3e8;
fc = 3e8;
pedestrian = phased.RadarTarget('MeanRCS',1,'PropagationSpeed',c,...
'OperatingFrequency',fc)
pedestrian =
EnablePolarization: false
MeanRCSSource: 'Property'
MeanRCS: 1
Model: 'Nonfluctuating'
PropagationSpeed: 300000000
OperatingFrequency: 300000000
where c is the propagation speed and fc is the operating frequency of the radar system. The
scattered signal from a unit input signal can then be computed as
x = 1;
ped_echo = pedestrian(x)
ped_echo =
3.5449
where x is the incident signal. The relation between the incident and the reflected signal can be expressed as y = sqrt(G)·x, where G = 4πσ/λ^2 represents the dimensionless gain that results from the target reflection and λ is the wavelength corresponding to the system's operating frequency.
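For the isotropic pedestrian target above (σ = 1 m² and λ = 1 m, since c = fc = 3e8), this relation can be checked directly in Python:

```python
import math

sigma = 1.0            # RCS in m^2
lam = 1.0              # wavelength in m (c = fc = 3e8 in the example)
g = 4 * math.pi * sigma / lam**2   # dimensionless reflection gain
print(round(math.sqrt(g), 4))      # 3.5449, matching ped_echo above
```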
For targets with more complex shapes, reflections can no longer be considered the same across all
directions. The RCS varies with the incident angles (also known as aspect angles). Aspect-dependent
RCS patterns can be measured or modeled just as you would antenna radiation patterns. The result of
such measurements or models is a table of RCS values as a function of azimuth and elevation angles
in the target's local coordinate system.
The example below first computes the RCS pattern of a cylindrical target, with a radius of 1 meter
and a height of 10 meters, as a function of azimuth and elevation angles.
[cylrcs,az,el] = rcscylinder(1,1,10,c,fc);
Because the cylinder is symmetric around the z axis, there is no azimuth-angle dependency. RCS
values vary only with elevation angle.
helperTargetRCSPatternPlot(az,el,cylrcs);
title('RCS Pattern of Cylinder');
plot(el,pow2db(cylrcs));
grid; axis tight; ylim([-30 30]);
xlabel('Elevation Angles (degrees)');
ylabel('RCS (dBsm)');
title('RCS Pattern for Cylinder');
Use the computed pattern to define the backscattering target:

cylindricalTarget = phased.BackscatterRadarTarget('PropagationSpeed',c,...
    'OperatingFrequency',fc,'AzimuthAngles',az,'ElevationAngles',el,...
    'RCSPattern',cylrcs)

cylindricalTarget =
EnablePolarization: false
AzimuthAngles: [1x361 double]
ElevationAngles: [1x181 double]
RCSPattern: [181x361 double]
Model: 'Nonfluctuating'
PropagationSpeed: 300000000
OperatingFrequency: 300000000
Finally, generate the target reflection. Assume three equal signals are reflected from the target at
three different aspect angles. The first two angles have the same elevation angle but with different
azimuth angles. The last has a different elevation angle from the first two.
x = [1 1 1]; % 3 unit signals
ang = [0 30 30;0 0 30]; % 3 directions
cyl_echo = cylindricalTarget(x,ang)
cyl_echo =
One can verify that there is no azimuth angle dependence because the first two outputs are the same.
Few target shapes have analytically derived RCS patterns. For more complicated shapes and materials, computational electromagnetics approaches, such as the method of moments (MoM) or the finite element method (FEM), can be used to accurately predict the RCS pattern.
A more detailed discussion of these techniques is available in [1]. You can use the output of these
computations as input to the phased.BackscatterRadarTarget System object™ as was done in
the cylinder example before.
Although computational electromagnetic approaches can provide accurate RCS predictions, they
often require a significant amount of computation and are not suitable for real-time simulations. An
alternative approach for describing a complex target is to model it as a collection of simple scatterers. The RCS pattern of the complex target can then be derived from the RCS patterns of the simple scatterers as [1]

σ = | Σ_p sqrt(σ_p) e^(jφ_p) |^2

where σ is the RCS of the target, σ_p is the RCS of the pth scatterer, and φ_p is the relative phase of the pth scatterer. A multi-scatterer target behaves much like an antenna array.
The next section shows how to model a target consisting of four scatterers. The scatterers are located
at the four vertices of a square. Each scatterer is a cylindrical point target as derived in the previous
section. Without loss of generality, the square is placed in the xy-plane. The side length of the square
is 0.5 meter.
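Before running the MATLAB code, the coherent-sum model can be sketched in Python. The vertex coordinates (±0.25 m) are an assumption consistent with the stated 0.5 m side length, and unit scatterer RCS values are used for simplicity; at 0 degrees azimuth and elevation the four unit scatterers add coherently to an RCS of 16.

```python
import numpy as np

lam = 1.0                      # wavelength (fc = 3e8 with c = 3e8)
k = 2 * np.pi / lam
# Assumed vertex positions of the 0.5 m square in the xy-plane (meters)
scatpos = 0.25 * np.array([[ 1,  1, -1, -1],
                           [ 1, -1,  1, -1],
                           [ 0,  0,  0,  0]], dtype=float)
sigma_p = np.ones(4)           # unit RCS per scatterer

def total_rcs(az_deg, el_deg=0.0):
    """RCS of the composite target: sigma = |sum_p sqrt(sigma_p) exp(j*phi_p)|^2,
    with round-trip (factor 2) phase for a monostatic geometry."""
    az, el = np.radians(az_deg), np.radians(el_deg)
    u = np.array([np.cos(el)*np.cos(az), np.cos(el)*np.sin(az), np.sin(el)])
    phi = 2 * k * (u @ scatpos)          # two-way phase at each scatterer
    return abs(np.sum(np.sqrt(sigma_p) * np.exp(1j*phi)))**2

print(round(total_rcs(0.0), 4))  # 16.0: four unit scatterers adding coherently
```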
If the target is in the far field of the transmitter, the incident angle for each component scatterer is
the same. Then, the total RCS pattern can be computed as
% Assumed scatterer positions: vertices of the 0.5 m square in the xy-plane
scatpos = 0.25*[1 1 -1 -1;1 -1 1 -1;0 0 0 0];
naz = numel(az);
nel = numel(el);
extrcs = zeros(nel,naz);
for m = 1:nel
sv = steervec(scatpos,[az;el(m)*ones(1,naz)]);
% sv is squared due to round trip in a monostatic scenario
extrcs(m,:) = abs(sqrt(cylrcs(m,:)).*sum(sv.^2)).^2;
end
helperTargetRCSPatternPlot(az,el,extrcs);
title('RCS Pattern of Extended Target with 4 Scatterers');
extendedTarget = phased.BackscatterRadarTarget('PropagationSpeed',c,...
'OperatingFrequency',fc,'AzimuthAngles',az,'ElevationAngles',el,...
'RCSPattern',extrcs);
ext_echo = extendedTarget(x,ang)
ext_echo =
Wideband radar systems are typically defined as having a bandwidth greater than 5% of their center frequency. In addition to improved range resolution, wideband systems also offer improved target detection. One way in which wideband systems improve detection performance is by filling in fades in a target's RCS pattern. This can be demonstrated by revisiting the extended target composed of four cylindrical scatterers used in the preceding section. The modeled narrowband RCS swept across various target aspects is shown as
[elg,azg] = meshgrid(sweepel,sweepaz);
sweepang = [azg(:)';elg(:)'];
x = ones(1,size(sweepang,2)); % unit signals
release(extendedTarget);
extNarrowbandSweep = extendedTarget(x,sweepang);
clf;
plot(sweepaz,pow2db(extNarrowbandSweep));
grid on; axis tight;
xlabel('Azimuth Angles (degrees)');
ylabel('RCS (dBsm)');
title(['RCS Pattern at 0^o Elevation ',...
'for Extended Target with 4 Scatterers']);
Returns from the multiple cylinders in the extended target model coherently combine, creating deep
fades between 40 and 50 degrees. These fades can cause the target to not be detected by the radar
sensor.
Next, examine the RCS pattern for a wideband system operating at the same center frequency. The bandwidth of this system is set to 10% of the center frequency.

bw = 0.10*fc; % 10% bandwidth
fs = 2*bw;    % sample rate (chosen as twice the bandwidth)
A wideband RCS model is created as was previously done for the narrowband extended target. Often,
RCS models are generated offline using either simulation tools or range measurements and are then
provided to radar engineers for use in their system models. Here, it is assumed that the provided RCS model has been sampled at 1 MHz intervals on either side of the radar's center frequency.
modelFreq = (-80e6:1e6:80e6)+fc;
[modelCylRCS,modelAz,modelEl] = helperCylinderRCSPattern(c,modelFreq);
The contributions from the various scattering centers are modeled as before. It is important to note
that this approximation assumes that all of the target's scattering centers fall within the same range
resolution bin, which is true for this example.
nf = numel(modelFreq);
naz = numel(modelAz);
nel = numel(modelEl);
modelExtRCS = zeros(nel,naz,nf);
for k = 1:nf
for m = 1:nel
pos = scatpos*modelFreq(k)/fc;
sv = steervec(pos,[modelAz;modelEl(m)*ones(1,naz)]);
% sv is squared due to round trip in a monostatic scenario
modelExtRCS(m,:,k) = abs(sqrt(modelCylRCS(m,:,k)).*sum(sv.^2)).^2;
end
end
The wideband RCS target model is now generated, using the RCS patterns that were just computed.
widebandExtendedTarget = phased.WidebandBackscatterRadarTarget(...
'PropagationSpeed',c,'OperatingFrequency',fc,'SampleRate',fs,...
'AzimuthAngles',modelAz,'ElevationAngles',modelEl,...
'FrequencyVector',modelFreq,'RCSPattern',modelExtRCS);
The modeled wideband RCS can now be compared to the narrowband system
extWidebandSweep = widebandExtendedTarget(x,sweepang);
hold on;
plot(sweepaz,pow2db(extWidebandSweep));
hold off;
legend('Narrowband','Wideband');
The target's RCS pattern now has much shallower nulls between 40 and 50 degrees azimuth. The
deep nulls in the narrowband pattern occur when signals combine destructively at a specific
frequency and azimuth combination. The wideband waveform fills in these fades because, while a few
frequencies may experience nulls for a given aspect, the majority of the bandwidth does not lie within
the null at that azimuth angle.
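A two-scatterer toy model (a deliberate simplification, not the cylinder model above) illustrates the mechanism in Python: a spacing chosen to put an exact null at the center frequency still yields a nonzero average RCS over a 10% band.

```python
import numpy as np

C = 3e8
f0 = 3e8                      # narrowband center frequency (as in the example)
d = C / f0 / 4                # quarter-wavelength separation: the round-trip
                              # phase difference is exactly pi at f0 (a null)

def rcs(freqs):
    """|1 + exp(j*2*k*d)|^2 for two unit scatterers, averaged over freqs."""
    k = 2 * np.pi * np.asarray(freqs, dtype=float) / C
    return np.mean(np.abs(1 + np.exp(2j * k * d))**2)

narrowband = rcs([f0])                              # exact null at f0
wideband = rcs(np.linspace(0.95*f0, 1.05*f0, 101))  # 10% band fills it in
print(narrowband, round(wideband, 3))  # near zero vs. a finite average
```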
The discussion so far assumes that the target RCS value is constant over time. This is the nonfluctuating target case. In reality, because both the radar system and the target are moving, the RCS value changes over time. This case is a fluctuating target. To simulate fluctuating targets, Peter Swerling developed four statistical models, referred to as Swerling 1 through Swerling 4, that are widely adopted in practice. The Swerling models divide fluctuating targets into two probability distributions and two time varying behaviors, as shown in the following table:

                           Slow fluctuating (scan to scan)   Fast fluctuating (pulse to pulse)
Exponential pdf            Swerling 1                        Swerling 2
4th-degree chi-square pdf  Swerling 3                        Swerling 4
The RCS of a slow-fluctuating target remains constant during a dwell but varies from scan to scan. In
contrast, the RCS for a fast fluctuating target changes with each pulse within a dwell.
The Swerling 1 and 2 models obey an exponential probability density function (pdf) given by

p(σ) = (1/σ̄) · e^(−σ/σ̄),  σ ≥ 0
These models are useful in simulating a target consisting of a collection of equal-strength scatterers. The Swerling 3 and 4 models obey a 4th-degree chi-square pdf, given by

p(σ) = (4σ/σ̄^2) · e^(−2σ/σ̄),  σ ≥ 0

These models apply when the target contains a dominant scattering component. In both pdf definitions, σ̄ represents the mean RCS value, which is the RCS value of the same target under the nonfluctuating assumption.
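The exponential (Swerling 1/2) pdf can be sanity-checked by sampling. In this NumPy sketch, draws from an exponential distribution with mean σ̄ = 1 have a sample mean near 1 and exceed σ̄ about e⁻¹ ≈ 37% of the time, as the pdf predicts.

```python
import numpy as np

rng = np.random.default_rng(1)
mean_rcs = 1.0                       # sigma-bar: the nonfluctuating RCS value

# Swerling 1/2: RCS drawn from the exponential pdf p(sigma) = exp(-sigma/mean)/mean
draws = rng.exponential(mean_rcs, size=200_000)

print(round(draws.mean(), 2))                  # close to mean_rcs
print(round(np.mean(draws > mean_rcs), 3))     # close to exp(-1), about 0.368
```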
The next section shows how to apply a Swerling 1 statistical model when generating the radar echo
from the previously described cylindrical target.
cylindricalTargetSwerling1 = ...
phased.BackscatterRadarTarget('PropagationSpeed',c,...
'OperatingFrequency',fc,'AzimuthAngles',az,'ElevationAngles',el,...
'RCSPattern',cylrcs,'Model','Swerling1')
cylindricalTargetSwerling1 =
EnablePolarization: false
AzimuthAngles: [1x361 double]
ElevationAngles: [1x181 double]
RCSPattern: [181x361 double]
Model: 'Swerling1'
PropagationSpeed: 300000000
OperatingFrequency: 300000000
SeedSource: 'Auto'
In the Swerling 1 case, the reflection is no longer constant. The RCS value varies from scan to scan.
Assuming that the target is illuminated by the signal only once per dwell, the following code
simulates the reflected signal power for 10,000 scans for a unit incident signal.
N = 10000;
tgt_echo = zeros(1,N);
x = 1;
for m = 1:N
tgt_echo(m) = cylindricalTargetSwerling1(x,[0;0],true);
end
p_echo = tgt_echo.^2; % Reflected power
Plot the histogram of returns from all scans and verify that the distribution of the returns matches the theoretical prediction. The theoretical prediction uses the nonfluctuating RCS derived before. For the cylindrical target, the reflected signal power at normal incidence for a unit power input signal is
p_n = cyl_echo(1)^2;
helperTargetRCSReturnHistogramPlot(p_echo,p_n)
The target RCS is also a function of polarization. To describe the polarization signature of a target, a
single RCS value is no longer sufficient. Instead, for each frequency and incident angle, a scattering
matrix is used to describe the interaction of the target with the incoming signal's polarization
components. This example will not go into further details because this topic is covered in the
“Modeling and Analyzing Polarization” on page 17-25 example.
Conclusion
This example gave a brief introduction to radar target modeling for a radar system simulation. It
showed how to model point targets, targets with measured patterns, and extended targets. It also
described how to take statistical fluctuations into account when generating target echoes.
Reference
[1] Merrill Skolnik, Radar Handbook, 2nd Ed. Chapter 11, McGraw-Hill, 1990
[2] Bassem Mahafza, Radar Systems Analysis and Design Using MATLAB, 2nd Ed. Chapman & Hall/
CRC, 2005
Antenna Array Beam Scanning Visualization on a Map
Use Antenna Toolbox to design a reflector-backed dipole antenna element. Design the element and its
exciter for 10 GHz, and specify tilt to direct radiation in the xy-plane, which corresponds to the
geographic azimuth.
Use Phased Array System Toolbox to create a 7-by-7 rectangular array from the antenna element.
Specify the array normal to direct radiation in the x-axis direction.
Create a transmitter site at the Washington Monument in Washington, DC using the antenna array.
The transmitter frequency matches the antenna's design frequency, and the transmitter output power
is 1 W. Set antenna height to 169 m, which is the height of the monument.
tx = txsite('Name','Washington Monument',...
'Latitude',38.88949, ...
'Longitude',-77.03523, ...
'Antenna',myarray,...
'AntennaHeight',169, ...
'TransmitterFrequency',fq,...
'TransmitterPower',1);
Launch Site Viewer and show the transmitter site, which centers the view at the Washington
Monument. The default map shows satellite imagery, and the site marker is shown at the site's
antenna height.
if isvalid(f)
close(f)
end
viewer = siteviewer;
show(tx)
Visualize the orientation of the antenna by showing the radiation pattern in Site Viewer.
pattern(tx);
Select the site marker to view the color legend of the pattern.
Create an array of receiver sites in the Washington, DC area. These are used as place markers for
sites of interest to assess the coverage of the transmitter site.
38.8783 -77.0685];
% Create array of receiver sites. Each receiver has a sensitivity of -75 dBm.
rxs = rxsite('Name',rxNames, ...
'Latitude',rxLocations(:,1), ...
'Longitude',rxLocations(:,2), ...
'ReceiverSensitivity',-75);
show(rxs)
Set the map imagery using the Basemap property. Alternatively, open the map imagery picker in “Site
Viewer” (Antenna Toolbox) by clicking the second button from the right. Select "Streets" to see
streets and labels on the map.
viewer.Basemap = "streets";
Scan the antenna beam by applying a taper for a range of angles. For each angle, update the
radiation pattern in Site Viewer. This approach of scanning the beam produces different patterns than
physically rotating the antenna, as could be achieved by setting AntennaAngle of the transmitter
site. This step is used to validate the orientation of the antenna's main beam.
% Sweep the angles and show the antenna pattern for each
for az = azsweep
sv = steeringVector(fq,[az; 0]);
myarray.Taper = sltaper.*sv';
% Update the radiation pattern. Use a larger size so the pattern is visible among the antenna
pattern(tx, 'Size', 2500,'Transparency',1);
end
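The taper-based scanning above applies conjugate steering-vector phases as element weights. A minimal NumPy sketch for a single 7-element row of the array, at an assumed half-wavelength spacing, shows that this taper yields the full coherent gain at the scan angle:

```python
import numpy as np

n = 7                     # elements (one row of the 7-by-7 array)
d = 0.5                   # element spacing in wavelengths (assumed)

def steervec(az_deg):
    """Steering vector of an n-element half-wavelength ULA."""
    phase = 2j * np.pi * d * np.arange(n) * np.sin(np.radians(az_deg))
    return np.exp(phase)

az0 = 20.0                        # scan angle in degrees
taper = np.conj(steervec(az0))    # phase-conjugate taper steers the beam to az0

# Array response with this taper, evaluated at the scan angle and at broadside
resp_scan = abs(taper @ steervec(az0))
resp_side = abs(taper @ steervec(0.0))
print(round(resp_scan, 2))  # 7.0: full coherent gain at the scan angle
```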
Define three signal strength levels and corresponding colors to display on the coverage map. Each
color is visible where the received power for a mobile receiver meets the corresponding signal
strength. The received power includes the total power transmitted from the rectangular antenna
array.
The default orientation of the transmitter site points the antenna x-axis east, so that is the direction of
maximum coverage.
% Reset the taper to the starting taper
myarray.Taper = startTaper;
The coverage map shows no coverage at the transmitter site and a couple of pockets of coverage
along the boresight direction before the main coverage area. The radiation pattern provides insight
into the coverage map by showing how the antenna power projects onto the map locations around the
transmitter.
Scan the antenna beam by applying a taper for a range of angles. For each angle, update the coverage map. This method of beam scanning is the same as the one used above. The final map includes two receiver sites of interest within the coverage region.
% Repeat the sweep but show the pattern and coverage map
for az = azsweep
% Calculate and assign taper from steering vector
sv = steeringVector(fq,[az; 0]);
myarray.Taper = sltaper.*sv';
    % Update the coverage map for the current steering angle
    coverage(tx, ...
        'MaxRange',maxRange)
end
SINR Map for a 5G Urban Macro-Cell Test Environment
The test environment guidelines for 5G technologies reuse the test network layout for 4G
technologies defined in Section 8.3 of Report ITU-R M.2135-1 [2], which is shown below. The layout
consists of 19 sites placed in a hexagonal layout, each with 3 cells. The distance between adjacent
sites is the inter-site distance (ISD) and depends on the test usage scenario. For the Dense Urban-
eMBB test environment, the ISD is 200 m.
Create the locations corresponding to cell sites in the network layout, using MathWorks Glasgow as
the center location.
% Initialize arrays for distance and angle from center location to each cell site, where
% each site has 3 cells
numCellSites = 19;
siteDistances = zeros(1,numCellSites);
siteAngles = zeros(1,numCellSites);
% Define distance and angle for inner ring of 6 sites (cells 4-21)
isd = 200; % Inter-site distance
siteDistances(2:7) = isd;
siteAngles(2:7) = 30:60:360;
% Define distance and angle for middle ring of 6 sites (cells 22-39)
siteDistances(8:13) = 2*isd*cosd(30);
siteAngles(8:13) = 0:60:300;
% Define distance and angle for outer ring of 6 sites (cells 40-57)
siteDistances(14:19) = 2*isd;
siteAngles(14:19) = 30:60:360;
Each cell site has three transmitters corresponding to each cell. Create arrays to define the names,
latitudes, longitudes, and antenna angles of each cell transmitter.
% Initialize arrays for cell transmitter parameters
numCells = numCellSites*3;
cellLats = zeros(1,numCells);
cellLons = zeros(1,numCells);
cellNames = strings(1,numCells);
cellAngles = zeros(1,numCells);
% For each cell site location, populate data for each cell transmitter
cellInd = 1;
for siteInd = 1:numCellSites
% Compute site location using distance and angle from center site
[cellLat,cellLon] = location(centerSite, siteDistances(siteInd), siteAngles(siteInd));
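The loop shown above is truncated in this text. A complete sketch of the per-site population step follows; the three sector angles (30, 150, and 270 degrees) are an assumption consistent with three cells per site:

```matlab
% Sketch: populate name, location, and antenna angle for the three
% cells at each site. The sector angles are assumed values.
cellSectorAngles = [30 150 270];
cellInd = 1;
for siteInd = 1:numCellSites
    % Compute site location using distance and angle from center site
    [cellLat,cellLon] = location(centerSite, ...
        siteDistances(siteInd),siteAngles(siteInd));
    % Assign values for each of the three cells at this site
    for cellSectorAngle = cellSectorAngles
        cellNames(cellInd) = "Cell " + cellInd;
        cellLats(cellInd) = cellLat;
        cellLons(cellInd) = cellLon;
        cellAngles(cellInd) = cellSectorAngle;
        cellInd = cellInd + 1;
    end
end
```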
Create transmitter sites using parameters defined above as well as configuration parameters defined
for Dense Urban-eMBB. Launch “Site Viewer” (Antenna Toolbox) and set the map imagery using the
Basemap property. Alternatively, open the basemap picker in Site Viewer by clicking the second
button from the right. Select "Topographic" to choose a basemap with topography, streets, and labels.
% Define transmitter parameters using Table 8-2 (b) of Report ITU-R M.[IMT-2020.EVAL]
fq = 4e9; % Carrier frequency (4 GHz) for Dense Urban-eMBB
antHeight = 25; % m
txPowerDBm = 44; % Total transmit power in dBm
txPower = 10.^((txPowerDBm-30)/10); % Convert dBm to W
txs = txsite('Name',cellNames, ...
    'Latitude',cellLats, ...
    'Longitude',cellLons, ...
    'AntennaAngle',cellAngles, ...
    'AntennaHeight',antHeight, ...
    'TransmitterFrequency',fq, ...
    'TransmitterPower',txPower);
Section 8.5 of ITU-R report [1] defines antenna characteristics for base station antennas. The antenna
is modeled as having one or more antenna panels, where each panel has one or more antenna
elements. Use Phased Array System Toolbox to implement the antenna element pattern defined in the
report.
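The element definition itself can be sketched with a phased.CustomAntennaElement. The 65-degree 3 dB beamwidths and 30 dB attenuation floor below follow the usual ITU-R base station element model, but the exact parameter values should be taken as assumptions here:

```matlab
% Sketch: equation-based antenna element following the ITU-R base
% station element model. Beamwidths and attenuation limits are assumed.
azvec = -180:180;
elvec = -90:90;
Am = 30;          % Maximum attenuation (dB), assumed
tilt = 0;         % Tilt angle used in the element pattern (deg)
az3dB = 65;       % 3 dB beamwidth in azimuth (deg), assumed
el3dB = 65;       % 3 dB beamwidth in elevation (deg), assumed
[az,el] = meshgrid(azvec,elvec);
azMagPattern = -min(12*(az/az3dB).^2,Am);
elMagPattern = -min(12*((el-tilt)/el3dB).^2,Am);
combinedMagPattern = -min(-(azMagPattern + elMagPattern),Am); % dB
antennaElement = phased.CustomAntennaElement( ...
    'AzimuthAngles',azvec,'ElevationAngles',elvec, ...
    'MagnitudePattern',combinedMagPattern, ...
    'PhasePattern',zeros(size(combinedMagPattern)));
```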
Visualize SINR for the test scenario using a single antenna element and the free space propagation
model. For each location on the map within the range of the transmitter sites, the signal source is the
cell with the greatest signal strength, and all other cells are sources of interference. Areas with no
color within the network indicate areas where the SINR is below the default threshold of -5 dB.
% Define receiver parameters using Table 8-2 (b) of Report ITU-R M.[IMT-2020.EVAL]
bw = 20e6; % 20 MHz bandwidth
rxNoiseFigure = 7; % dB
rxNoisePower = -174 + 10*log10(bw) + rxNoiseFigure;
rxGain = 0; % dBi
rxAntennaHeight = 1.5; % m
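The visualization itself uses the sinr function with the receiver parameters defined above; a sketch of that call, mirroring the form used later for the array case:

```matlab
% Sketch: SINR map for a single antenna element with free space
% propagation, using the receiver parameters defined above.
sinr(txs,'freespace', ...
    'ReceiverGain',rxGain, ...
    'ReceiverAntennaHeight',rxAntennaHeight, ...
    'ReceiverNoisePower',rxNoisePower, ...
    'MaxRange',isd, ...
    'Resolution',isd/20)
```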
Define an antenna array to increase directional gain and increase peak SINR values. Use Phased
Array System Toolbox to create an 8-by-8 uniform rectangular array.
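A sketch of that array definition, assuming half-wavelength element spacing at the carrier frequency and the equation-based element described earlier (named antennaElement here):

```matlab
% Sketch: 8-by-8 uniform rectangular array with half-wavelength spacing.
% antennaElement is assumed to be the equation-based element above.
lambda = physconst('LightSpeed')/fq;
cellAntenna = phased.URA('Size',[8 8], ...
    'Element',antennaElement, ...
    'ElementSpacing',[lambda/2 lambda/2]);
```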
Visualize SINR for the test scenario using a uniform rectangular antenna array and the free space
propagation model. Apply a mechanical downtilt to illuminate the intended ground area around each
transmitter.
% Assign the antenna array for each cell transmitter, and apply downtilt.
% Without downtilt, pattern is too narrow for transmitter vicinity.
downtilt = 15;
for tx = txs
tx.Antenna = cellAntenna;
tx.AntennaAngle = [tx.AntennaAngle; -downtilt];
end
sinr(txs,'freespace', ...
'ReceiverGain',rxGain, ...
'ReceiverAntennaHeight',rxAntennaHeight, ...
'ReceiverNoisePower',rxNoisePower, ...
'MaxRange',isd, ...
'Resolution',isd/20)
Visualize SINR for the test scenario using the Close-In propagation model [3], which models path loss
for 5G urban micro-cell and macro-cell scenarios. This model produces an SINR map that shows
reduced interference effects compared to the free space propagation model.
sinr(txs,'close-in', ...
'ReceiverGain',rxGain, ...
'ReceiverAntennaHeight',rxAntennaHeight, ...
'ReceiverNoisePower',rxNoisePower, ...
'MaxRange',isd, ...
'Resolution',isd/20)
The analysis above used an antenna element that was defined using the equations specified in the
ITU-R report [1]. The antenna element needs to provide a maximum gain of 9.5 dBi and a front-to-
back ratio of approximately 30 dB. Now replace the equation-based antenna element definition with a
real antenna model using a standard half-wavelength rectangular microstrip patch antenna. The
antenna element provides a gain of about 9 dBi, although with a lower front-to-back ratio.
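A sketch of such a patch design with Antenna Toolbox follows; the width adjustment and tilt values are assumptions that orient the patch so its maximum gain points along the antenna x-axis:

```matlab
% Sketch: standard half-wavelength rectangular microstrip patch
% resonant at the carrier frequency. Width and tilt adjustments are
% assumptions that point the boresight along the x-axis.
patchElement = design(patchMicrostrip,fq);
patchElement.Width = patchElement.Length;
patchElement.Tilt = 90;
patchElement.TiltAxis = [0 1 0];
```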
f = figure;
pattern(patchElement,fq)
Display SINR Map using the Patch Antenna Element in the 8-by-8 Array
Update the SINR map for the Close-In propagation model [3] using the patch antenna as the array
element. This analysis should capture the effect of deviations from an equation-based antenna
specification as per the ITU-R report [1], including:
Summary
This example shows how to construct a 5G urban macro-cell test environment consisting of a
hexagonal network of 19 cell sites, each containing 3 sectored cells. The signal-to-interference-plus-
noise ratio (SINR) is visualized on a map for different antennas. The following observations are made:
• A rectangular antenna array can provide greater directionality, and therefore higher peak SINR values, than a single antenna element.
• The outward-facing lobes on the perimeter of the SINR map represent areas where less interference occurs. A more realistic modeling technique would be to replicate, or wrap around, cell sites to expand the geometry so that perimeter areas experience similar interference to interior areas.
• Using a rectangular antenna array, a propagation model that estimates increased path loss also results in higher SINR values because there is less interference.
• Two antenna elements are tried in the antenna array: an equation-based element using Phased
Array System Toolbox and a patch antenna element using Antenna Toolbox. These produce similar
SINR maps.
References
[1] Report ITU-R M.[IMT-2020.EVAL], "Guidelines for evaluation of radio interface technologies for
IMT-2020", 2017. https://www.itu.int/md/R15-SG05-C-0057
[2] Report ITU-R M.2135-1, "Guidelines for evaluation of radio interface technologies for IMT-
Advanced", 2009. https://www.itu.int/dms_pub/itu-r/opb/rep/R-REP-M.2135-1-2009-PDF-E.pdf
[3] Sun, S., T. S. Rappaport, T. A. Thomas, A. Ghosh, H. C. Nguyen, I. Z. Kovacs, I. Rodriguez, O. Koymen, and A. Partyka. "Investigation of Prediction Accuracy, Sensitivity, and Parameter Stability of Large-Scale Propagation Path Loss Models for 5G Wireless Communications." IEEE Transactions on Vehicular Technology, Vol. 65, No. 5, pp. 2843-2860, May 2016.
Automotive Adaptive Cruise Control Using FMCW Technology
FMCW Waveform
Consider an automotive long range radar (LRR) used for adaptive cruise control (ACC). This kind of
radar usually occupies the band around 77 GHz, as indicated in [1]. The radar system constantly
estimates the distance between the vehicle it is mounted on and the vehicle in front of it, and alerts
the driver when the two become too close. The figure below shows a sketch of ACC.
A popular waveform used in ACC systems is FMCW. The principle of range measurement using the FMCW technique can be illustrated using the following figure.
The received signal is a time-delayed copy of the transmitted signal, where the delay, td, is related to the range. Because the signal is always sweeping through a frequency band, at any moment during the sweep the frequency difference, fb, between the transmitted signal and the received signal is a constant. fb is usually called the beat frequency. Because the sweep is linear, one can derive the time delay from the beat frequency and then translate the delay to the range.
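As a numerical illustration of this relationship: for a sweep of slope k, the beat frequency is fb = k*td, where td = 2R/c is the round-trip delay, so the range follows as R = c*fb/(2k). The numbers below are assumed for illustration and are not the design values derived later in this example.

```matlab
% Illustrative check of the beat-frequency/range relationship.
% Slope and range values are assumed for illustration only.
c = 3e8;              % propagation speed (m/s)
k = 2e13;             % sweep slope (Hz/s), assumed
R = 100;              % target range (m), assumed
td = 2*R/c;           % round-trip delay (s)
fb = k*td;            % beat frequency (Hz)
R_back = c*fb/(2*k);  % recovers the assumed 100 m range
```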
In an ACC setup, the maximum range the radar needs to monitor is around 200 m and the system
needs to be able to distinguish two targets that are 1 meter apart. From these requirements, one can
compute the waveform parameters.
fc = 77e9;
c = 3e8;
lambda = c/fc;
The sweep time can be computed based on the time needed for the signal to travel the unambiguous
maximum range. In general, for an FMCW radar system, the sweep time should be at least 5 to 6
times the round trip time. This example uses a factor of 5.5.
range_max = 200;
tm = 5.5*range2time(range_max,c);
The sweep bandwidth can be determined according to the range resolution and the sweep slope is
calculated using both sweep bandwidth and sweep time.
range_res = 1;
bw = range2bw(range_res,c);
sweep_slope = bw/tm;
Because an FMCW signal often occupies a huge bandwidth, setting the sample rate blindly to twice
the bandwidth often stresses the capability of A/D converter hardware. To address this issue, one can
often choose a lower sample rate. Two things can be considered here:
1 For a complex sampled signal, the sample rate can be set to the same as the bandwidth.
2 FMCW radars estimate the target range using the beat frequency embedded in the dechirped
signal. The maximum beat frequency the radar needs to detect is the sum of the beat frequency
corresponding to the maximum range and the maximum Doppler frequency. Hence, the sample
rate only needs to be twice the maximum beat frequency.
In this example, the beat frequency corresponding to the maximum range is given by
fr_max = range2beat(range_max,sweep_slope,c);
In addition, the maximum speed of a traveling car is about 230 km/h. Hence the maximum Doppler
shift and the maximum beat frequency can be computed as
v_max = 230*1000/3600;
fd_max = speed2dop(2*v_max,lambda);
fb_max = fr_max+fd_max;
This example adopts a sample rate of the larger of twice the maximum beat frequency and the
bandwidth.
fs = max(2*fb_max,bw);
With all the information above, one can set up the FMCW waveform used in the radar system.
waveform = phased.FMCWWaveform('SweepTime',tm,'SweepBandwidth',bw,...
'SampleRate',fs);
This is an up-sweep linear FMCW signal, often referred to as a sawtooth shape. One can examine the time-frequency plot of the generated signal.
sig = waveform();
subplot(211); plot(0:1/fs:tm-1/fs,real(sig));
xlabel('Time (s)'); ylabel('Amplitude (v)');
title('FMCW signal'); axis tight;
subplot(212); spectrogram(sig,32,16,32,fs,'yaxis');
title('FMCW signal spectrogram');
Target Model
The target of an ACC radar is usually a car in front of it. This example assumes the target car is moving 43 m ahead of the car with the radar, at a speed of 96 km/h along the x-axis.
The radar cross section of a car, according to [1], can be computed based on the distance between
the radar and the target car.
car_dist = 43;
car_speed = 96*1000/3600;
car_rcs = db2pow(min(10*log10(car_dist)+5,20));
cartarget = phased.RadarTarget('MeanRCS',car_rcs,'PropagationSpeed',c,...
'OperatingFrequency',fc);
carmotion = phased.Platform('InitialPosition',[car_dist;0;0.5],...
'Velocity',[car_speed;0;0]);
channel = phased.FreeSpace('PropagationSpeed',c,...
'OperatingFrequency',fc,'SampleRate',fs,'TwoWayPropagation',true);
The rest of the radar system includes the transmitter, the receiver, and the antenna. This example uses the parameters presented in [1]. Note that this example models only the main components and omits the effects of other components, such as the coupler and the mixer. In addition, for simplicity, the antenna is assumed to be isotropic, and the gain of the antenna is included in the transmitter and the receiver.
ant_aperture = 6.06e-4; % Antenna aperture, in square meters
ant_gain = aperture2gain(ant_aperture,lambda); % in dB
tx_ppower = db2pow(5)*1e-3; % Peak transmit power, in watts
tx_gain = 9+ant_gain; % in dB
rx_gain = 15+ant_gain; % in dB
rx_nf = 4.5; % in dB
transmitter = phased.Transmitter('PeakPower',tx_ppower,'Gain',tx_gain);
receiver = phased.ReceiverPreamp('Gain',rx_gain,'NoiseFigure',rx_nf,...
'SampleRate',fs);
Automotive radars are generally mounted on vehicles, so they are often in motion. This example assumes the radar is traveling at a speed of 100 km/h along the x-axis, so the target car is approaching the radar at a relative speed of 4 km/h.
radar_speed = 100*1000/3600;
radarmotion = phased.Platform('InitialPosition',[0;0;0.5],...
'Velocity',[radar_speed;0;0]);
As briefly mentioned in earlier sections, an FMCW radar measures the range by examining the beat
frequency in the dechirped signal. To extract this frequency, a dechirp operation is performed by
mixing the received signal with the transmitted signal. After the mixing, the dechirped signal
contains only individual frequency components that correspond to the target range.
In addition, even though it is possible to extract the Doppler information from a single sweep, the
Doppler shift is often extracted among several sweeps because within one pulse, the Doppler
frequency is indistinguishable from the beat frequency. To measure the range and Doppler, an FMCW
radar typically performs the following operations:
1 The waveform generator generates the FMCW signal.
2 The transmitter and the antenna amplify the signal and radiate the signal into space.
3 The signal propagates to the target, gets reflected by the target, and travels back to the radar.
4 The receiving antenna collects the signal.
5 The received signal is dechirped and saved in a buffer.
6 Once a certain number of sweeps fill the buffer, the Fourier transform is performed in both range
and Doppler to extract the beat frequency as well as the Doppler shift. One can then estimate the
range and speed of the target using these results. Range and Doppler can also be shown as an
image and give an intuitive indication of where the target is in the range and speed domain.
The next section simulates the process outlined above. A total of 64 sweeps are simulated and a
range Doppler response is generated at the end.
During the simulation, a spectrum analyzer is used to show the spectrum of each received sweep as
well as its dechirped counterpart.
specanalyzer = dsp.SpectrumAnalyzer('SampleRate',fs,...
'PlotAsTwoSidedSpectrum',true,...
'Title','Spectrum for received and dechirped signal',...
'ShowLegend',true);
rng(2012);
Nsweep = 64;
xr = complex(zeros(round(waveform.SampleRate*waveform.SweepTime),Nsweep));
for m = 1:Nsweep
    % Update radar and target positions
    [radar_pos,radar_vel] = radarmotion(waveform.SweepTime);
    [tgt_pos,tgt_vel] = carmotion(waveform.SweepTime);
    % Transmit the sweep, propagate it to the target, and collect the echo
    sig = waveform();
    txsig = channel(transmitter(sig),radar_pos,tgt_pos,radar_vel,tgt_vel);
    rxsig = receiver(cartarget(txsig));
    % Dechirp the received signal and save it in the buffer
    dechirpsig = dechirp(rxsig,sig);
    specanalyzer([rxsig dechirpsig]);
    xr(:,m) = dechirpsig;
end
From the spectrum scope, one can see that although the received signal is wideband (channel 1),
sweeping through the entire bandwidth, the dechirped signal becomes narrowband (channel 2).
Before estimating the value of the range and Doppler, it may be a good idea to take a look at the
zoomed range Doppler response of all 64 sweeps.
rngdopresp = phased.RangeDopplerResponse('PropagationSpeed',c,...
'DopplerOutput','Speed','OperatingFrequency',fc,'SampleRate',fs,...
'RangeMethod','FFT','SweepSlope',sweep_slope,...
'RangeFFTLengthSource','Property','RangeFFTLength',2048,...
'DopplerFFTLengthSource','Property','DopplerFFTLength',256);
clf;
plotResponse(rngdopresp,xr); % Plot range Doppler map
axis([-v_max v_max 0 range_max])
clim = caxis;
From the range Doppler response, one can see that the car in front is a bit more than 40 m away and
appears almost static. This is expected because the radial speed of the car relative to the radar is only
4 km/h, which translates to a mere 1.11 m/s.
There are many ways to estimate the range and speed of the target car. For example, one can choose
almost any spectral analysis method to extract both the beat frequency and the Doppler shift. This
example uses the root MUSIC algorithm to extract both the beat frequency and the Doppler shift.
As a side note, although the received signal is sampled at 150 MHz so the system can achieve the
required range resolution, after the dechirp, one only needs to sample it at a rate that corresponds to
the maximum beat frequency. Since the maximum beat frequency is in general less than the required
sweeping bandwidth, the signal can be decimated to alleviate the hardware cost. The following code
snippet shows the decimation process.
Dn = fix(fs/(2*fb_max));
for m = size(xr,2):-1:1
xr_d(:,m) = decimate(xr(:,m),Dn,'FIR');
end
fs_d = fs/Dn;
To estimate the range, the beat frequency is first estimated using the coherently integrated sweeps and then converted to the range.
fb_rng = rootmusic(pulsint(xr_d,'coherent'),1,fs_d);
rng_est = beat2range(fb_rng,sweep_slope,c)
rng_est =
42.9976
Second, the Doppler shift is estimated across the sweeps at the range where the target is present.
peak_loc = val2ind(rng_est,c/(fs_d*2));
fd = -rootmusic(xr_d(peak_loc,:),1,1/tm);
v_est = dop2speed(fd,lambda)/2
v_est =
1.0830
Note that both range and Doppler estimation are quite accurate.
One issue associated with linear FM signals, such as an FMCW signal, is the range Doppler coupling
effect. As discussed earlier, the target range corresponds to the beat frequency. Hence, an accurate
range estimation depends on an accurate estimate of beat frequency. However, the presence of
Doppler shift changes the beat frequency, resulting in a biased range estimation.
For the situation outlined in this example, the range error caused by the relative speed between the
target and the radar is
deltaR = rdcoupling(fd,sweep_slope,c)
deltaR =
-0.0041
Even though the current design is achieving the desired performance, one parameter warrants
further attention. In the current configuration, the sweep time is about 7 microseconds. Therefore,
the system needs to sweep a 150 MHz band within a very short period. Such an automotive radar
may not be able to meet the cost requirement. Besides, given the velocity of a car, there is no need to
make measurements every 7 microseconds. Hence, automotive radars often use a longer sweep time.
For example, the waveform used in [2] has the same parameters as the waveform designed in this
example except a sweep time of 2 ms.
A longer sweep time makes the range Doppler coupling more prominent. To see this effect, first
reconfigure the waveform to use 2 ms as the sweep time.
waveform_tr = clone(waveform);
release(waveform_tr);
tm = 2e-3;
waveform_tr.SweepTime = tm;
sweep_slope = bw/tm;
deltaR = rdcoupling(fd,sweep_slope,c)
deltaR =
-1.1118
A range error of about 1.1 m can no longer be ignored and needs to be compensated. Naturally, one may think of doing so by following the same procedure outlined in earlier sections: estimate both range and Doppler, figure out the range Doppler coupling from the Doppler shift, and then remove the error from the estimate.
Unfortunately, this process does not work well with the long sweep time. The longer sweep time results in a lower sampling rate across the sweeps, thus reducing the radar's capability of unambiguously detecting high-speed vehicles. For instance, using a sweep time of 2 ms, the maximum unambiguous speed the radar system can detect using traditional Doppler processing is
v_unambiguous = dop2speed(1/(2*tm),lambda)/2
v_unambiguous =
0.4870
The unambiguous speed is only 0.48 m/s, which means that the relative speed, 1.11 m/s, cannot be unambiguously detected. This means that not only will the target car appear slower in Doppler processing, but the range Doppler coupling also cannot be correctly compensated.
One way to resolve the ambiguity without Doppler processing is to adopt a triangle sweep pattern. The next section shows how the triangle sweep addresses the issue.
Triangular Sweep
In a triangular sweep, there are one up sweep and one down sweep to form one period, as shown in
the following figure.
The two sweeps have the same slope except for the sign. From the figure, one can see that the presence of the Doppler frequency, fd, affects the beat frequencies of the up and down sweeps, fbu and fbd, differently. Hence, by combining the beat frequencies from both the up and down sweeps, the coupling effect from the Doppler can be averaged out and the range estimate can be obtained without ambiguity.
waveform_tr.SweepDirection = 'Triangle';
Now simulate the signal return. Because of the longer sweep time, fewer sweeps (16 vs. 64) are
collected before processing.
Nsweep = 16;
xr = helperFMCWSimulate(Nsweep,waveform_tr,radarmotion,carmotion,...
transmitter,channel,cartarget,receiver);
The up sweep and down sweep are processed separately to obtain the beat frequencies
corresponding to both up and down sweep.
fbu_rng = rootmusic(pulsint(xr(:,1:2:end),'coherent'),1,fs);
fbd_rng = rootmusic(pulsint(xr(:,2:2:end),'coherent'),1,fs);
Using both the up-sweep and down-sweep beat frequencies simultaneously, the correct range estimate is obtained.
rng_est = beat2range([fbu_rng fbd_rng],sweep_slope,c)
rng_est =
42.9658
Moreover, the Doppler shift and the velocity can also be recovered in a similar fashion.
fd = -(fbu_rng+fbd_rng)/2;
v_est = dop2speed(fd,lambda)/2
v_est =
1.1114
The estimated range and velocity match the true values, 43 m and 1.11 m/s, very well.
Two-ray Propagation
In reality, the signal propagation between the radar and the target vehicle is more complicated than what has been modeled so far. For example, the radio wave can also arrive at the target vehicle via reflections. A simple yet widely used model to describe such a multipath scenario is a two-ray model, in which the signal propagates from the radar to the target vehicle via two paths: one direct path and one path reflected off the road, as shown in the following figure.
The reflection off the road affects the phase of the signal, and the signal received at the target vehicle is a coherent combination of the signals arriving via the two paths. The same thing happens on the return trip, where the signal reflected off the target vehicle travels back to the radar. Hence, depending on the distance between the vehicles, the signals from the different paths may add constructively or destructively, making the signal strength fluctuate over time. Such fluctuation can pose a challenge in the subsequent detection stage.
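To see why the two arrivals can cancel, consider the image-method path lengths; at 77 GHz the wavelength is only a few millimeters, so a small extra path length shifts the phase substantially. The heights and distance below are assumptions for illustration:

```matlab
% Sketch: phase difference between the direct and road-reflected paths
% at 77 GHz, using the image method. Heights and distance are assumed.
lambda = 3e8/77e9;                  % wavelength, about 3.9 mm
h = 0.5;                            % radar and target height, m (assumed)
d = 43;                             % ground separation, m (assumed)
direct = d;                         % direct path (equal heights)
reflected = sqrt(d^2 + (2*h)^2);    % reflected path via the road
dphi = 2*pi*(reflected - direct)/lambda;  % phase difference, rad
```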
To showcase the multipath effect, the next section uses the two-ray channel model to propagate the signal between the radar and the target vehicle.
txchannel = phased.TwoRayChannel('PropagationSpeed',c,...
'OperatingFrequency',fc,'SampleRate',fs);
rxchannel = phased.TwoRayChannel('PropagationSpeed',c,...
'OperatingFrequency',fc,'SampleRate',fs);
Nsweep = 64;
xr = helperFMCWTwoRaySimulate(Nsweep,waveform,radarmotion,carmotion,...
transmitter,txchannel,rxchannel,cartarget,receiver);
plotResponse(rngdopresp,xr); % Plot range Doppler map
axis([-v_max v_max 0 range_max]);
caxis(clim);
With all other settings remaining the same, comparing the resulting range-Doppler map for two-ray propagation with the range-Doppler map obtained earlier for line-of-sight (LOS) propagation shows that the signal strength drops by almost 40 dB, which is significant. Therefore, such an effect must be considered during the design. One possible choice is to form a very sharp beam in the vertical direction to null out the reflections.
Summary
This example shows how to use an FMCW signal to perform range and Doppler estimation in an automotive adaptive cruise control application. The example also shows how to generate a range Doppler map from the received signal and how to use a triangle sweep to compensate for the range Doppler coupling effect of the FMCW signal. Finally, the effect of multipath propagation on the signal level is discussed.
References
[1] Karnfelt, C., et al. "77 GHz ACC Radar Simulation Platform." IEEE International Conferences on Intelligent Transport Systems Telecommunications (ITST), 2009.
[2] Rohling, H., and M. Meinecke. "Waveform Design Principles for Automotive Radar Systems." Proceedings of CIE International Conference on Radar, 2001.
Automotive Adaptive Cruise Control Using FMCW and MFSK Technology
The following model shows an end-to-end FMCW radar system. The system setup is similar to the
MATLAB “Automotive Adaptive Cruise Control Using FMCW Technology” on page 17-367 example.
The only difference between this model and the aforementioned example is that this model has an
FMCW waveform sweep that is symmetric around the carrier frequency.
The figure shows the signal flow in the model. The Simulink blocks that make up the model are
divided into two major sections, the Radar section and the Channel and Target section. The shaded
block on the left represents the radar system. In this section, the FMCW signal is generated and
transmitted. This section also contains the receiver that captures the radar echo and performs a
series of operations, such as dechirping and pulse integration, to estimate the target range. The
shaded block on the right models the propagation of the signal through space and its reflection from
the car. The output of the system, the estimated range in meters, is shown in the display block on the
left.
Radar
The radar system consists of a co-located transmitter and receiver mounted on a vehicle moving
along a straight road. It contains the signal processing components needed to extract the information
from the returned target echo.
• FMCW - Creates an FMCW signal. The FMCW waveform is a common choice in automotive radar,
because it provides a way to estimate the range using a continuous wave (CW) radar. The distance
is proportional to the frequency offset between the transmitted signal and the received echo. The
signal sweeps a bandwidth of 150 MHz.
• Transmitter - Transmits the waveform. The operating frequency of the transmitter is 77 GHz.
• Receiver Preamp - Receives the target echo and adds the receiver noise.
• Radar Platform - Simulates the radar vehicle trajectory.
• Signal Processing - Processes the received signal and estimates the range of the target
vehicle.
Within the Radar, the target echo goes through several signal processing steps before the target
range can be estimated. The signal processing subsystem consists of two high-level processing
stages.
• Stage 1: The first stage dechirps the received signal by multiplying it with the transmitted signal.
This operation produces a beat frequency between the target echo and the transmitted signal. The
target range is proportional to the beat frequency. This operation also reduces the bandwidth
required to process the signal. Next, 64 sweeps are buffered to form a datacube. The datacube
dimensions are fast-time versus slow-time. This datacube is then passed to a Matrix Sum block,
where the slow-time samples are integrated to boost the signal-to-noise ratio. The data is then
passed to the Range Response block, which performs an FFT operation to convert the beat
frequency to range. Radar signal processing lends itself well to parallelization, so the radar data is
then partitioned in range into 5 parts prior to further processing.
• Stage 2: The second stage consists of 5 parallel processing chains for the detection and estimation
of the target.
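Outside Simulink, the core of the Stage 1 processing can be sketched in a few lines of MATLAB. All names, sizes, and parameter values here are illustrative assumptions, not the model's actual settings:

```matlab
% Illustrative sketch of Stage 1: integrate the buffered sweeps in
% slow time, then FFT in fast time to map beat frequency to range.
fs = 150e6;           % sample rate, Hz (assumed)
sweep_slope = 2e13;   % sweep slope, Hz/s (assumed)
c = 3e8;
Nfast = 1024; Nsweep = 64;
datacube = randn(Nfast,Nsweep);            % stand-in for dechirped data
integrated = sum(datacube,2);              % slow-time (matrix sum) integration
rangeProfile = abs(fft(integrated));       % beat-frequency spectrum
freqGrid = (0:Nfast-1).'*(fs/Nfast);       % FFT bin frequencies
rangeGrid = beat2range(freqGrid,sweep_slope,c);  % beat frequency to range
```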
Within Stage 2, each Detection and Estimation Chain block consists of 3 processing steps.
• Detection Processing: The radar data is first passed to a 1-dimensional cell-averaging (CA)
constant false alarm rate (CFAR) detector that operates in the range dimension. This block
identifies detections or hits.
• Detection Clustering: The detections are then passed to the next step where they are aggregated
into clusters using the Density-Based Spatial Clustering of Applications with Noise algorithm in
the DBSCAN Clusterer block. The clustering block clusters the detections in range using the
detections identified by the CA CFAR block.
• Parameter Estimation: After detections and clusters are identified, the last step is the Range
Estimator block. This step estimates the range of the detected targets in the radar data.
The Channel and Target part of the model simulates the signal propagation and reflection off the
target vehicle.
• Channel - Simulates the signal propagation between the radar vehicle and the target vehicle. The
channel can be set as either a line-of-sight free space channel or a two-ray channel where the
signal arrives at the receiver via both the direct path and the reflected path off the ground. The
default choice is a free space channel.
• Car - Reflects the incident signal and simulates the target vehicle trajectory. The subsystem, shown below, consists of two parts: a target model to simulate the echo and a platform model to simulate the dynamics of the target vehicle.
In the Car subsystem, the target vehicle is modeled as a point target with a specified radar cross
section. The radar cross section is used to measure how much power can be reflected from a target.
In this model's scenario, the radar vehicle starts at the origin, traveling at 100 km/h (27.8 m/s), while
the target vehicle starts at 43 meters in front of the radar vehicle, traveling at 96 km/h (26.7 m/s).
The positions and velocities of both the radar and the target vehicles are used in the propagation
channel to calculate the delay, Doppler, and signal loss.
Several dialog parameters of the model are calculated by the helper function helperslexFMCWParam. To open the function from the model, click on the Modify Simulation Parameters block. This function is executed once when the model is loaded. It exports to the workspace a structure whose fields are referenced by the dialogs. To modify any parameters, either change the values in the structure at the command prompt or edit the helper function and rerun it to update the parameter structure.
The spectrogram of the FMCW signal below shows that the signal linearly sweeps a span of 150 MHz
approximately every 7 microseconds. This waveform provides a resolution of approximately 1 meter.
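The stated resolution follows from the usual FMCW range-resolution relation, range resolution = c/(2*bandwidth):

```matlab
% Check: range resolution implied by the 150 MHz sweep bandwidth
c = 3e8;
bw = 150e6;
range_res = c/(2*bw)   % 1 meter
```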
The spectrum of the dechirped signal is shown below. The figure indicates that the beat frequency
introduced by the target is approximately 100 kHz. Note that after dechirp, the signal has only a
single frequency component. The resulting range estimate calculated from this beat frequency, as
displayed in the overall model above, is well within the 1 meter range resolution.
However, this result is obtained with the free space propagation channel. In reality, the propagation
between vehicles often involves multiple paths between the transmitter and the receiver. Therefore,
signals from different paths may add either constructively or destructively at the receiver. The
following section sets the propagation to a two-ray channel, which is the simplest multipath channel.
Run the simulation and observe the spectrum of the dechirped signal.
Note that there is no longer a dominant beat frequency, because at this range, the signal from the
direct path and the reflected path combine destructively, thereby canceling each other out. This can
also be seen from the estimated range, which no longer matches the ground truth.
The example model below shows a similar end-to-end FMCW radar system that simulates two targets.
This example estimates both the range and the speed of the detected targets.
The model is essentially the same as the previous example, with four primary differences.
This model uses range-Doppler joint processing in the signal processing subsystem. Joint processing
in the range-Doppler domain makes it possible to estimate the Doppler across multiple sweeps and
then to use that information to resolve the range-Doppler coupling, resulting in better range
estimates.
The stages that make up the signal processing subsystem are similar to the prior example. Each stage
performs the following actions.
• Stage 1: The first stage again performs dechirping and assembles a datacube of 64 sweeps. The
datacube is passed to the Range-Doppler Response block to compute the range-Doppler map of
the input signal. The map is then passed to the Range Subset block, which obtains the subset of
the datacube that will undergo further processing.
• Stage 2: The second stage is where the detection processing occurs. The detector in this example
is the CA CFAR 2-D block that operates in both the range and Doppler dimensions.
• Stage 3: Clustering occurs in the DBSCAN Clusterer block using both the range and Doppler
dimensions. Clustering results are then displayed by the Plot Clusters block.
• Stage 4: The fourth and final stage estimates the range and speed of the targets from the range-
Doppler map using the Range Estimator and Doppler Estimator blocks, respectively.
As mentioned in the beginning of the example, FMCW radar uses a frequency shift to derive the
range of the target. However, the motion of the target can also introduce a frequency shift due to the
Doppler effect. Therefore, the beat frequency has both range and speed information coupled.
Processing range and Doppler at the same time lets us remove this ambiguity. As long as the sweep is
fast enough so that the target remains in the same range gate for several sweeps, the Doppler can be
calculated across multiple sweeps and then be used to correct the initial range estimates.
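A rough numeric sketch of the size of this coupling, assuming the 77 GHz carrier typical of automotive radar (an assumption for this model) and the sweep parameters quoted earlier:

```matlab
c = 3e8; fc = 77e9;          % carrier frequency is an assumption for this model
sweepSlope = 150e6/7e-6;     % slope of the 150 MHz / 7 microsecond sweep (Hz/s)
v = 16.7;                    % closing speed of the Car (m/s)
fd = 2*v*fc/c;               % two-way Doppler shift, about 8.6 kHz
dR = c*fd/(2*sweepSlope)     % apparent range offset, about 0.06 m
```

For this fast sweep, the Doppler-induced range offset is only a few centimeters, well inside the 1 meter range resolution; the offset grows in proportion to the sweep time.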
There are now two target vehicles in the scene, labeled as Car and Truck, and each vehicle has an
associated propagation channel. The Car starts 50 meters in front of the radar vehicle and travels at
a speed of 60 km/h (16.7 m/s). The Truck starts at 150 meters in front of the radar vehicle and travels
at a speed of 130 km/h (36.1 m/s).
Several dialog parameters of the model are calculated by the helper function
helperslexFMCWMultiTargetsParam. To open the function from the model, click the Modify
Simulation Parameters block. This function is executed once when the model is loaded. It exports
to the workspace a structure whose fields are referenced by the dialogs. To modify any parameters,
either change the values in the structure at the command prompt or edit the helper function and
rerun it to update the parameter structure.
The FMCW signal shown below is the same as in the previous model.
The range-Doppler map correctly shows two targets: one at 50 meters and one at 150 meters. Because the radar can
only measure the relative speed, the expected speed values for these two vehicles are 11.1 m/s and
-8.3 m/s, respectively, where the negative sign indicates that the Truck is moving away from the radar
vehicle. The exact speed estimates may be difficult to infer from the range-Doppler map, but the
estimated ranges and speeds are shown numerically in the display blocks in the model on the left. As
can be seen, the speed estimates match the expected values well.
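The expected relative speeds follow from simple differencing of the absolute speeds:

```matlab
vRadar = 100/3.6;            % 27.8 m/s
vCar   = 60/3.6;             % 16.7 m/s
vTruck = 130/3.6;            % 36.1 m/s
relCar   = vRadar - vCar     % 11.1 m/s (closing)
relTruck = vRadar - vTruck   % -8.3 m/s (receding)
```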
To be able to do joint range and speed estimation using the above approach, the sweep needs to be
fairly fast to ensure the vehicle is approximately stationary during the sweep. This often translates to
higher hardware cost. MFSK is a waveform designed specifically for automotive radar to achieve
simultaneous range and speed estimation with longer sweeps.
The example below shows how to use MFSK waveform to perform the range and speed estimation.
The scene setup is the same as the previous model.
The primary differences between this model and the previous are in the waveform block and the
signal processing subsystem. The MFSK waveform essentially consists of two FMCW sweeps with a
fixed frequency offset. The sweep in this case happens at discrete steps. From the parameters of the
MFSK waveform block, the sweep time can be computed as the product of the step time and the
number of steps per sweep. In this example, the sweep time is slightly over 2 ms, several orders of
magnitude longer than the 7 microsecond sweep of the FMCW waveform used in the previous model. For more
information on the MFSK waveform, see the “Simultaneous Range and Speed Estimation Using
MFSK Waveform” on page 17-143 example.
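The sweep-time relationship can be read off the waveform object directly; a sketch with illustrative parameter values (these are assumptions, not necessarily the dialog values of this model):

```matlab
% Illustrative MFSK waveform; the step time and step count are assumptions
mfskwav = phased.MFSKWaveform('SampleRate',151e6,'SweepBandwidth',150e6,...
    'StepTime',4e-6,'StepsPerSweep',512);
sweepTime = mfskwav.StepTime*mfskwav.StepsPerSweep   % about 2 ms
```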
The signal processing subsystem describes how the signal gets processed for the MFSK waveform.
The signal is first sampled at the end of each step and then converted to the frequency domain via an
FFT. A 1-dimensional CA CFAR detector is used to identify the peaks, which correspond to targets, in
the spectrum. Then the frequency at each peak location and the phase difference between the two
sweeps are used to estimate the range and speed of the target vehicles.
Several dialog parameters of the model are calculated by the helper function
helperslexMFSKMultiTargetsParam. To open the function from the model, click the Modify
Simulation Parameters block. This function is executed once when the model is loaded. It exports
to the workspace a structure whose fields are referenced by the dialogs. To modify any parameters,
either change the values in the structure at the command prompt or edit the helper function and
rerun it to update the parameter structure.
The estimated results are shown in the model, matching the results obtained from the previous
model.
One can improve the angular resolution of the radar by using an array of antennas. This example
shows how to resolve three target vehicles traveling in separate lanes ahead of a vehicle carrying an
antenna array.
In this scenario, the radar is traveling in the center lane of a highway at 100 km/h (27.8 m/s). The
first target vehicle is traveling 20 meters ahead in the same lane as the radar at 85 km/h (23.6 m/s).
The second target vehicle is traveling at 125 km/h (34.7 m/s) in the right lane and is 40 meters ahead.
The third target vehicle is traveling at 110 km/h (30.6 m/s) in the left lane and is 80 meters ahead.
The antenna array of the radar vehicle is a 4-element uniform linear array (ULA).
The origin of the scenario coordinate system is at the radar vehicle. The ground truth range, speed,
and angle of the target vehicles with respect to the radar are
            Range (m)    Speed (m/s)    Angle (deg)
    Car 1   20           4.2             0
    Car 2   40.05        -6.9           -2.9
    Car 3   80.03        -2.8            1.4
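The tabulated angles are consistent with a lateral lane offset of about 2 meters, an assumption used here only as a sanity check:

```matlab
laneOffset = 2;                   % assumed lateral offset to adjacent lanes (m)
angCar2 = -atand(laneOffset/40)   % about -2.9 deg (right lane)
angCar3 =  atand(laneOffset/80)   % about 1.4 deg (left lane)
```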
The signal processing subsystem now includes direction of arrival estimation in addition to the range
and Doppler processing.
The processing is very similar to the previously discussed FMCW Multiple Target model. However, in
this model, there are 5 stages instead of 4.
• Stage 1: Similar to the previously discussed FMCW Multiple Target model, this stage performs
dechirping, datacube formation, and range-Doppler processing. The datacube is then passed to
the Range Subset block, thereby obtaining the subset of the datacube that will undergo further
processing.
• Stage 2: The second stage is the Phase Shift Beamformer block where beamforming occurs
based on the specified look angles that are defined in the parameter helper function
helperslexFMCWMultiTargetsDOAParam.
• Stage 3: The third stage is where the detection processing occurs. The detector in this example is
again the CA CFAR 2-D block that operates in both the range and Doppler dimensions.
• Stage 4: Clustering occurs in the DBSCAN Clusterer block using the range, Doppler, and angle
dimensions. Clustering results are then displayed by the Plot Clusters block.
• Stage 5: The fifth and final stage estimates the range and speed of the targets from the range-
Doppler map using the Range Estimator and Doppler Estimator blocks, respectively. In
addition, direction of arrival (DOA) estimation is performed using a custom block that features an
implementation of the Phased Array System Toolbox Root MUSIC Estimator.
Several dialog parameters of the model are calculated by the helper function
helperslexFMCWMultiTargetsDOAParam. To open the function from the model, click the Modify
Simulation Parameters block. This function is executed once when the model is loaded. It exports
to the workspace a structure whose fields are referenced by the dialogs. To modify any parameters,
either change the values in the structure at the command prompt or edit the helper function and
rerun it to update the parameter structure.
The estimated results are shown in the model and match the expected values well.
Summary
The first model shows how to use an FMCW radar to estimate the range of a target vehicle. The
information derived from the echo, such as the distance to the target vehicle, provides necessary
inputs to a complete automotive ACC system.
The example also discusses how to perform combined range-Doppler processing to derive both range
and speed information of target vehicles. However, it is worth noting that when the sweep time is
long, the system capability for estimating the speed is degraded, and it is possible that the joint
processing can no longer provide accurate compensation for range-Doppler coupling. More
discussion on this topic can be found in the MATLAB “Automotive Adaptive Cruise Control Using
FMCW Technology” on page 17-367 example.
The following model shows how to perform the same range and speed estimation using an MFSK
waveform. This waveform can achieve the joint range and speed estimation with longer sweeps, thus
reducing the hardware requirements.
The last model is an FMCW radar featuring an antenna array that performs range, speed, and angle
estimation.
Radar Signal Simulation and Processing for Automated Driving
Introduction
You can model vehicle motion by using the drivingScenario object from Automated Driving
Toolbox. The vehicle ground truth can then be used to generate synthetic sensor detections, which
you can track by using the multiObjectTracker object. For an example of this workflow, see
“Sensor Fusion Using Synthetic Radar and Vision Data in Simulink” (Automated Driving Toolbox). The
automotive radar used in this example uses a statistical model that is parameterized according to
high-level radar specifications. The generic radar architecture modeled in this example does not
include specific antenna configurations, waveforms, or unique channel propagation characteristics.
When designing an automotive radar, or when a radar's specific architecture is known, using a radar
model that includes this additional information is recommended.
Phased Array System Toolbox enables you to evaluate different radar architectures. You can explore
different transmit and receive array configurations, waveforms, and signal processing chains. You can
also evaluate your designs against different channel models to assess their robustness to different
environmental conditions. This modeling helps you to identify the specific design that best fits your
application requirements.
In this example, you learn how to define a radar model from a set of system requirements for a long-
range radar. You then simulate a driving scenario to generate detections from your radar model. A
tracker is used to process these detections to generate precise estimates of the position and velocity
of the vehicles detected by your automotive radar.
The radar parameters are defined for the frequency-modulated continuous wave (FMCW) waveform,
as described in the example “Automotive Adaptive Cruise Control Using FMCW Technology” on page
17-367. The radar operates at a center frequency of 77 GHz. This frequency is commonly used by
automotive radars. For long-range operation, the radar must detect vehicles at a maximum range of
100 meters in front of the ego vehicle. The radar is required to resolve objects in range that are at
least 1 meter apart. Because this is a forward-facing radar application, the radar also needs to handle
targets with large closing speeds, as high as 230 km/hr.
The radar is designed to use an FMCW waveform. These waveforms are common in automotive
applications because they enable range and Doppler estimation through computationally efficient FFT
operations.
% Set the sampling rate to satisfy both the range and velocity requirements
% for the radar. The requirement values are restated here so that the
% snippet is self-contained.
c = physconst('LightSpeed');      % propagation speed (m/s)
fc = 77e9;                        % center frequency (Hz)
rangeMax = 100;                   % maximum range (m)
rangeRes = 1;                     % range resolution (m)
vMax = 230*1000/3600;             % maximum closing speed (m/s)
bw = range2bw(rangeRes,c);        % sweep bandwidth (Hz)
tm = 5*range2time(rangeMax,c);    % sweep time (s)
sweepSlope = bw/tm;               % FMCW sweep slope (Hz/s)
fbeatMax = range2beat(rangeMax,sweepSlope,c);  % maximum beat frequency (Hz)
fdopMax = speed2dop(2*vMax,c/fc); % maximum Doppler shift (Hz)
fs = max(2*(fbeatMax+fdopMax),bw); % sampling rate (Hz)
% Configure the FMCW waveform using the waveform parameters derived from
% the long-range requirements
waveform = phased.FMCWWaveform('SweepTime',tm,'SweepBandwidth',bw,...
'SampleRate',fs,'SweepDirection','Up');
if strcmp(waveform.SweepDirection,'Down')
sweepSlope = -sweepSlope;
end
Nsweep = 192;
The radar uses a uniform linear array (ULA) to transmit and receive the radar waveforms. Using a
linear array enables the radar to estimate the azimuthal direction of the reflected energy received
from the target vehicles. The long-range radar needs to detect targets across a coverage area that
spans 15 degrees in front of the ego vehicle. A 6-element receive array satisfies this requirement by
providing a 16 degree half-power beamwidth. On transmit, the radar uses only a single array element,
enabling it to cover a larger area than on receive.
hpbw = 16.3636
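The printed beamwidth can be checked against the classical rule of thumb for a half-wavelength-spaced ULA, HPBW ≈ 0.886·λ/(N·d) radians, which is close to, though not identical with, the value computed by the toolbox:

```matlab
N = 6;                            % receive elements, half-wavelength spacing
hpbwApprox = rad2deg(0.886*2/N)   % about 16.9 deg, near the value printed above
```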
Estimate the direction-of-arrival of the received signals using a root MUSIC estimator. A beamscan is
also used for illustrative purposes to help spatially visualize the distribution of the received signal
energy.
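A sketch of the two estimators, assuming the 6-element receive ULA is held in a variable named `rxarray` (the variable name and the scan-angle span are assumptions):

```matlab
% Root MUSIC DOA estimator for the receive ULA
doaest = phased.RootMUSICEstimator('SensorArray',rxarray,...
    'PropagationSpeed',c,'OperatingFrequency',fc,...
    'NumSignalsSource','Property','NumSignals',1);
% Beamscan estimator, used only to visualize the spatial distribution of
% the received energy
beamscan = phased.BeamscanEstimator('SensorArray',rxarray,...
    'PropagationSpeed',c,'OperatingFrequency',fc,...
    'ScanAngles',-30:30,'DOAOutputPort',true,'NumSignals',1);
```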
Model the radar transmitter for a single transmit channel, and model a receiver preamplifier for each
receive channel, using the parameters defined in the example “Automotive Adaptive Cruise Control
Using FMCW Technology” on page 17-367.
% Waveform transmitter
transmitter = phased.Transmitter('PeakPower',txPkPower,'Gain',txGain);
% Receiver preamplifier
receiver = phased.ReceiverPreamp('Gain',rxGain,'NoiseFigure',rxNF,...
'SampleRate',fs);
The radar collects multiple sweeps of the waveform on each of the linear phased array antenna
elements. These collected sweeps form a data cube, which is defined in “Radar Data Cube”. These
sweeps are coherently processed along the fast- and slow-time dimensions of the data cube to
estimate the range and Doppler of the vehicles.
Use the phased.RangeDopplerResponse object to perform the range and Doppler processing on
the radar data cubes. Use a Hanning window to suppress the large sidelobes produced by the
vehicles when they are close to the radar.
% Range and Doppler FFT lengths (powers of two covering the data cube)
Nr = 2^nextpow2(waveform.SweepTime*waveform.SampleRate);
Nd = 2^nextpow2(Nsweep);
rngdopresp = phased.RangeDopplerResponse('RangeMethod','FFT',...
    'DopplerOutput','Speed','SweepSlope',sweepSlope,...
    'RangeFFTLengthSource','Property','RangeFFTLength',Nr,...
    'RangeWindow','Hann',...
    'DopplerFFTLengthSource','Property','DopplerFFTLength',Nd,...
    'DopplerWindow','Hann',...
    'PropagationSpeed',c,'OperatingFrequency',fc,'SampleRate',fs);
Identify detections in the processed range and Doppler data by using a constant false alarm rate
(CFAR) detector. CFAR detectors estimate the background noise level of the received radar data.
Detections are found at locations where the signal power exceeds the estimated noise floor by a
certain threshold. Low threshold values result in a higher number of reported false detections due to
environmental noise. Increasing the threshold produces fewer false detections, but also reduces the
probability of detection of an actual target in the scenario. For more information on CFAR detection,
see the example “Constant False Alarm Rate (CFAR) Detection” on page 17-279.
% Perform CFAR processing over all of the range and Doppler cells
% (the guard and training band sizes below are illustrative values)
nGuardRng = 4; nTrainRng = 4; nGuardDop = 4; nTrainDop = 4;
cfar = phased.CFARDetector2D('GuardBandSize',[nGuardRng nGuardDop],...
    'TrainingBandSize',[nTrainRng nTrainDop],...
    'NoisePowerOutputPort',true,'OutputFormat','Detection index');
% Skip edge cells whose training bands would fall outside the map
nCUTRng = 1+nGuardRng+nTrainRng;
nCUTDop = 1+nGuardDop+nTrainDop;
freqs = ((0:Nr-1)'/Nr-0.5)*fs;
rnggrid = beat2range(freqs,sweepSlope);
iRngCUT = find(rnggrid>0);
iRngCUT = iRngCUT((iRngCUT>=nCUTRng)&(iRngCUT<=Nr-nCUTRng+1));
iDopCUT = nCUTDop:(Nd-nCUTDop+1);
[iRng,iDop] = meshgrid(iRngCUT,iDopCUT);
idxCFAR = [iRng(:) iDop(:)]';
The root-mean-square (RMS) range resolution of the transmitted waveform is needed to compute the
variance of the range measurements. The Rayleigh range resolution for the long-range radar was
defined previously as 1 meter. The Rayleigh resolution is the minimum separation at which two
unique targets can be resolved. This value defines the distance between range resolution cells for the
radar. However, the variance of the target within a resolution cell is determined by the waveform's
RMS resolution. For an LFM chirp waveform, the relationship between the Rayleigh resolution and
the RMS resolution is given by [1].
σRMS = √12 · ΔRayleigh

where σRMS is the RMS range resolution and ΔRayleigh is the Rayleigh range resolution.
The variance of the Doppler measurements depends on the number of sweeps processed.
Now, create the range and Doppler estimation objects using the parameters previously defined.
rmsRng = sqrt(12)*rangeRes;
rngestimator = phased.RangeEstimator('ClusterInputPort',true,...
'VarianceOutputPort',true,'NoisePowerSource','Input port',...
'RMSResolution',rmsRng);
dopestimator = phased.DopplerEstimator('ClusterInputPort',true,...
'VarianceOutputPort',true,'NoisePowerSource','Input port',...
'NumPulses',Nsweep);
To further improve the precision of the estimated vehicle locations, pass the radar's detections to a
tracker. Configure the tracker to use an extended Kalman filter (EKF), which converts the spherical
radar measurements into the ego vehicle's Cartesian coordinate frame. Also configure the tracker to
use constant velocity dynamics for the detected vehicles. By comparing vehicle detections over
multiple measurement time intervals, the tracker further improves the accuracy of the vehicle
positions and provides vehicle velocity estimates.
tracker = multiObjectTracker('FilterInitializationFcn',@initcvekf,...
'AssignmentThreshold',50);
Use the free space channel to model the propagation of the transmitted and received radar signals.
channel = phased.FreeSpace('PropagationSpeed',c,'OperatingFrequency',fc,...
'SampleRate',fs,'TwoWayPropagation',true);
In a free space model, the radar energy propagates along a direct line-of-sight between the radar and
the target vehicles, as shown in the following illustration.
Create a highway driving scenario with three vehicles traveling in the vicinity of the ego vehicle. The
vehicles are modeled as point targets, and have different velocities and positions defined in the
driving scenario. The ego vehicle is moving with a velocity of 80 km/hr and the other three cars are
moving at 110 km/hr, 100 km/hr, and 130 km/hr respectively. For details on modeling a driving
scenario see the example “Create Actor and Vehicle Trajectories Programmatically” (Automated
Driving Toolbox). The radar sensor is mounted on the front of the ego vehicle.
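A minimal sketch of such a scenario with the Automated Driving Toolbox API; the waypoint geometry is an assumption chosen only to illustrate the calls:

```matlab
% Minimal highway scenario sketch (waypoint geometry is assumed)
scenario = drivingScenario;
egoCar = vehicle(scenario,'ClassID',1);
trajectory(egoCar,[0 0 0; 500 0 0],80*1000/3600);     % ego at 80 km/h
leadCar = vehicle(scenario,'ClassID',1);
trajectory(leadCar,[40 0 0; 540 0 0],110*1000/3600);  % lead car at 110 km/h
while advance(scenario)
    poses = targetPoses(egoCar);   % target ground truth in ego coordinates
end
```

By default, the simulation stops when the first actor completes its trajectory, so the loop terminates on its own.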
The following loop uses the drivingScenario object to advance the vehicles in the scenario. At
every simulation time step, a radar data cube is assembled by collecting 192 sweeps of the radar
waveform. The assembled data cube is then processed in range and Doppler. The range and Doppler
processed data is then beamformed, and CFAR detection is performed on the beamformed data.
Range, radial speed, and direction of arrival measurements are estimated for the CFAR detections.
These detections are then assembled into objectDetection objects, which are then processed by
the multiObjectTracker object.
tgtProfiles = actorProfiles(scenario);
tgtProfiles = tgtProfiles(2:end);
tgtHeight = [tgtProfiles.Height];
tgtsig = pointTgts(txsig,tgtAngs);
% Detect targets
Xpow = abs(Xbf).^2;
[detidx,noisepwr] = cfar(Xpow,idxCFAR);
% Cluster detections
clusterIDs = helperAutoDrivingRadarSigProc('Cluster Detections',detidx);
[rngest,rngvar] = rngestimator(Xbf,rnggrid,detidx,noisepwr,clusterIDs);
rngvar = rngvar+radarParams.RMSBias(2)^2;
[rsest,rsvar] = dopestimator(Xbf,dopgrid,detidx,noisepwr,clusterIDs);
% Track detections
tracks = tracker(dets,time);
% Update displays
helperAutoDrivingRadarSigProc('Update Display',egoCar,dets,tracks,...
dopgrid,rnggrid,Xbf,beamscan,Xrngdop);
% Publish snapshot
helperAutoDrivingRadarSigProc('Publish Snapshot',time>=1.1);
The previous figure shows the radar detections and tracks for the 3 target vehicles at 1.1 seconds of
simulation time. The plot on the upper-left side shows the chase camera view of the driving scenario
from the perspective of the ego vehicle (shown in blue). For reference, the ego vehicle is traveling at
80 km/hr and the other three cars are traveling at 110 km/hr (orange car), 100 km/hr (yellow car),
and 130 km/hr (purple car).
The right side of the figure shows the bird's-eye plot, which presents a "top down" perspective of the
scenario. All of the vehicles, detections, and tracks are shown in the ego vehicle's coordinate
reference frame. The estimated signal-to-noise ratio (SNR) for each radar measurement is printed
next to each detection. The vehicle location estimated by the tracker is shown in the plot using black
squares with text next to them indicating each track's ID. The tracker's estimated velocity for each
vehicle is shown as a black line pointing in the direction of the vehicle's velocity. The length of the
line corresponds to the estimated speed, with longer lines denoting vehicles with higher speeds
relative to the ego vehicle. The purple car's track (ID2) has the longest line while the yellow car's
track (ID1) has the shortest line. The tracked speeds are consistent with the modeled vehicle speeds
previously listed.
The two plots on the lower-left side show the radar images generated by the signal processing. The
upper plot shows how the received radar echos from the target vehicles are distributed in range and
radial speed. Here, all three vehicles are observed. The measured radial speeds correspond to the
velocities estimated by the tracker, as shown in the bird's-eye plot. The lower plot shows how the
received target echos are spatially distributed in range and angle. Again, all three targets are
present, and their locations match what is shown in the bird's-eye plot.
Because of its close proximity to the radar, the orange car can still be detected despite the large
beamforming losses caused by its position well outside the beam's 3 dB beamwidth. These detections
have generated a track (ID3) for the orange car.
The previous driving scenario simulation used free space propagation. This is a simple model that
models only direct line-of-sight propagation between the radar and each of the targets. In reality, the
radar signal propagation is much more complex, involving reflections from multiple obstacles before
reaching each target and returning back to the radar. This phenomenon is known as multipath
propagation. The following illustration shows one such case of multipath propagation, where the
signal impinging the target is coming from two directions: line-of-sight and a single bounce from the
road surface.
The overall effect of multipath propagation is that the received radar echoes can interfere
constructively and destructively. This constructive and destructive interference results from path
length differences between the various signal propagation paths. As the distance between the radar
and the vehicles changes, these path length differences also change. When the differences between
these paths result in echos received by the radar that are almost 180 degrees out of phase, the echos
destructively combine, and the radar makes no detection for that range.
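The path-length difference behind this effect can be sketched numerically; the antenna and target heights below are assumptions used only for illustration:

```matlab
% Two-ray path-length difference (heights are assumed for illustration)
fc = 77e9; c = 3e8; lambda = c/fc;
hRadar = 0.5; hTgt = 0.5;            % antenna and target heights (m), assumed
R = 72;                              % ground range (m)
dDirect = sqrt(R^2+(hRadar-hTgt)^2); % line-of-sight path length
dBounce = sqrt(R^2+(hRadar+hTgt)^2); % ground-bounce path length
phaseDiff = 2*pi*(dBounce-dDirect)/lambda;  % one-way phase difference (rad)
```

At 77 GHz the wavelength is only about 4 mm, so millimeter-scale changes in the path difference swing the echoes between constructive and destructive combination as the range changes.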
Replace the free space channel model with a two-ray channel model to demonstrate the propagation
environment shown in the previous illustration. Reuse the remaining parameters in the driving
scenario and radar model, and run the simulation again.
% Create two-ray propagation channels. One channel is used for the transmit
% path and one for the receive path. (The construction below restates what
% the text implies; the parameters follow the earlier free space channel.)
txchannel = phased.TwoRayChannel('PropagationSpeed',c,...
    'OperatingFrequency',fc,'SampleRate',fs);
rxchannel = phased.TwoRayChannel('PropagationSpeed',c,...
    'OperatingFrequency',fc,'SampleRate',fs);
% Run the simulation again, now using the two-ray channel model
metrics2Ray = helperAutoDrivingRadarSigProc('Run Simulation',...
c,fc,rangeMax,vMax,waveform,Nsweep,... % Waveform parameters
transmitter,radiator,collector,receiver,... % Hardware models
rngdopresp,beamformer,cfar,idxCFAR,... % Signal processing
rngestimator,dopestimator,doaest,beamscan,tracker,... % Estimation
txchannel,rxchannel); % Propagation channel models
The previous figure shows the chase plot, bird's-eye plot, and radar images at 1.1 seconds of
simulation time, just as was shown for the free space channel propagation scenario. Comparing these
two figures, observe that for the two-ray channel, no detection is present for the purple car at this
simulation time. This detection loss is because the path length differences for this car are
destructively interfering at this range, resulting in a total loss of detection.
Plot the SNR estimates generated from the CFAR processing against the purple car's range estimates
from the free space and two-ray channel simulations.
helperAutoDrivingRadarSigProc('Plot Channels',metricsFS,metrics2Ray);
As the car approaches a range of 72 meters from the radar, a large loss in the estimated SNR from
the two-ray channel is observed with respect to the free space channel. It is near this range that the
multipath interference combines destructively, resulting in a loss in signal detections. However,
observe that the tracker is able to coast the track during these times of signal loss and provide a
predicted position and velocity for the purple car.
Summary
This example demonstrated how to model an automotive radar's hardware and signal processing
using Phased Array System Toolbox. You also learned how to integrate this radar model with the
Automated Driving Toolbox driving scenario simulation. First you generated synthetic radar
detections. Then you processed these detections further by using a tracker to generate precise
position and velocity estimates in the ego vehicle's coordinate frame. Finally, you learned how to
simulate multipath propagation effects by using the phased.TwoRayChannel model provided in the
Phased Array System Toolbox.
The presented workflow enables you to understand how your radar architecture design decisions
impact higher-level system requirements. Using this workflow enables you to select a radar design that
satisfies your unique application requirements.
Reference
[1] Richards, Mark. Fundamentals of Radar Signal Processing. New York: McGraw Hill, 2005.
Improve SNR and Capacity of Wireless Communication Using Antenna Arrays
Introduction
Antenna arrays have become part of the standard configuration in 5G wireless communication
systems. Because there are multiple elements in an antenna array, such wireless communications
systems are often referred to as multiple input multiple output (MIMO) systems. Antenna arrays can
help improve the SNR by exploiting the redundancy across the multiple transmit and receive
channels. They also make it possible to exploit the spatial information in the system to improve
coverage.
For this example, assume the system is deployed at 60 GHz, which is a frequency being considered
for the 5G system.
% Carrier frequency and wavelength, defined here so that later code in this
% example is self-contained
fc = 60e9;
c = 3e8;
lambda = c/fc;
rng(6466);
With no loss in generality, place the transmitter at the origin and place the receiver approximately 1.6
km away.
txcenter = [0;0;0];
rxcenter = [1500;500;0];
Throughout this example, the scatteringchanmtx function will be used to create a channel matrix
for different transmit and receive array configurations. The function simulates multiple scatterers
between the transmit and receive arrays. The signal travels from the transmit array to all the
scatterers first and then bounces off the scatterers and arrives at the receive array. Therefore, each
scatterer defines a signal path between the transmit and the receive array and the resulting channel
matrix describes a multipath environment. The function works with antenna arrays of arbitrary size
at any designated frequency band.
The simplest wireless channel is line-of-sight (LOS) propagation. Although simple, such a channel can
often be found in rural areas. Adopting an antenna array in such a situation can improve the signal-
to-noise ratio at the receiver and, in turn, improve the communication link's bit error rate (BER).
Before discussing the performance of a MIMO system, it is useful to build a baseline with a single
input single output (SISO) communication system. A SISO LOS channel has a direct path from the
transmitter to the receiver. Such a channel can be modeled as a special case of the multipath
channel.
[~,txang] = rangeangle(rxcenter,txcenter);
[~,rxang] = rangeangle(txcenter,rxcenter);
txsipos = [0;0;0];
rxsopos = [0;0;0];
g = 1; % direct path gain
sisochan = scatteringchanmtx(txsipos,rxsopos,txang,rxang,g);
Using BPSK modulation, the bit error rate (BER) for such a SISO channel can be plotted as
Nsamp = 1e6;
x = randi([0 1],Nsamp,1);
ebn0_param = -10:2:10;
Nsnr = numel(ebn0_param);
ber_siso = helperMIMOBER(sisochan,x,ebn0_param)/Nsamp;
helperBERPlot(ebn0_param,ber_siso);
legend('SISO')
With the baseline established for a SISO system, this section focuses on the single input multiple
output (SIMO) system. In such system, there is one transmit antenna but multiple receive antennas.
Again, it is assumed that there is a direct path between the transmitter and the receiver.
Assume the receive array is a 4-element ULA with half-wavelength spacing. Then the SIMO channel
can be modeled as
rxarray = phased.ULA('NumElements',4,'ElementSpacing',lambda/2);
rxmopos = getElementPosition(rxarray)/lambda;
simochan = scatteringchanmtx(txsipos,rxmopos,txang,rxang,g);
In the SIMO system, because the received signals across receive array elements are coherent, it is
possible to steer the receive array toward the transmitter to improve the SNR. Note that this assumes
that the signal incoming direction is known to the receiver. In reality, the angle is often obtained
using direction of arrival estimation algorithms.
rxarraystv = phased.SteeringVector('SensorArray',rxarray,...
'PropagationSpeed',c);
wr = conj(rxarraystv(fc,rxang));
ber_simo = helperMIMOBER(simochan,x,ebn0_param,1,wr)/Nsamp;
helperBERPlot(ebn0_param,[ber_siso(:) ber_simo(:)]);
legend('SISO','SIMO')
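The roughly constant horizontal shift between the two BER curves corresponds to the receive array gain, which for an N-element array receiving a coherent signal is 10·log10(N):

```matlab
N = 4;                   % receive elements
gain_dB = 10*log10(N)    % about 6 dB of SNR improvement for the SIMO curve
```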
The multiple input single output (MISO) system works in a similar way. In this case, the transmitter is
a 4-element ULA with half-wavelength spacing.
txarray = phased.ULA('NumElements',4,'ElementSpacing',lambda/2);
txmipos = getElementPosition(txarray)/lambda;
misochan = scatteringchanmtx(txmipos,rxsopos,txang,rxang,g);
A line-of-sight MISO system achieves the best SNR when the transmitter has knowledge of the
receiver location and steers the beam toward the receiver. In addition, for a fair comparison with the
SISO system, the total transmit power should be the same in both situations.
txarraystv = phased.SteeringVector('SensorArray',txarray,...
'PropagationSpeed',c);
wt = txarraystv(fc,txang)';
ber_miso = helperMIMOBER(misochan,x,ebn0_param,wt,1)/Nsamp;
helperBERPlot(ebn0_param,[ber_siso(:) ber_simo(:) ber_miso(:)]);
legend('SISO','SIMO','MISO')
Note that with the pre-steering, the performance of the MISO system matches that of the SIMO
system, gaining 6 dB in SNR. This may be less intuitive than the SIMO case because the total
transmit power does not increase. Nevertheless, by replacing the single isotropic transmit antenna
with a 4-element transmit array, a 6 dB array gain is achieved.
Because a SIMO system provides an array gain from the receive array and a MISO system provides
an array gain from the transmit array, a MIMO system with LOS propagation can benefit from both
the transmit and receive array gains.
Assume a MIMO system with a 4-element transmit array and a 4-element receive array.
mimochan = scatteringchanmtx(txmipos,rxmopos,txang,rxang,g);
To achieve the best SNR, the transmit array and the receive array need to be steered toward each
other. With this configuration, the BER curve can be computed as
wt = txarraystv(fc,txang)';
wr = conj(rxarraystv(fc,rxang));
ber_mimo = helperMIMOBER(mimochan,x,ebn0_param,wt,wr)/Nsamp;
helperBERPlot(ebn0_param,[ber_siso(:) ber_simo(:) ber_mimo(:)]);
legend('SISO','SIMO','MIMO')
As expected, the BER curve shows that the transmit array and the receive array each contribute a 6
dB array gain, resulting in a total gain of 12 dB over the SISO case.
All the channels in the previous sections are line-of-sight channels. Although such channels are found
in some wireless communication systems, wireless communication generally occurs in multipath
fading environments. The rest of this example explores how arrays can help in a multipath
environment.
Assume there are 10 randomly placed scatterers in the channel, then there will be 10 paths from the
transmitter to the receiver, as illustrated in the following figure.
Nscat = 10;
[~,~,~,scatpos] = ...
helperComputeRandomScatterer(txcenter,rxcenter,Nscat);
helperPlotSpatialMIMOScene(txsipos,rxsopos,...
txcenter,rxcenter,scatpos);
For simplicity, assume that signals traveling along all paths arrive within the same symbol period so
the channel is frequency flat.
To simulate the BER curve for a fading channel, the channel needs to change over time. Assume we
have 1000 frames and each frame has 10000 bits. The baseline SISO multipath channel BER curve is
constructed as
Nframe = 1e3;
Nbitperframe = 1e4;
Nsamp = Nframe*Nbitperframe;
x = randi([0 1],Nbitperframe,1);
nerr = zeros(1,Nsnr);
for m = 1:Nframe
sisompchan = scatteringchanmtx(txsipos,rxsopos,Nscat);
wr = sisompchan'/norm(sisompchan);
nerr = nerr+helperMIMOBER(sisompchan,x,ebn0_param,1,wr);
end
ber_sisomp = nerr/Nsamp;
helperBERPlot(ebn0_param,[ber_siso(:) ber_sisomp(:)]);
legend('SISO LOS','SISO Multipath');
Compared to the BER curve derived from an LOS channel, the BER falls off much more slowly with
increasing energy per bit to noise power spectral density ratio (Eb/N0) because of the fading caused
by multipath propagation.
As more receive antennas are used in the receive array, more copies of the received signals are
available at the receiver. Again, assume a 4-element ULA at the receiver.
The optimal combining weights can be derived by matching the channel response. Such a combining
scheme is often termed maximum ratio combining (MRC). Although in theory such a scheme
requires knowledge of the channel, in practice the channel response can often be estimated at the
receive array.
nerr = zeros(1,Nsnr);
for m = 1:Nframe
simompchan = scatteringchanmtx(txsipos,rxmopos,Nscat);
wr = simompchan'/norm(simompchan);
nerr = nerr+helperMIMOBER(simompchan,x,ebn0_param,1,wr);
end
ber_simomp = nerr/Nsamp;
helperBERPlot(ebn0_param,[ber_sisomp(:) ber_simomp(:)]);
legend('SISO Multipath','SIMO Multipath');
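The effect of MRC can be checked outside the simulation: with weights w = h*/‖h‖, the output SNR equals ‖h‖² times the per-branch SNR, which is never worse than picking the single strongest branch. A small NumPy sketch with one hypothetical Rayleigh channel draw:

```python
import numpy as np

rng = np.random.default_rng(7)
Nr = 4
# Hypothetical Rayleigh-fading SIMO channel (one realization)
h = (rng.standard_normal(Nr) + 1j * rng.standard_normal(Nr)) / np.sqrt(2)

w = np.conj(h) / np.linalg.norm(h)     # MRC weights, unit norm
snr_mrc = np.abs(w @ h) ** 2           # = ||h||^2 for unit symbol and noise power
snr_best_branch = np.max(np.abs(h) ** 2)
print(snr_mrc >= snr_best_branch)      # MRC never loses to branch selection
```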
Note that the received signal is no longer weighted by a steering vector toward a specific direction.
Instead, the receive array weights are given by the complex conjugate of the channel response;
otherwise, multipath could make the received signal arrive out of phase with the transmitted signal.
This assumes that the channel response is known to the receiver. If the channel response is unknown,
pilot signals can be used to estimate it.
The BER curves show that not only does the SIMO system provide an SNR gain compared to the
SISO system, but the slope of the SIMO BER curve is also steeper than that of the SISO curve. The
gain resulting from the slope change is often referred to as the diversity gain.
Things get more interesting when there is multipath propagation in a MISO system. First, if the
channel is known to the transmitter, then the strategy to improve the SNR is similar to maximum
ratio combining: the signal radiated from each element of the transmit array is weighted so that the
propagated signals add coherently at the receiver.
nerr = zeros(1,Nsnr);
for m = 1:Nframe
misompchan = scatteringchanmtx(txmipos,rxsopos,Nscat);
wt = misompchan'/norm(misompchan);
nerr = nerr+helperMIMOBER(misompchan,x,ebn0_param,wt,1);
end
ber_misomp = nerr/Nsamp;
Note the transmit diversity gain shown in the BER curve. Compared to the SIMO multipath case, the
performance of a MISO multipath system is not as good. This is because there is only one copy of the
received signal, yet the transmit power is spread among multiple paths. It is certainly possible to
amplify the signal at the transmit side to achieve an equivalent gain, but that introduces additional
cost.
If the channel is not known to the transmitter, there are still ways to exploit diversity via space-time
coding. For example, the Alamouti code is a well-known coding scheme that achieves diversity gain
when the channel is not known. Interested readers are encouraged to explore the Introduction to
MIMO Systems example in Communications Toolbox™.
The rest of this example focuses on a multipath MIMO channel. In particular, this section illustrates
the case where the number of scatterers in the environment is larger than the number of elements in
the transmit and receive arrays. Such an environment is often termed a rich scattering environment.
Before diving into the specific performance measures, it is helpful to get a quick illustration of what
the channel looks like. The following helper function creates a 4x4 MIMO channel where both
transmitter and receiver are 4-element ULAs.
[txang,rxang,scatg,scatpos] = ...
helperComputeRandomScatterer(txcenter,rxcenter,Nscat);
mimompchan = scatteringchanmtx(txmipos,rxmopos,txang,rxang,scatg);
There are multiple paths available between the transmit array and the receive array because of the
existence of the scatterers. Each path consists of a single bounce off the corresponding scatterer.
helperPlotSpatialMIMOScene(txmipos,rxmopos,txcenter,rxcenter,scatpos);
There are two ways to take advantage of a MIMO channel. The first way is to explore the diversity
gain offered by a MIMO channel. Assuming the channel is known, the following figure shows the
diversity gain with the BER curve.
nerr = zeros(1,Nsnr);
for m = 1:Nframe
mimompchan = scatteringchanmtx(txmipos,rxmopos,Nscat);
[u,s,v] = svd(mimompchan);
wt = u(:,1)';
wr = v(:,1);
nerr = nerr+helperMIMOBER(mimompchan,x,ebn0_param,wt,wr);
end
ber_mimomp = nerr/Nsamp;
Compare the BER curve from the MIMO channel with the BER curve obtained from the SIMO system.
In the multipath case, the diversity gain from a MIMO channel is not necessarily better than the
diversity gain provided by a SIMO channel. This is because, to obtain the best diversity gain, only the
dominant mode of the MIMO channel is used, leaving the other modes of the channel unused. So is
there an alternative way to utilize the channel?
The answer to the previous question lies in a scheme called spatial multiplexing. The idea behind
spatial multiplexing is that a MIMO multipath channel in a rich scattering environment can carry
multiple data streams simultaneously. For example, the channel matrix of a 4x4 MIMO channel
becomes full rank because of the scatterers, which means that it is possible to send as many as 4
data streams at once. The goal of spatial multiplexing is less about increasing the SNR and more
about increasing the information throughput.
The idea of spatial multiplexing is to separate the channel matrix into multiple modes so that the data
streams sent from different elements of the transmit array can be independently recovered from the
received signal. To achieve this, the data streams are precoded before transmission and then
combined after reception. The precoding and combining weights can be computed from the
channel matrix by
[wp,wc] = diagbfweights(mimompchan);
To see why the combination of the precoding and combining weights can help transmit multiple data
streams at the same time, examine the product of the weights and the channel matrix.
wp*mimompchan*wc
ans =
Note that the product is a diagonal matrix, which means that the signal recovered at each receive
port is simply a scaled version of the corresponding transmitted data stream. The channel thus
behaves like a set of orthogonal subchannels within the original channel. The first subchannel
corresponds to the dominant transmit and receive directions, so there is no loss in diversity gain. In
addition, the other subchannels can now carry information too, as shown in the BER curves for the
first two subchannels.
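The diagonalization performed by diagbfweights can be reproduced from the SVD of the channel matrix: writing H = U·S·V', taking the precoding weights from U' and the combining weights from V makes the effective channel wp·H·wc diagonal. A NumPy sketch using the same transmit-rows convention as scatteringchanmtx (H is Nt-by-Nr), with a hypothetical random channel:

```python
import numpy as np

rng = np.random.default_rng(42)
Nt, Nr = 4, 4
# Hypothetical 4x4 multipath channel, tx-rows convention: y = x @ H
H = (rng.standard_normal((Nt, Nr)) + 1j * rng.standard_normal((Nt, Nr))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
wp = U.conj().T              # precoding weights applied at the transmit side
wc = Vh.conj().T             # combining weights applied at the receive side

D = wp @ H @ wc              # effective channel: diagonal of singular values
off_diag = D - np.diag(np.diag(D))
```

The off-diagonal entries of D vanish, so each stream rides its own subchannel with gain equal to the corresponding singular value.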
Ntx = 4;
Nrx = 4;
x = randi([0 1],Nbitperframe,Ntx);
nerr = zeros(Nrx,Nsnr);
for m = 1:Nframe
mimompchan = scatteringchanmtx(txmipos,rxmopos,Nscat);
[wp,wc] = diagbfweights(mimompchan);
nerr = nerr+helperMIMOMultistreamBER(mimompchan,x,ebn0_param,wp,wc);
end
ber_mimompdiag = nerr/Nsamp;
helperBERPlot(ebn0_param,[ber_sisomp(:) ber_mimomp(:)...
ber_mimompdiag(1,:).' ber_mimompdiag(2,:).']);
legend('SISO LOS','MIMO Multipath','MIMO Multipath Stream 1',...
'MIMO Multipath Stream 2');
Although the second stream cannot provide as high a gain as the first stream because it uses a less
dominant subchannel, the overall information throughput is improved. Therefore, the next section
measures the performance by the channel capacity instead of the BER curve.
The most intuitive way to transmit data in a MIMO system is to split the power uniformly among the
transmit elements. However, the channel capacity can be further improved if the channel is known at
the transmitter. In this case, the transmitter can use the waterfill algorithm to transmit only in the
subchannels where a satisfactory SNR can be obtained. The following figure compares the system
capacity of the two power distribution schemes. The result confirms that the waterfill algorithm
provides a better system capacity than the uniform power distribution, and the difference shrinks as
the system-level SNR improves.
C_mimo_cu = zeros(1,Nsnr);
C_mimo_ck = zeros(1,Nsnr);
Ntrial = 1000;
for m = 1:Nsnr
for n = 1:Ntrial
mimompchan = scatteringchanmtx(txmipos,rxmopos,Nscat);
N0 = db2pow(-ebn0_param(m));
[~,~,~,~,cu] = diagbfweights(mimompchan,1,N0,'uniform');
[~,~,~,~,ck] = diagbfweights(mimompchan,1,N0,'waterfill');
C_mimo_cu(m) = C_mimo_cu(m)+cu;
C_mimo_ck(m) = C_mimo_ck(m)+ck;
end
end
C_mimo_cu = C_mimo_cu/Ntrial;
C_mimo_ck = C_mimo_ck/Ntrial;
plot(ebn0_param,C_mimo_cu(:),'-*',ebn0_param,C_mimo_ck(:),'-^');
xlabel('SNR (dB)');
ylabel('Capacity (bps/Hz)');
legend('Uniform Power Distribution','Waterfill Power Distribution');
grid on;
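The waterfill allocation can be sketched directly: sort the subchannel gains, then find the largest set of subchannels for which a common "water level" yields nonnegative powers. The following NumPy sketch (with hypothetical gains, not the toolbox implementation) confirms that waterfilling never yields less capacity than a uniform split:

```python
import numpy as np

def waterfill(gains, Pt, N0):
    """Classic water-filling power allocation over parallel subchannels.

    gains : subchannel power gains (e.g., squared singular values)
    Pt    : total transmit power budget
    N0    : noise power per subchannel
    Returns per-subchannel powers (zero where the channel is too weak).
    """
    gains = np.asarray(gains, dtype=float)
    idx = np.argsort(gains)[::-1]              # strongest subchannels first
    g = gains[idx]
    p = np.zeros_like(g)
    for k in range(len(g), 0, -1):
        # Candidate water level using the k strongest subchannels
        mu = (Pt + np.sum(N0 / g[:k])) / k
        pk = mu - N0 / g[:k]
        if pk[-1] >= 0:                        # all k allocations nonnegative: done
            p[:k] = pk
            break
    out = np.zeros_like(p)
    out[idx] = p                               # restore original ordering
    return out

gains = np.array([4.0, 1.0, 0.25, 0.01])       # hypothetical subchannel gains
Pt, N0 = 1.0, 1.0
p_wf = waterfill(gains, Pt, N0)
p_uni = np.full(4, Pt / 4)
C_wf = np.sum(np.log2(1 + p_wf * gains / N0))
C_uni = np.sum(np.log2(1 + p_uni * gains / N0))
```

At this low SNR the weakest subchannels receive no power at all, which is exactly where waterfilling beats the uniform split; as SNR grows, all subchannels fill and the two schemes converge, matching the capacity curves above.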
For more details on spatial multiplexing and its detection techniques, refer to the Spatial Multiplexing
example in Communications Toolbox.
Finally, it is worth looking at how these different ways of using arrays relate to each other. Starting
from the LOS channel, as mentioned in the previous sections, the benefit provided by the array is an
improvement in the SNR.
[~,txang] = rangeangle(rxcenter,txcenter);
[~,rxang] = rangeangle(txcenter,rxcenter);
mimochan = scatteringchanmtx(txmipos,rxmopos,txang,rxang,1);
wt = txarraystv(fc,txang)';
wr = conj(rxarraystv(fc,rxang));
helperPlotSpatialMIMOScene(txmipos,rxmopos,txcenter,rxcenter,...
(txcenter+rxcenter)/2,wt,wr)
It is clear from the sketch that, in this case, the transmit and receive weights form two beams that
point toward each other. Thus, the array gain is achieved by beamforming. On the other hand, the
corresponding sketch for a MIMO channel looks like the following figure.
[txang,rxang,scatg,scatpos] = ...
helperComputeRandomScatterer(txcenter,rxcenter,Nscat);
mimompchan = scatteringchanmtx(txmipos,rxmopos,txang,rxang,scatg);
[wp,wc] = diagbfweights(mimompchan);
helperPlotSpatialMIMOScene(txmipos,rxmopos,txcenter,rxcenter,...
scatpos,wp(1,:),wc(:,1))
Note that the figure depicts only the pattern for the first data stream, but it is nevertheless clear that
the pattern no longer necessarily has a dominant main beam. However, if the number of scatterers is
reduced to one, the scene becomes
[txang,rxang,scatg,scatpos] = ...
helperComputeRandomScatterer(txcenter,rxcenter,1);
mimompchan = scatteringchanmtx(txmipos,rxmopos,txang,rxang,scatg);
[wp,wc] = diagbfweights(mimompchan);
helperPlotSpatialMIMOScene(txmipos,rxmopos,txcenter,rxcenter,...
scatpos,wp(1,:),wc(:,1))
Therefore, the LOS channel case, or more precisely, the one-scatterer case, can be considered a
special case of precoding. When there is only one path available between the transmit and receive
arrays, the precoding degenerates to a beamforming scheme.
Summary
This example explains how array processing can be used to improve the quality of a MIMO wireless
communication system. Depending on the nature of the channel, the arrays can be used to either
improve the SNR via array gain or diversity gain, or improve the capacity via spatial multiplexing.
The example also shows how to use functions like scatteringchanmtx and diagbfweights to
simulate those scenarios. For more information on MIMO systems modeling, interested readers may
refer to examples provided in Communications Toolbox™.
Reference
[1] Tse, David, and Pramod Viswanath. Fundamentals of Wireless Communication. Cambridge University Press, 2005.
Introduction to Hybrid Beamforming
Introduction
Modern wireless communication systems use spatial multiplexing to improve the data throughput
within the system in a scatterer-rich environment. In order to send multiple data streams through the
channel, a set of precoding and combining weights is derived from the channel matrix so that each
data stream can be independently recovered. Those weights contain both magnitude and phase terms
and are normally applied in the digital domain. One example of simulating such a system can be
found in the "Improve SNR and Capacity of Wireless Communication Using Antenna Arrays"
example. In the system diagram shown below, each antenna is connected to a unique transmit
and receive (TR) module.
The ever-growing demand for high data rates and more user capacity increases the need to use the
spectrum more efficiently. As a result, next-generation 5G wireless systems use the millimeter
wave (mmWave) band to take advantage of its wider bandwidth. In addition, 5G systems deploy large-
scale antenna arrays to mitigate the severe propagation loss of the mmWave band. However, these
configurations bring their own technical challenges.
Compared to current wireless systems, the wavelength in the mmWave band is much smaller.
Although this allows an array to contain more elements within the same physical dimension, it
becomes much more expensive to provide one TR module for each antenna element. Hence, as a
compromise, a TR switch is often used to serve multiple antenna elements. This is the same concept
as the subarray configuration used in the radar community. One such configuration is shown in the
following figure.
The figure above shows that on the transmit side, the number of TR switches, NtRF, is smaller than
the number of antenna elements, Nt. To provide more flexibility, each antenna element can be
connected to one or more TR modules. In addition, analog phase shifters can be inserted between
each TR module and antenna to provide some limited steering capability.
The configuration on the receiver side is similar. The maximum number of data streams, Ns, that this
system can support is the smaller of NtRF and NrRF.
In this configuration, it is no longer possible to apply digital weights on each antenna element.
Instead, the digital weights can only be applied at each RF chain. At the element level, the signal is
adjusted by analog phase shifters, which only changes the phase of the signal. Thus, the precoding or
combining are actually done in two stages. Because this approach performs beamforming in both
digital and analog domains, it is referred to as hybrid beamforming.
System Setup
This section simulates a 64 x 16 MIMO hybrid beamforming system, with a 64-element square array
with 4 RF chains on the transmitter side and a 16-element square array with 4 RF chains on the
receiver side.
Nt = 64;
NtRF = 4;
Nr = 16;
NrRF = 4;
In this simulation, it is assumed that each antenna is connected to all RF chains. Thus, each antenna
is connected to 4 phase shifters. Such an array can be modeled by partitioning the array aperture
into 4 completely connected subarrays.
rng(4096);
c = 3e8;
fc = 28e9;
lambda = c/fc;
txarray = phased.PartitionedArray(...
'Array',phased.URA([sqrt(Nt) sqrt(Nt)],lambda/2),...
'SubarraySelection',ones(NtRF,Nt),'SubarraySteering','Custom');
rxarray = phased.PartitionedArray(...
'Array',phased.URA([sqrt(Nr) sqrt(Nr)],lambda/2),...
'SubarraySelection',ones(NrRF,Nr),'SubarraySteering','Custom');
To maximize the spectral efficiency, each RF chain can be used to send an independent data stream.
In this case, the system can support up to 4 streams.
Next, assume a scattering environment with 6 scattering clusters randomly distributed in space.
Within each cluster, there are 8 closely located scatterers with an angle spread of 5 degrees, for a
total of 48 scatterers. The path gain for each scatterer is drawn from a complex circularly symmetric
Gaussian distribution.
Ncl = 6;
Nray = 8;
Nscatter = Nray*Ncl;
angspread = 5;
% compute randomly placed scatterer clusters
txclang = [rand(1,Ncl)*120-60;rand(1,Ncl)*60-30];
rxclang = [rand(1,Ncl)*120-60;rand(1,Ncl)*60-30];
txang = zeros(2,Nscatter);
rxang = zeros(2,Nscatter);
% compute the rays within each cluster
for m = 1:Ncl
txang(:,(m-1)*Nray+(1:Nray)) = randn(2,Nray)*sqrt(angspread)+txclang(:,m);
rxang(:,(m-1)*Nray+(1:Nray)) = randn(2,Nray)*sqrt(angspread)+rxclang(:,m);
end
g = (randn(1,Nscatter)+1i*randn(1,Nscatter))/sqrt(Nscatter);
txpos = getElementPosition(txarray)/lambda;
rxpos = getElementPosition(rxarray)/lambda;
H = scatteringchanmtx(txpos,rxpos,txang,rxang,g);
In a spatial multiplexing system with all-digital beamforming, the signal is modulated by a set of
precoding weights, propagated through the channel, and recovered by a set of combining weights.
Mathematically, this process can be described by Y = (X*F*H+N)*W, where X is an Ns-column matrix
whose columns are data streams, F is an Ns-by-Nt matrix representing the precoding weights, W is an
Nr-by-Ns matrix representing the combining weights, N is an Nr-column matrix whose columns are the
receiver noise at each element, and Y is an Ns-column matrix whose columns are the recovered data
streams. Since the goal of the system is to achieve better spectral efficiency, obtaining the precoding
and combining weights can be cast as an optimization problem in which the optimal precoding and
combining weights make the product F*H*W a diagonal matrix, so that each data stream can be
recovered independently.
In a hybrid beamforming system, the signal flow is similar. Both the precoding weights and the
combining weights are combinations of baseband digital weights and RF band analog weights. The
baseband digital weights convert the incoming data streams to input signals at each RF chain and the
analog weights then convert the signal at each RF chain to the signal radiated or collected at each
antenna element. Note that the analog weights can only contain phase shifts.
Mathematically, it can be written as F=Fbb*Frf and W=Wrf*Wbb, where Fbb is an Ns-by-NtRF matrix,
Frf an NtRF-by-Nt matrix, Wbb an NrRF-by-Ns matrix, and Wrf an Nr-by-NrRF matrix. Since both Frf
and Wrf can only modify the signal phase, there are extra constraints in the optimization
process to identify the optimal precoding and combining weights. Ideally, the resulting products
Fbb*Frf and Wrf*Wbb are close approximations of the F and W obtained without those constraints.
Unfortunately, optimizing all four matrix variables simultaneously is quite difficult. Therefore, many
algorithms have been proposed to arrive at suboptimal weights with a reasonable computational load.
This example uses the approach proposed in [1], which decouples the optimizations for the precoding and
combining weights. It first uses the orthogonal matching pursuit algorithm to derive the precoding
weights. Once the precoding weights are computed, the result is then used to obtain the
corresponding combining weights.
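A simplified sketch of that orthogonal matching pursuit step, loosely following [1], is shown below: the analog precoder Frf is built by greedily picking columns from a dictionary At of array response (steering) vectors, so each of its entries is constant modulus (a pure phase shift), and the digital precoder Fbb is a least-squares fit to the unconstrained optimum. All sizes and the function name are hypothetical, and several refinements of [1], such as the final power normalization, are omitted:

```python
import numpy as np

def omp_hyb_precoder(Fopt, At, NtRF):
    """Simplified OMP hybrid precoder (after El Ayach et al. [1]).

    Fopt : Nt-by-Ns unconstrained optimal precoder
    At   : Nt-by-Ndir dictionary of array response (steering) vectors
    Picks NtRF steering vectors as the analog precoder Frf and finds the
    digital precoder Fbb by least squares on the residual.
    """
    Fres = Fopt.copy()
    cols = []
    for _ in range(NtRF):
        corr = At.conj().T @ Fres                 # correlate dictionary with residual
        k = int(np.argmax(np.sum(np.abs(corr) ** 2, axis=1)))
        cols.append(k)
        Frf = At[:, cols]                          # analog part: chosen steering vectors
        Fbb, *_ = np.linalg.lstsq(Frf, Fopt, rcond=None)
        Fres = Fopt - Frf @ Fbb                    # residual not yet captured
        nrm = np.linalg.norm(Fres)
        if nrm > 0:
            Fres = Fres / nrm
    return Fbb, Frf

# Hypothetical sizes: 16 tx elements, 4 RF chains, 2 streams, 24 candidate directions
rng = np.random.default_rng(1)
Nt, NtRF, Ns, Ndir = 16, 4, 2, 24
ang = rng.uniform(-np.pi / 3, np.pi / 3, Ndir)
At = np.exp(1j * np.pi * np.outer(np.arange(Nt), np.sin(ang))) / np.sqrt(Nt)
# A hypothetical unconstrained precoder with orthonormal columns
Fopt = np.linalg.qr(rng.standard_normal((Nt, Ns))
                    + 1j * rng.standard_normal((Nt, Ns)))[0]
Fbb, Frf = omp_hyb_precoder(Fopt, At, NtRF)
```

The key property is that Frf satisfies the analog constraint by construction: every entry has the same magnitude, so it can be realized with phase shifters alone.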
Assuming the channel is known, the unconstrained optimal precoding weights can be obtained by
diagonalizing the channel matrix and extracting the first NtRF dominant modes. The transmit beam
pattern can be plotted as
F = diagbfweights(H);
F = F(1:NtRF,:);
pattern(txarray,fc,-90:90,-90:90,'Type','efield',...
'ElementWeights',F','PropagationSpeed',c);
The response pattern above shows that even in a multipath environment, there are only a limited
number of dominant directions.
Ns = NtRF;
At = steervec(txpos,txang); % transmit steering vectors toward the scatterer directions
[Fbb,Frf] = omphybweights(H,Ns,NtRF,At);
pattern(txarray,fc,-90:90,-90:90,'Type','efield',...
'ElementWeights',Frf'*Fbb','PropagationSpeed',c);
Compared to the beam pattern obtained using the optimal weights, the beam pattern using the hybrid
weights is similar, especially for dominant beams. This means that the data streams can be
successfully transmitted through those beams using hybrid weights.
One of the system level performance metrics of a 5G system is the spectral efficiency. The next
section compares the spectral efficiency achieved using the optimal weights with that of the proposed
hybrid beamforming weights. The simulation assumes 1 or 2 data streams as outlined in [1]. The
transmit antenna array is assumed to be at a base station, with a focused beamwidth of 60 degrees in
azimuth and 20 degrees in elevation. The signal can arrive at the receive array from any direction.
The resulting spectral efficiency curve is obtained from 50 Monte-Carlo trials for each SNR.
snr_param = -40:5:0;
Nsnr = numel(snr_param);
Ns_param = [1 2];
NNs = numel(Ns_param);
NtRF = 4;
NrRF = 4;
Ropt = zeros(Nsnr,NNs);
Rhyb = zeros(Nsnr,NNs);
Niter = 50;
for m = 1:Nsnr
snr = db2pow(snr_param(m));
for n = 1:Niter
% Channel realization
txang = [rand(1,Nscatter)*60-30;rand(1,Nscatter)*20-10];
rxang = [rand(1,Nscatter)*180-90;rand(1,Nscatter)*90-45];
At = steervec(txpos,txang);
Ar = steervec(rxpos,rxang);
g = (randn(1,Nscatter)+1i*randn(1,Nscatter))/sqrt(Nscatter);
H = scatteringchanmtx(txpos,rxpos,txang,rxang,g);
for k = 1:NNs
Ns = Ns_param(k);
% Compute optimal weights and its spectral efficiency
[Fopt,Wopt] = helperOptimalHybridWeights(H,Ns,1/snr);
Ropt(m,k) = Ropt(m,k)+helperComputeSpectralEfficiency(H,Fopt,Wopt,Ns,snr);
% Compute hybrid weights and their spectral efficiency
[Fbb,Frf,Wbb,Wrf] = omphybweights(H,Ns,NtRF,NrRF,At,Ar,1/snr);
Rhyb(m,k) = Rhyb(m,k)+helperComputeSpectralEfficiency(H,Fbb*Frf,Wrf*Wbb,Ns,snr);
end
end
end
Ropt = Ropt/Niter;
Rhyb = Rhyb/Niter;
plot(snr_param,Ropt(:,1),'--sr',...
snr_param,Ropt(:,2),'--b',...
snr_param,Rhyb(:,1),'-sr',...
snr_param,Rhyb(:,2),'-b');
xlabel('SNR (dB)');
ylabel('Spectral Efficiency (bits/s/Hz)');
legend('Ns=1 optimal','Ns=2 optimal','Ns=1 hybrid', 'Ns=2 hybrid',...
'Location','best');
grid on;
This figure shows that the spectral efficiency improves significantly as the number of data streams
increases. In addition, hybrid beamforming performs close to what the optimal weights can offer
while using less hardware.
Summary
This example introduces the basic concept of hybrid beamforming and shows how to split the
precoding and combining weights into digital and analog parts using the orthogonal matching pursuit
algorithm. It shows that hybrid beamforming can closely match the performance offered by optimal
digital weights.
References
[1] El Ayach, Omar, et al. "Spatially Sparse Precoding in Millimeter Wave MIMO Systems." IEEE
Transactions on Wireless Communications, Vol. 13, No. 3, March 2014.
The example uses functions and System objects™ from Communications Toolbox™ and Phased Array
System Toolbox™.
Introduction
MIMO-OFDM systems are the norm in current wireless systems (e.g., 5G NR, LTE, WLAN) due to their
robustness to frequency-selective channels and the high data rates they enable. With ever-increasing
demands on supported data rates, these systems are growing more complex, with larger
configurations in terms of antenna elements and allocated resources (subcarriers).
With antenna arrays and spatial multiplexing, efficient techniques to realize the transmissions are
necessary [ 6 ]. Beamforming is one such technique, employed to improve the signal-to-noise
ratio (SNR), which ultimately improves the system performance, as measured here in terms of bit
error rate (BER) [ 1 ].
This example illustrates an asymmetric MIMO-OFDM single-user system where the maximum number
of antenna elements on transmit and receive ends can be 1024 and 32 respectively, with up to 16
independent data streams. It models a spatial channel where the array locations and antenna
patterns are incorporated into the overall system design. For simplicity, a single point-to-point link
(one base station communicating with one mobile user) is modeled. The link uses channel sounding to
provide the transmitter with the channel information it needs for beamforming.
The example offers the choice of a few spatially defined channel models, specifically a WINNER II
Channel model and a scattering-based model, both of which account for the transmit/receive spatial
locations and antenna patterns.
System Parameters
Define parameters for the system. These parameters can be modified to explore their impact on the
system.
MIMO-OFDM Precoding with Phased Arrays
% 'ScatteringFcn', 'StaticFlat'
prm.NFig = 5; % Noise figure, dB
Parameters to define the OFDM modulation employed for the system are specified below.
prm.FFTLength = 256;
prm.CyclicPrefixLength = 64;
prm.numCarriers = 234;
prm.NumGuardBandCarriers = [7 6];
prm.PilotCarrierIndices = [26 54 90 118 140 168 204 232];
nonDataIdx = [(1:prm.NumGuardBandCarriers(1))'; prm.FFTLength/2+1; ...
(prm.FFTLength-prm.NumGuardBandCarriers(2)+1:prm.FFTLength)'; ...
prm.PilotCarrierIndices.';];
prm.CarriersLocations = setdiff((1:prm.FFTLength)',sort(nonDataIdx));
numTx = prm.numTx;
numRx = prm.numRx;
numSTS = prm.numSTS;
prm.numFrmBits = numSTS*prm.numDataSymbols*prm.numCarriers* ...
prm.bitsPerSubCarrier*1/3-6; % Account for termination bits
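The carrier bookkeeping above can be cross-checked: of the 256 FFT bins, the two guard bands (7 + 6 bins), the DC bin, and the 8 pilot bins are excluded, which leaves exactly the 234 data subcarriers declared in prm.numCarriers. A quick NumPy check, using 1-based indices to mirror the MATLAB code:

```python
import numpy as np

fft_len = 256
guards = (7, 6)                               # lower/upper guard-band carriers
pilots = np.array([26, 54, 90, 118, 140, 168, 204, 232])

non_data = np.concatenate([
    np.arange(1, guards[0] + 1),                       # lower guard band
    [fft_len // 2 + 1],                                # DC carrier
    np.arange(fft_len - guards[1] + 1, fft_len + 1),   # upper guard band
    pilots,
])
data_carriers = np.setdiff1d(np.arange(1, fft_len + 1), non_data)
print(len(data_carriers))  # 234
```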
The processing for channel sounding, data transmission, and reception modeled in the example is
shown in the following block diagrams.
The free space path loss is calculated based on the base station and mobile station positions for the
spatially-aware system modeled.
prm.cLight = physconst('LightSpeed');
prm.lambda = prm.cLight/prm.fc;
% Mobile position
[xRx,yRx,zRx] = sph2cart(deg2rad(prm.mobileAngle(1)),...
deg2rad(prm.mobileAngle(2)),prm.mobileRange);
prm.posRx = [xRx;yRx;zRx];
[toRxRange,toRxAng] = rangeangle(prm.posTx,prm.posRx);
spLoss = fspl(toRxRange,prm.lambda);
gainFactor = 1;
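The fspl value follows the standard free-space relation 20·log10(4πR/λ), so doubling the range costs 20·log10(2) ≈ 6.02 dB. A NumPy sketch with a hypothetical 4 GHz carrier and 500 m range (the actual prm.fc and positions are set elsewhere in the example):

```python
import numpy as np

def fspl_db(range_m, lambda_m):
    """Free-space path loss in dB: 20*log10(4*pi*R/lambda)."""
    return 20 * np.log10(4 * np.pi * range_m / lambda_m)

c = 3e8
fc = 4e9                  # hypothetical carrier frequency
lam = c / fc
loss = fspl_db(500.0, lam)
print(loss)               # about 98.5 dB at 4 GHz over 500 m
```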
Channel Sounding
For a spatially multiplexed system, availability of channel information at the transmitter allows for
precoding to be applied to maximize the signal energy in the direction and channel of interest. Under
the assumption of a slowly varying channel, this is facilitated by sounding the channel first, wherein
for a reference transmission, the receiver estimates the channel and feeds this information back to
the transmitter.
For the chosen system, a preamble signal is sent over all transmitting antenna elements, and
processed at the receiver accounting for the channel. The receiver components perform pre-
amplification, OFDM demodulation, frequency domain channel estimation, and calculation of the
feedback weights based on channel diagonalization using singular value decomposition (SVD) per
data subcarrier.
% OFDM Demodulation
demodulatorOFDM = comm.OFDMDemodulator( ...
'FFTLength',prm.FFTLength, ...
'NumGuardBandCarriers',prm.NumGuardBandCarriers.', ...
'RemoveDCCarrier',true, ...
'PilotOutputPort',true, ...
'PilotCarrierIndices',prm.PilotCarrierIndices.', ...
'CyclicPrefixLength',prm.CyclicPrefixLength, ...
'NumSymbols',numSTS, ... % preamble symbols alone
'NumReceiveAntennas',numRx);
For conciseness in presentation, front-end synchronization including carrier and timing recovery are
assumed. The weights computed using diagbfweights are hence fed back to the transmitter, for
subsequent application for the actual data transmission.
Data Transmission
Next, we configure the system's data transmitter. This processing includes channel coding, bit
mapping to complex symbols, splitting of the individual data stream into multiple transmit streams,
precoding of the transmit streams, and OFDM modulation with pilot mapping and replication for the
transmit antennas employed.
% Convolutional encoder
encoder = comm.ConvolutionalEncoder( ...
'TrellisStructure',poly2trellis(7,[133 171 165]), ...
'TerminationMethod','Terminated');
% Multi-antenna pilots
pilots = helperGenPilots(prm.numDataSymbols,numSTS);
txOFDM = modulatorOFDM(preData,pilots);
txOFDM = txOFDM * (prm.FFTLength/ ...
sqrt(prm.FFTLength-sum(prm.NumGuardBandCarriers)-1)); % scale power
For precoding, the preamble signal is regenerated to enable channel estimation. It is prepended to
the data portion to form the transmission packet which is then replicated over the transmit antennas.
Phased Array System Toolbox offers components appropriate for the design and simulation of phased
arrays used in wireless communications systems.
For the spatially aware system, the signal transmitted from the base station is steered towards the
direction of the mobile, so as to focus the radiated energy in the desired direction. This is achieved by
applying a phase shift to each antenna element to steer the transmission.
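The phase-shift steering described above can be sketched as follows. The element count, spacing, and angle here are assumed values for illustration only, not the example's parameter set.

```matlab
% Conjugate phase steering toward an assumed angle for an N-element ULA
N = 8;                 % assumed number of elements
lambda = 1;            % normalized wavelength
d = 0.5*lambda;        % half-wavelength element spacing
theta = 20;            % assumed steering angle (degrees)
sv = exp(1i*2*pi*d*(0:N-1).'*sind(theta)/lambda); % steering vector
w  = conj(sv)/sqrt(N); % per-element weights that cophase the array toward theta
```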
The example uses a linear or rectangular array at the transmitter, depending on the number of data
streams and number of transmit antennas selected.
% Gain per antenna element
amplifier = phased.Transmitter('PeakPower',1/numTx,'Gain',0);
if prm.enSteering
txSteerSig = radiatorTx(txSig,repmat(prm.mobileAngle,1,numTx), ...
conj(wT));
else
txSteerSig = txSig;
end
The plots indicate the array geometry and the transmit array response in multiple views. The
response shows the transmission direction as specified by the steering angle.
The example assumes the steering angle is known and close to the mobile angle. In actual systems, this angle would be obtained from angle-of-arrival estimation at the receiver as part of the channel sounding or initial beam tracking procedures.
Signal Propagation
The example offers three options for spatial MIMO channels and a simpler static-flat MIMO channel
for evaluation purposes.
The WINNER II channel model [5] is a spatially defined MIMO channel that allows you to specify the array geometry and location information. It is configured to use the typical urban microcell indoor scenario with very low mobile speeds.
The two scattering-based channels use a single-bounce path through each scatterer, where the number of scatterers is user-specified. For this example, the number of scatterers is set to 100. The 'Scattering' option models scatterers placed randomly within a circle between the transmitter and receiver, while the 'ScatteringFcn' option places them completely randomly.
The models allow path loss modeling and both line-of-sight (LOS) and non-LOS propagation
conditions. The example assumes non-LOS propagation and isotropic antenna element patterns with
linear geometry.
% Apply a spatially defined channel to the steered signal
[rxSig,chanDelay] = helperApplyChannel(txSteerSig,prm,spLoss,preambleSig);
The same channel is used for both sounding and data transmission, with the data transmission having
a longer duration controlled by the number of data symbols parameter, prm.numDataSymbols.
The receiver steers the incident signals to align with the transmit end steering, per receive element.
Thermal noise and receiver gain are applied. Uniform linear or rectangular arrays with isotropic
responses are modeled to match the channel and transmitter arrays.
rxPreAmp = phased.ReceiverPreamp( ...
'Gain',gainFactor*spLoss, ... % accounts for path loss
'NoiseFigure',prm.NFig, ...
'ReferenceTemperature',290, ...
'SampleRate',prm.chanSRate);
% Receive array
if isRxURA
% Uniform Rectangular array
arrayRx = phased.URA([expFactorRx,numSTS],0.5*prm.lambda, ...
'Element',phased.IsotropicAntennaElement('BackBaffled',true));
else
% Uniform Linear array
arrayRx = phased.ULA(numRx, ...
'ElementSpacing',0.5*prm.lambda, ...
'Element',phased.IsotropicAntennaElement);
end
Signal Recovery
The receive antenna array passes the propagated signal to the receiver to recover the original information embedded in the signal. Similar to the transmitter, the receiver used in a MIMO-OFDM system contains many components, including an OFDM demodulator, a MIMO equalizer, a QAM demodulator, and a channel decoder.
demodulatorOFDM = comm.OFDMDemodulator( ...
'FFTLength',prm.FFTLength, ...
'NumGuardBandCarriers',prm.NumGuardBandCarriers.', ...
'RemoveDCCarrier',true, ...
'PilotOutputPort',true, ...
'PilotCarrierIndices',prm.PilotCarrierIndices.', ...
'CyclicPrefixLength',prm.CyclicPrefixLength, ...
'NumSymbols',numSTS+prm.numDataSymbols, ... % preamble & data
'NumReceiveAntennas',numRx);
% OFDM Demodulation
rxOFDM = demodulatorOFDM( ...
rxSteerSig(chanDelay+1:end-(prm.numPadZeros-chanDelay),:));
% MIMO Equalization
[rxEq,CSI] = helperMIMOEqualize(rxOFDM(:,numSTS+1:end,:),hD);
% Soft demodulation
scFact = ((prm.FFTLength-sum(prm.NumGuardBandCarriers)-1) ...
/prm.FFTLength^2)/numTx;
nVar = noisepow(prm.chanSRate,prm.NFig,290)/scFact;
rxSymbs = rxEq(:)/sqrt(numTx);
rxLLRBits = qamdemod(rxSymbs,prm.modMode,'UnitAveragePower',true, ...
'OutputType','approxllr','NoiseVariance',nVar);
For the MIMO system modeled, the displayed receive constellation of the equalized symbols offers a qualitative assessment of the reception. The bit error rate offers a quantitative figure, obtained by comparing the transmitted bits with the received decoded bits.
The example highlighted the use of phased antenna arrays for a beamformed MIMO-OFDM system. It
accounted for the spatial geometry and location of the arrays at the base station and mobile station
for a single user system. Using channel sounding, it illustrated how precoding is realized in current
wireless systems and how steering of antenna arrays is modeled.
Within the set of configurable parameters, you can vary the number of data streams, the transmit/receive antenna elements, the station or array locations and geometry, and the channel models and their configurations to study their individual or combined effects on the system. For example, vary just the number of transmit antennas to see the effect on the main lobe of the steered beam and the resulting system performance.
The example also made simplifying assumptions for front-end synchronization, channel feedback, user velocity, and path loss models, which need further consideration for a practical system. Individual systems also have their own procedures, which must be folded into the modeling [2, 3, 4].
• helperApplyChannel.m
• helperArrayInfo.m
• helperGenPilots.m
• helperGenPreamble.m
• helperGetP.m
• helperMIMOChannelEstimate.m
• helperMIMOEqualize.m
Selected Bibliography
1 Perahia, Eldad, and Robert Stacey. Next Generation Wireless LANS: 802.11n and 802.11ac.
Cambridge University Press, 2013.
2 IEEE® Std 802.11™-2012 IEEE Standard for Information technology - Telecommunications and
information exchange between systems - Local and metropolitan area networks - Specific
requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY)
Specifications.
3 3GPP TS 36.213. "Physical layer procedures." 3rd Generation Partnership Project; Technical
Specification Group Radio Access Network; Evolved Universal Terrestrial Radio Access (E-
UTRA). URL: https://www.3gpp.org.
4 3GPP TS 36.101. "User Equipment (UE) Radio Transmission and Reception." 3rd Generation
Partnership Project; Technical Specification Group Radio Access Network; Evolved Universal
Terrestrial Radio Access (E-UTRA). URL: https://www.3gpp.org.
5 Kyosti, Pekka, Juha Meinila, et al. WINNER II Channel Models. D1.1.2, V1.2. IST-4-027756
WINNER II, September 2007.
6 George Tsoulos, Ed., "MIMO System Technology for Wireless Communications", CRC Press, Boca
Raton, FL, 2006.
Acceleration of Clutter Simulation Using GPU and Code Generation
The full functionality of this example requires Parallel Computing Toolbox™ and MATLAB Coder™.
Clutter Simulation
Radar system engineers often need to simulate the clutter return to test signal processing algorithms,
such as STAP algorithms. However, generating a high fidelity clutter return involves many steps and
therefore is often computationally expensive. For example, phased.ConstantGammaClutter simulates
the clutter using the following steps:
1 Divide the entire terrain into small clutter patches. The size of the patch depends on the azimuth
patch width and the range resolution.
2 For each patch, calculate its corresponding parameters, such as the random return, the grazing
angle, and the antenna array gain.
3 Combine returns from all clutter patches to generate the total clutter return.
The number of clutter patches depends on the terrain coverage, but it is usually in the range of
thousands to millions. In addition, all steps above need to be performed for each pulse (assuming a
pulsed radar is used). Therefore, clutter simulation is often the tall pole in a system simulation.
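As a rough check of patch counts, the number of patches is approximately the azimuth coverage divided by the patch width, multiplied by the maximum range divided by the range resolution. The values below are the ones used by the GUI example later in this section:

```matlab
% Approximate clutter patch count (values from the GUI example)
azCoverage   = 360;   % degrees of azimuth coverage
patchAzWidth = 10;    % degrees per clutter patch
maxRange     = 5000;  % maximum clutter range (m)
rangeRes     = 50;    % range resolution (m)
numPatches = (azCoverage/patchAzWidth)*(maxRange/rangeRes) % 3600 patches
```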
To improve the speed of the clutter simulation, one can take advantage of parallel computing. Note
that the clutter return from later pulses could depend on the signal generated in earlier pulses, so
certain parallel solutions offered by MATLAB, such as parfor, are not always applicable. However,
because the computation at each patch is independent of the computations at other patches, it is
suitable for GPU acceleration.
If you have a supported GPU and have access to Parallel Computing Toolbox, then you can take
advantage of the GPU in generating the clutter return by using
phased.gpu.ConstantGammaClutter instead of phased.ConstantGammaClutter. In most
cases, using a different System object is the only change you need to make to your existing program,
as shown in the following figure.
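Schematically, the change is a one-line substitution; the property list below is abridged, so refer to the example source for the full argument set.

```matlab
% CPU-based clutter simulation:
% clutter = phased.ConstantGammaClutter('Sensor',antenna, ...);
% GPU-based clutter simulation -- same properties, different package:
% clutter = phased.gpu.ConstantGammaClutter('Sensor',antenna, ...);
```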
If you have access to MATLAB Coder, you can also speed up clutter simulation by generating C code
for phased.ConstantGammaClutter, compiling it and running the compiled version. When running
in code generation mode, this example compiles stapclutter using the codegen command:
codegen('stapclutter','-args',...
{coder.Constant(maxRange),...
coder.Constant(patchAzWidth)});
To compare clutter simulation performance between the MATLAB interpreter, code generation, and a GPU, launch the following GUI by typing stapcpugpu at the MATLAB command line. The launched GUI is shown in the following figure:
The left side of the GUI contains four plots, showing the raw received signal, the angle-Doppler
response of the received signal, the processed signal, and the angle-Doppler response of the STAP
processing weights. Again, the details can be found in the example “Introduction to Space-Time
Adaptive Processing” on page 17-231. On the right side of the GUI, you control the number of clutter
patches by modifying the clutter patch width in the azimuth direction (in degrees) and maximum
clutter range (in km). You can then click the Start button to start the simulation, which simulates 5
coherent processing intervals (CPI) where each CPI contains 10 pulses. The processed signal and the
angle-Doppler responses are updated once every CPI.
The next section shows timing for different simulation runs. In these simulations, each pulse consists of 200 range samples with a range resolution of 50 m. Combinations of the clutter patch width and the maximum clutter range result in various numbers of total clutter patches. For example, a clutter patch width of 10 degrees and a maximum clutter range of 5 km implies 3600 clutter patches. The simulations are carried out on the following system configurations:
From the figure, you can see that in general the GPU improves the simulation speed by dozens of times, sometimes even hundreds of times. Two interesting observations are:
• When the number of clutter patches is small, as long as the data fits into the GPU memory, the GPU's performance is almost constant. The same is not true for the MATLAB interpreter.
• Once the number of clutter patches gets large, the data no longer fits into the GPU memory, so the speedup provided by the GPU over the MATLAB interpreter starts to decrease. However, at close to ten million clutter patches, the GPU still provides an acceleration of over 50 times.
The simulation speed improvement due to code generation is less than the GPU speed improvement, but it is still significant. Code generation for phased.ConstantGammaClutter precalculates the collected clutter as an array of constant values. For larger numbers of clutter patches, the size of this array becomes too big, reducing the speed improvement because of the overhead of memory management. Code generation requires access to MATLAB Coder but requires no special hardware.
Even though the simulation used in this example calculates millions of clutter patches, the resulting
data cube has a size of 200x6x10, indicating only 200 range samples within each pulse, 6 channels,
and 10 pulses. This data cube is small compared to real problems. This example chooses these
parameters to show the benefit you can get from using a GPU or code generation while ensuring that
the example runs within a reasonable time in the MATLAB interpreter. Some simulations with larger
data cube size yield the following results:
• 45-fold acceleration using a GPU for a simulation that generates 50 pulses for a 50-element ULA
with 5000 range samples in each pulse, i.e., a 5000x50x50 data cube. The range resolution is 10
m. The radar covers a total azimuth of 60 degrees, with 1 degree in each clutter patch. The
maximum clutter range is 50 km. The total number of clutter patches is 305,000.
• 60-fold acceleration using a GPU for a simulation like the one above, except with 180-degree
azimuth coverage and a maximum clutter range equal to the horizon range (about 130 km). In this
case, the total number of clutter patches is 2,356,801.
Summary
This example compares the performance achieved by simulating clutter return using either the
MATLAB interpreter, a GPU or code generation. The result indicates that the GPU and code
generation offer big speed improvements over the MATLAB interpreter.
Designing a Basic Monostatic Pulse Radar
This example focuses on a pulse radar system design which can achieve a set of design specifications.
It outlines the steps to translate design specifications, such as the probability of detection and the
range resolution, into radar system parameters, such as the transmit power and the pulse width. It
also models the environment and targets to synthesize the received signal. Finally, signal processing
techniques are applied to the received signal to detect the ranges of the targets.
Design Specifications
The design goal of this pulse radar system is to detect non-fluctuating targets with at least one square meter radar cross section (RCS) at a distance up to 5000 meters from the radar, with a range resolution of 50 meters. The desired performance index is a probability of detection (Pd) of 0.9 and a probability of false alarm (Pfa) below 1e-6. Since coherent detection requires phase information and is therefore more computationally expensive, we adopt a noncoherent detection scheme. In addition, this example assumes a free space environment.
We need to define several characteristics of the radar system such as the waveform, the receiver, the
transmitter, and the antenna used to radiate and collect the signal.
Waveform
We choose a rectangular waveform in this example. The desired range resolution determines the
bandwidth of the waveform, which, in the case of a rectangular waveform, determines the pulse
width.
Another important parameter of a pulse waveform is the pulse repetition frequency (PRF). The PRF is
determined by the maximum unambiguous range.
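The translation from the design specifications to these waveform parameters can be sketched as follows, using the range resolution and maximum range from the Design Specifications section:

```matlab
prop_speed  = physconst('LightSpeed');   % propagation speed
range_res   = 50;                        % required range resolution (m)
max_range   = 5000;                      % maximum unambiguous range (m)
pulse_bw    = prop_speed/(2*range_res);  % waveform bandwidth: 3 MHz
pulse_width = 1/pulse_bw;                % rectangular pulse width
prf         = prop_speed/(2*max_range);  % pulse repetition frequency: ~30 kHz
```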
We assume that the only noise present at the receiver is the thermal noise, so there is no clutter
involved in this simulation. The power of the thermal noise is related to the receiver bandwidth. The
receiver's noise bandwidth is set to be the same as the bandwidth of the waveform. This is often the
case in real systems. We also assume that the receiver has a 20 dB gain and a 0 dB noise figure.
noise_bw = pulse_bw;
receiver = phased.ReceiverPreamp(...
'Gain',20,...
'NoiseFigure',0,...
'SampleRate',fs,...
'EnableInputPort',true);
Note that because we are modeling a monostatic radar, the receiver cannot be turned on until the
transmitter is off. Therefore, we set the EnableInputPort property to true so that a synchronization
signal can be passed from the transmitter to the receiver.
Transmitter
The most critical parameter of a transmitter is the peak transmit power. The required peak power is
related to many factors including the maximum unambiguous range, the required SNR at the
receiver, and the pulse width of the waveform. Among these factors, the required SNR at the receiver
is determined by the design goal of Pd and Pfa, as well as the detection scheme implemented at the
receiver.
The relation between Pd, Pfa and SNR can be best represented by a receiver operating
characteristics (ROC) curve. We can generate the curve where Pd is a function of Pfa for varying
SNRs using the following command
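The command itself is not reproduced in this excerpt. A plausible single-pulse form, with the SNR values chosen here only for illustration, is:

```matlab
rocsnr([10 12 14],'SignalType','NonfluctuatingNoncoherent');
```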
The ROC curves show that to satisfy the design goals of Pfa = 1e-6 and Pd = 0.9, the received signal's
SNR must exceed 13 dB. This is a fairly high requirement and is not very practical. To make the radar
system more feasible, we can use a pulse integration technique to reduce the required SNR. If we
choose to integrate 10 pulses, the curve can be generated as
num_pulse_int = 10;
rocsnr([0 3 5],'SignalType','NonfluctuatingNoncoherent',...
'NumPulses',num_pulse_int);
We can see that the required SNR has dropped to around 5 dB. Further reduction can be achieved by integrating more pulses, but the number of pulses available for integration is normally limited by the motion of the target or the heterogeneity of the environment.
The approach above reads the SNR value off the curve, but it is often desirable to calculate the required value directly. For the noncoherent detection scheme, the calculation of the required SNR is, in theory, quite complex. Fortunately, good approximations exist, such as Albersheim's equation. Using Albersheim's equation, the required SNR can be derived as
snr_min = albersheim(pd, pfa, num_pulse_int)
snr_min =
4.9904
Once we obtain the required SNR at the receiver, the peak power at the transmitter can be calculated
using the radar equation. Here we assume that the transmitter has a gain of 20 dB.
To calculate the peak power using the radar equation, we also need to know the wavelength of the
propagating signal, which is related to the operating frequency of the system. Here we set the
operating frequency to 10 GHz.
tx_gain = 20;
fc = 10e9;
lambda = prop_speed/fc;
peak_power = radareqpow(lambda,max_range,snr_min,pulse_width,...
'RCS',tgt_rcs,'Gain',tx_gain)
peak_power =
5.2265e+03
Note that the resulting power is about 5 kW, which is very reasonable. In comparison, if we had not
used the pulse integration technique, the resulting peak power would have been 33 kW, which is
huge.
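As a consistency check (using the roughly 13 dB single-pulse requirement read off the ROC curve above), the ratio of the two peak powers should match the SNR relief gained from integrating 10 pulses:

```matlab
snr_single = 13;       % dB, single-pulse requirement from the ROC curve
snr_integ  = 4.9904;   % dB, from Albersheim's equation with 10 pulses
peak_power_single = db2pow(snr_single - snr_integ)*5.2265e3 % roughly 33 kW
```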
Again, since this example models a monostatic radar system, the InUseOutputPort is set to true to
output the status of the transmitter. This status signal can then be used to enable the receiver.
In a radar system, the signal propagates in the form of an electromagnetic wave. Therefore, the
signal needs to be radiated and collected by the antenna used in the radar system. This is where the
radiator and the collector come into the picture.
In a monostatic radar system, the radiator and the collector share the same antenna, so we will first
define the antenna. To simplify the design, we choose an isotropic antenna. Note that the antenna
needs to be able to work at the operating frequency of the system (10 GHz), so we set the antenna's
frequency range to 5-15 GHz.
antenna = phased.IsotropicAntennaElement('FrequencyRange',[5e9 15e9]);
% Radar platform (stationary monostatic radar)
sensormotion = phased.Platform(...
'InitialPosition',[0; 0; 0],...
'Velocity',[0; 0; 0]);
With the antenna and the operating frequency, we define both the radiator and the collector.
radiator = phased.Radiator(...
'Sensor',antenna,...
'OperatingFrequency',fc);
collector = phased.Collector(...
'Sensor',antenna,...
'OperatingFrequency',fc);
This completes the configuration of the radar system. In the following sections, we will define other
entities, such as the target and the environment that are needed for the simulation. We will then
simulate the signal return and perform range detection on the simulated signal.
System Simulation
Targets
To test our radar's ability to detect targets, we must define the targets first. Let us assume that there
are 3 stationary, non-fluctuating targets in space. Their positions and radar cross sections are given
below.
tgtpos = [[2024.66;0;0],[3518.63;0;0],[3845.04;0;0]];
tgtvel = [[0;0;0],[0;0;0],[0;0;0]];
tgtmotion = phased.Platform('InitialPosition',tgtpos,'Velocity',tgtvel);
Propagation Environment
To simulate the signal, we also need to define the propagation channel between the radar system and
each target.
channel = phased.FreeSpace(...
'SampleRate',fs,...
'TwoWayPropagation',true,...
'OperatingFrequency',fc);
Because this example uses a monostatic radar system, the channels are set to simulate two way
propagation delays.
Signal Synthesis
The synthesized signal is a data matrix with the fast time (time within each pulse) along each column
and the slow time (time between pulses) along each row. To visualize the signal, it is helpful to define
both the fast time grid and slow time grid.
fast_time_grid = unigrid(0,1/fs,1/prf,'[)');
slow_time_grid = (0:num_pulse_int-1)/prf;
We set the seed for the noise generation in the receiver so that we can reproduce the same results.
receiver.SeedSource = 'Property';
receiver.Seed = 2007;
for m = 1:num_pulse_int
Range Detection
Detection Threshold
The detector compares the signal power to a given threshold. In radar applications, the threshold is
often chosen so that the Pfa is below a certain level. In this case, we assume the noise is white
Gaussian and the detection is noncoherent. Since we are also using 10 pulses to do the pulse
integration, the signal power threshold is given by
npower = noisepow(noise_bw,receiver.NoiseFigure,...
receiver.ReferenceTemperature);
threshold = npower * db2pow(npwgnthresh(pfa,num_pulse_int,'noncoherent'));
num_pulse_plot = 2;
helperRadarPulsePlot(rxpulses,threshold,...
fast_time_grid,slow_time_grid,num_pulse_plot);
The threshold in these figures is for display purposes only. Note that the second and third target returns are much weaker than the first return because those targets are farther from the radar. The received signal power is therefore range dependent, and a fixed threshold is unfair to targets located at different ranges.
Matched Filter
The matched filter offers a processing gain which improves the detection threshold. It convolves the
received signal with a local, time-reversed, and conjugated copy of transmitted waveform. Therefore,
we must specify the transmitted waveform when creating our matched filter. The received pulses are
first passed through a matched filter to improve the SNR before doing pulse integration, threshold
detection, etc.
matchingcoeff = getMatchedFilter(waveform);
matchedfilter = phased.MatchedFilter(...
'Coefficients',matchingcoeff,...
'GainOutputPort',true);
[rxpulses, mfgain] = matchedfilter(rxpulses);
The matched filter introduces an intrinsic filter delay, so the locations of the peaks (the maximum SNR output samples) are no longer aligned with the true target locations. To compensate for this delay, this example shifts the output of the matched filter forward and pads zeros at the end. Note that in real systems, because data is collected continuously, the data stream has no end.
matchingdelay = size(matchingcoeff,1)-1;
rxpulses = buffer(rxpulses(matchingdelay+1:end),size(rxpulses,1));
The following plot shows the same two pulses after they pass through the matched filter.
helperRadarPulsePlot(rxpulses,threshold,...
fast_time_grid,slow_time_grid,num_pulse_plot);
After the matched filter stage, the SNR is improved. However, because the received signal power is
dependent on the range, the return of a close target is still much stronger than the return of a target
farther away. Therefore, as the above figure shows, the noise from a close range bin also has a
significant chance of surpassing the threshold and shadowing a target farther away. To ensure the
threshold is fair to all the targets within the detectable range, we can use a time varying gain to
compensate for the range dependent loss in the received echo.
To compensate for the range dependent loss, we first calculate the range gates corresponding to each
signal sample and then calculate the free space path loss corresponding to each range gate. Once
that information is obtained, we apply a time varying gain to the received pulse so that the returns
are as if from the same reference range (the maximum detectable range).
range_gates = prop_speed*fast_time_grid/2;
tvg = phased.TimeVaryingGain(...
'RangeLoss',2*fspl(range_gates,lambda),...
'ReferenceLoss',2*fspl(max_range,lambda));
rxpulses = tvg(rxpulses);
Now let's plot the same two pulses after the range normalization
helperRadarPulsePlot(rxpulses,threshold,...
fast_time_grid,slow_time_grid,num_pulse_plot);
The time varying gain operation results in a ramp in the noise floor. However, the target return is now
range independent. A constant threshold can now be used for detection across the entire detectable
range.
Notice that at this stage, the threshold is above the maximum power level contained in each pulse.
Therefore, nothing can be detected at this stage yet. We need to perform pulse integration to ensure
the power of returned echoes from the targets can surpass the threshold while leaving the noise floor
below the bar. This is expected since it is the pulse integration which allows us to use the lower
power pulse train.
Noncoherent Integration
We can further improve the SNR by noncoherently integrating (video integration) the received pulses.
rxpulses = pulsint(rxpulses,'noncoherent');
helperRadarPulsePlot(rxpulses,threshold,...
fast_time_grid,slow_time_grid,1);
After the video integration stage, the data is ready for the final detection stage. It can be seen from
the figure that all three echoes from the targets are above the threshold, and therefore can be
detected.
Range Detection
Finally, the threshold detection is performed on the integrated pulses. The detection scheme
identifies the peaks and then translates their positions into the ranges of the targets.
[~,range_detect] = findpeaks(rxpulses,'MinPeakHeight',sqrt(threshold));
The true ranges and the detected ranges of the targets are shown below:
true_range = round(tgtrng)
range_estimates = round(range_gates(range_detect))
true_range =
range_estimates =
Note that these range estimates are only accurate up to the range resolution (50 m) that can be
achieved by the radar system.
Summary
In this example, we designed a radar system based on a set of given performance goals. From these
performance goals, many design parameters of the radar system were calculated. The example also
showed how to use the designed radar to perform a range detection task. In this example, the radar
used a rectangular waveform. Interested readers can refer to “Waveform Design to Improve
Performance of an Existing Radar System” on page 17-165 for an example using a chirp waveform.
Ground Clutter Mitigation with Moving Target Indication (MTI) Radar
A typical MTI radar uses a high pass filter to remove energy at low Doppler frequencies. Since the
frequency response of an FIR high pass filter is periodic, some energy at high Doppler frequencies is
also removed. Targets at those high Doppler frequencies thus will not be detectable by the radar. This
issue is called the blind speed problem. This example shows how to use a technique, called staggered
PRFs, to address the blind speed problem.
First, define the components of a radar system. The focus of this example is on MTI processing, so
we'll use the radar system built in the example “Designing a Basic Monostatic Pulse Radar” on page
17-449. Readers are encouraged to explore the details of the radar system design through that
example. Change the antenna's height to 100 meters to simulate a radar mounted on top of a
building. Notice that the PRF in the system is approximately 30 kHz, which corresponds to a
maximum unambiguous range of 5 km.
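The PRF / unambiguous-range relation stated above can be verified directly (the 30 kHz value here is the approximate PRF quoted in the text):

```matlab
% Maximum unambiguous range R = c/(2*PRF)
max_unambig_range = physconst('LightSpeed')/(2*30e3) % ~5 km
```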
load BasicMonostaticRadarExampleData;
sensorheight = 100;
sensormotion.InitialPosition = [0 0 sensorheight]';
prf = waveform.PRF;
Retrieve the sampling frequency, the operating frequency, and the propagation speed.
fs = waveform.SampleRate;
fc = radiator.OperatingFrequency;
wavespeed = radiator.PropagationSpeed;
In many MTI systems, especially low-end ones, the transmitter's power source is a magnetron, so the transmitter adds a random phase to each transmitted pulse. Hence, it is often necessary to restore coherence at the receiver. Such a setup is referred to as coherent on receive. In these systems, the receiver locks onto the random phase added by the transmitter for each pulse and then removes its impact from the samples received within the corresponding pulse interval. We can simulate a coherent on receive system by setting the transmitter and receiver as follows:
transmitter.CoherentOnTransmit = false;
transmitter.PhaseNoiseOutputPort = true;
receiver.PhaseNoiseInputPort = true;
Define Targets
The first target is located at position [1600 0 1300]. Given the radar position shown in the preceding
sections, it has a range of 2 km from the radar. The velocity of the target is [100 80 0], corresponding
to a radial speed of -80 m/s relative to the radar. The target has a radar cross section of 25 square
meters.
The second target is located at position [2900 0 800], corresponding to a range of 3 km from the
radar. Set the speed of this target to a blind speed, where the target's Doppler signature is aliased to
the pulse repetition frequency. This setting prevents the MTI radar from detecting the target. We use
the dop2speed() function to calculate a blind speed which has a corresponding Doppler frequency
equal to the pulse repetition frequency.
wavelength = wavespeed/fc;
blindspd = dop2speed(prf,wavelength)/2; % half to compensate round trip
Clutter
The clutter signal was generated using the simplest clutter model, the constant gamma model, with
the gamma value set to -20 dB. Such a gamma value is typical for flatland clutter. Assume that the
clutter patches exist at all ranges, and that each patch has an azimuth width of 10 degrees. Also
assume that the main beam of the radar points horizontally. Note that the radar is not moving.
trgamma = surfacegamma('flatland');
clutter = phased.ConstantGammaClutter('Sensor',antenna,...
'PropagationSpeed',radiator.PropagationSpeed,...
'OperatingFrequency',radiator.OperatingFrequency,...
'SampleRate',waveform.SampleRate,'TransmitSignalInputPort',true,...
'PRF',waveform.PRF,'Gamma',trgamma,'PlatformHeight',sensorheight,...
'PlatformSpeed',0,'PlatformDirection',[0;0],...
'BroadsideDepressionAngle',0,'MaximumRange',5000,...
'AzimuthCoverage',360,'PatchAzimuthWidth',10,...
'SeedSource','Property','Seed',2011);
Now we simulate 10 received pulses for the radar and targets defined earlier.
pulsenum = 10;
rxPulse = helperMTISimulate(waveform,transmitter,receiver,...
radiator,collector,sensormotion,...
target,tgtmotion,clutter,pulsenum);
matchingcoeff = getMatchedFilter(waveform);
matchedfilter = phased.MatchedFilter('Coefficients',matchingcoeff);
mfiltOut = matchedfilter(rxPulse);
matchingdelay = size(matchingcoeff,1)-1;
mfiltOut = buffer(mfiltOut(matchingdelay+1:end),size(mfiltOut,1));
MTI processing uses MTI filters to remove low frequency components in slow time sequences.
Because land clutter usually is not moving, removing low frequency components can effectively
suppress it. The three-pulse canceller is a popular and simple MTI filter. The canceller is an all-zero
FIR filter with filter coefficients [1 -2 1].
h = [1 -2 1];
mtiseq = filter(h,1,mfiltOut,[],2);
Use noncoherent pulse integration to combine the slow time sequences. Exclude the first two pulses
because they are in the transient period of the MTI filter.
mtiseq = pulsint(mtiseq(:,3:end));
% For comparison, also integrate the matched filter output
mfiltOut = pulsint(mfiltOut(:,3:end));
17 Featured Examples
Recall that there are two targets (at 2 km and 3 km). Before MTI filtering, both targets are buried in clutter returns. The peak at 100 m is the direct-path return from the ground directly below the radar. Notice that the power decreases with increasing range, which is due to signal propagation loss.
After MTI filtering, most clutter returns are removed except for the direct-path peak. The noise floor is no longer a function of range, so the noise is now receiver noise rather than clutter. This change shows the clutter suppression capability of the three-pulse canceller. At 2 km range, we see a peak representing the first target. However, there is no peak at 3 km range for the second target. The peak disappears because the second target travels at one of the canceller's blind speeds and is therefore suppressed.
To better understand the blind speed problem, let us look at the frequency response of the three-
pulse canceller.
f = linspace(0,prf*9,1000);
hresp = freqz(h,1,f,prf);
plot(f/1000,20*log10(abs(hresp)));
grid on; xlabel('Doppler Frequency (kHz)'); ylabel('Magnitude (dB)');
title('Frequency Response of the Three-Pulse Canceller');
Notice the recurring nulls in the frequency response. The nulls correspond to the Doppler frequencies of the blind speeds. Targets with these Doppler frequencies are cancelled by the three-pulse canceller. The plot shows that the nulls occur at integer multiples of the PRF (approximately 30 kHz, 60 kHz, ...). If we can remove these nulls or push them outside the Doppler frequency region of the radar specifications, we can avoid the blind speed problem.
One solution to the blind speed problem is to use a nonuniform PRF, also known as staggered PRFs. Adjacent pulses are transmitted at different pulse repetition frequencies. Such a configuration pushes the lower bound of the blind speeds to a much higher value. To illustrate this idea, we use two staggered PRFs and plot the resulting frequency response of the three-pulse canceller.
Let us choose a second PRF of around 25 kHz, which corresponds to a maximum unambiguous range of 6 km.
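The code that generates this plot is not shown above; the following is a minimal sketch of how it might look, assuming prf and wavespeed are still in the workspace from earlier in the example. It approximates the slow-time response of the [1 -2 1] canceller applied to nonuniformly spaced pulses:

```matlab
% Sketch (assumed variable names): second PRF for a 6 km
% maximum unambiguous range, about 25 kHz
prf2 = wavespeed/(2*6000);
T1 = 1/prf;  T2 = 1/prf2;      % stagger periods
% Three-pulse canceller response versus Doppler frequency fd for
% samples taken at times 0, T1, and T1+T2 with weights [1 -2 1]
fd = linspace(0,200e3,2000);
hstag = 1 - 2*exp(-1i*2*pi*fd*T1) + exp(-1i*2*pi*fd*(T1+T2));
plot(fd/1000,20*log10(abs(hstag)));
grid on; xlabel('Doppler Frequency (kHz)'); ylabel('Magnitude (dB)');
title('Staggered-PRF Response of the Three-Pulse Canceller');
```

With this stagger, a deep null requires the Doppler frequency to be a multiple of both PRFs simultaneously, which is what pushes the first blind speed up to the common multiple near 150 kHz.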
From the plot of the staggered PRFs, we can see that the first blind speed corresponds to a Doppler frequency of 150 kHz, five times larger than in the uniform PRF case. Thus the target with the 30 kHz Doppler frequency will not be suppressed.
Now, simulate the reflected signals from the targets using the staggered PRFs.
release(clutter);
clutter.PRF = prf;
We process the pulses as before by first passing them through a matched filter and then integrating
the pulses noncoherently.
mfiltOut = matchedfilter(rxPulse);
% Use the same three-pulse canceller to suppress the clutter.
mtiseq = filter(h,1,mfiltOut,[],2);
% Noncoherent integration
mtiseq = pulsint(mtiseq(:,3:end));
mfiltOut = pulsint(mfiltOut(:,3:end));
The plot shows both targets are now detectable after MTI filtering, and the clutter is also removed.
Summary
With very simple operations, MTI processing can effectively suppress low speed clutter. A uniform PRF waveform will miss targets at blind speeds, but this issue can be addressed by using staggered PRFs. For clutter with a wide spectrum, MTI performance can be poor. That type of clutter can be suppressed using space-time adaptive processing. See the example “Introduction to Space-Time Adaptive Processing” on page 17-231 for details.
Appendix
Radar Definition
First we create a phased array radar. We reuse most of the subsystems built in the example
“Designing a Basic Monostatic Pulse Radar” on page 17-449. Readers are encouraged to explore the
details of radar system design through that example. A major difference is that we use a 30-by-30
uniform rectangular array (URA) in place of the original single antenna.
load BasicMonostaticRadarExampleData;
ura = phased.URA('Element',antenna,...
'Size',[30 30],'ElementSpacing',[lambda/2, lambda/2]);
% Configure the antenna elements such that they only transmit forward
ura.Element.BackBaffled = true;
Scan Radar Using a Uniform Rectangular Array
radiator.Sensor = ura;
collector.Sensor = ura;
Now we need to recalculate the transmit power. The original transmit power was calculated based on
a single antenna. For a 900-element array, the power required for each element is much less.
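The calculation itself is omitted here. Because the array adds gain on both transmit and receive, one plausible sketch divides the single-antenna peak power from the loaded example data (about 5.2 kW) by the square of the element count; the variable name peak_power_single is an assumption:

```matlab
% Sketch: per-element peak power for the 900-element URA
% (peak_power_single is an assumed name for the original
% single-antenna peak power from the loaded example data)
N = 30*30;
peak_power = peak_power_single/N^2
```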
peak_power = 0.0065
We also need to design the scanning schedule of the phased array. To simplify the example, we only
search in the azimuth dimension. We require the radar to search from 45 degrees to -45 degrees in
azimuth. The revisit time should be less than 1 second, meaning that the radar should revisit the
same azimuth angle within 1 second.
initialAz = 45; endAz = -45;
volumnAz = initialAz - endAz;
To determine the required number of scans, we need to know the beamwidth of the array response.
We use an empirical formula to estimate the 3-dB beamwidth.
G = 4π/θ²
where G is the array gain and θ is the 3-dB beamwidth in radians.
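Treating the array gain as roughly the element count (900 for the 30-by-30 URA), the formula can be evaluated directly. This is a sketch of the estimate, not the exact toolbox computation:

```matlab
% Sketch: solve G = 4*pi/theta^2 for theta, approximating the
% array gain by the number of elements
G = 30*30;
theta = sqrt(4*pi/G)*180/pi   % 3-dB beamwidth in degrees, about 6.77
```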
theta = 6.7703
The 3-dB beamwidth is 6.77 degrees. To allow for some beam overlap in space, we choose the scan
step to be 6 degrees.
scanstep = -6;
scangrid = initialAz+scanstep/2:scanstep:endAz;
numscans = length(scangrid);
pulsenum = int_pulsenum*numscans;
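The revisit time follows from the total pulse count and the PRF. A sketch, assuming the PRF from the loaded example data (about 30 kHz) is in the variable prf:

```matlab
% Sketch: time to transmit every pulse in one pass across the scan
revisitTime = pulsenum/prf
```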
revisitTime = 0.0050
The resulting revisit time is 0.005 second, well below the prescribed upper limit of 1 second.
Target Definition
We want to simulate the pulse returns from two non-fluctuating targets, both at 0 degrees elevation.
The first target is approaching the radar, while the second target is moving away from it.
tgtpos = [[3532.63; 800; 0],[2020.66; 0; 0]];
tgtvel = [[-100; 50; 0],[60; 80; 0]];
tgtmotion = phased.Platform('InitialPosition',tgtpos,'Velocity',tgtvel);
numtargets = length(target.MeanRCS);
Pulse Synthesis
Now that all subsystems are defined, we can proceed to simulate the received signals. The total simulation time corresponds to one pass through the surveillance region. Because the reflected signals are received by an array, we use a beamformer pointing to the steering direction to obtain the combined signal.
for m = 1:pulsenum
% Form transmit beam for this scan angle and simulate propagation
pulse = waveform();
[txsig,txstatus] = transmitter(pulse);
txsig = radiator(txsig,tgtang,w);
txsig = channel(txsig,sensorpos,tgtpos,sensorvel,tgtvel);
Matched Filter
To process the received signal, we first pass it through a matched filter, then integrate all pulses for
each scan angle.
% Matched filtering
matchingcoeff = getMatchedFilter(waveform);
matchedfilter = phased.MatchedFilter(...
'Coefficients',matchingcoeff,...
'GainOutputPort',true);
[mf_pulses, mfgain] = matchedfilter(rxpulses);
mf_pulses = reshape(mf_pulses,[],int_pulsenum,numscans);
matchingdelay = size(matchingcoeff,1)-1;
sz_mfpulses = size(mf_pulses);
mf_pulses = [mf_pulses(matchingdelay+1:end) zeros(1,matchingdelay)];
mf_pulses = reshape(mf_pulses,sz_mfpulses);
% Pulse integration
int_pulses = pulsint(mf_pulses,'noncoherent');
int_pulses = squeeze(int_pulses);
% Visualize
r = v*fast_time_grid/2;
X = r'*cosd(scangrid); Y = r'*sind(scangrid);
clf;
pcolor(X,Y,pow2db(abs(int_pulses).^2));
axis equal tight
shading interp
axis off
text(-800,0,'Array');
text((max(r)+10)*cosd(initialAz),(max(r)+10)*sind(initialAz),...
[num2str(initialAz) '^o']);
text((max(r)+10)*cosd(endAz),(max(r)+10)*sind(endAz),...
[num2str(endAz) '^o']);
text((max(r)+10)*cosd(0),(max(r)+10)*sind(0),[num2str(0) '^o']);
colorbar;
From the scan map, we can clearly see two peaks. The nearer one is at around 0 degrees azimuth; the farther one is at around 10 degrees azimuth.
To obtain an accurate estimation of the target parameters, we apply threshold detection on the scan
map. First we need to compensate for signal power loss due to range by applying time varying gains
to the received signal.
range_gates = v*fast_time_grid/2;
tvg = phased.TimeVaryingGain(...
'RangeLoss',2*fspl(range_gates,lambda),...
'ReferenceLoss',2*fspl(max(range_gates),lambda));
tvg_pulses = tvg(mf_pulses);
% Pulse integration
int_pulses = pulsint(tvg_pulses,'noncoherent');
int_pulses = squeeze(int_pulses);
We now visualize the detection process. To better represent the data, we only plot range samples
beyond 50.
N = 51;
clf;
surf(X(N:end,:),Y(N:end,:),...
pow2db(abs(int_pulses(N:end,:)).^2));
hold on;
mesh(X(N:end,:),Y(N:end,:),...
pow2db(threshold*ones(size(X(N:end,:)))),'FaceAlpha',0.8);
view(0,56);
axis off
There are two peaks visible above the detection threshold, corresponding to the two targets we
defined earlier. We can find the locations of these peaks and estimate the range and angle of each
target.
[~,peakInd] = findpeaks(int_pulses(:),'MinPeakHeight',sqrt(threshold));
[rngInd,angInd] = ind2sub(size(int_pulses),peakInd);
est_range = range_gates(rngInd); % Estimated range
est_angle = scangrid(angInd); % Estimated direction
Doppler Estimation
Next, we want to estimate the Doppler speed of each target. For details on Doppler estimation, refer
to the example “Doppler Estimation” on page 17-295.
for m = numtargets:-1:1
[p, f] = periodogram(mf_pulses(rngInd(m),:,angInd(m)),[],256,prf, ...
'power','centered');
speed_vec = dop2speed(f,lambda)/2;
spectrum_data = p/max(p);
[~,dop_detect1] = findpeaks(pow2db(spectrum_data),'MinPeakHeight',-5);
sp(m) = speed_vec(dop_detect1); % Estimated Doppler speed
end
Finally, we have estimated all the parameters of both detected targets. Below is a comparison of the
estimated and true parameter values.
------------------------------------------------------------------------
Estimated (true) target parameters
------------------------------------------------------------------------
Range (m) Azimuth (deg) Speed (m/s)
Target 1: 3625.00 (3622.08) 12.00 (12.76) 86.01 (86.49)
Target 2: 2025.00 (2020.66) 0.00 (0.00) -59.68 (-60.00)
Summary
In this example, we showed how to simulate a phased array radar scanning a predefined surveillance region. We illustrated how to design the scanning schedule. A conventional beamformer was used to process the received multichannel signal. The range, angle, and Doppler information of each target is extracted from the reflected pulses. This information can be used in further tasks such as high resolution direction-of-arrival estimation or target tracking.
System Setup
The system operates at 300 MHz, using a linear FM waveform whose maximum unambiguous range
is 48 km. The range resolution is 50 meters and the time-bandwidth product is 20.
The transmitter has a peak power of 2 kW and a gain of 20 dB. The receiver also provides a gain of 20 dB, and the noise bandwidth is the same as the waveform's sweep bandwidth.
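The parameter definitions for these specifications are not shown. A sketch follows; maxrng, rngres, and tbprod are the names the helper calls below expect, while the remaining names are assumptions:

```matlab
fc = 300e6;      % operating frequency (Hz)
c = 3e8;         % propagation speed (m/s)
maxrng = 48e3;   % maximum unambiguous range (m)
rngres = 50;     % range resolution (m)
tbprod = 20;     % time-bandwidth product
```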
The transmit antenna array is a stationary 4-element ULA located at origin. The array is made of
vertical dipoles.
txAntenna = phased.ShortDipoleAntennaElement('AxisDirection','Z');
[waveform,transmitter,txmotion,radiator] = ...
helperBistatTxSetup(maxrng,rngres,tbprod,txAntenna);
The receive antenna array is also a 4-element ULA. It is located at [20000;1000;100] meters relative to the transmit antenna and is moving at a velocity of [0;20;0] m/s. Assume the elements in the receive array are also vertical dipoles. The receive antenna array is oriented so that its broadside points back to the transmit antenna.
rxAntenna = phased.ShortDipoleAntennaElement('AxisDirection','Z');
[collector,receiver,rxmotion,rngdopresp,beamformer] = ...
helperBistatRxSetup(rngres,rxAntenna);
There are two targets present in space. The first one is a point target modeled as a sphere; it preserves the polarization state of the incident signal. It is located at [15000;1000;500] meters relative to the transmit array and is moving at a velocity of [100;100;0] m/s.
The second target is located at [35000;-1000;1000] meters relative to the transmit array and is approaching at a velocity of [-160;0;-50] m/s. Unlike the first target, the second target flips the polarization state of the incident signal: the horizontal/vertical polarization components of the input signal become the vertical/horizontal polarization components of the output signal.
[target,tgtmotion,txchannel,rxchannel] = ...
helperBistatTargetSetup(waveform.SampleRate);
A single scattering matrix is a fairly simple polarimetric model for a target. It assumes that, no matter what the incident and reflecting directions are, the power distribution between the H and V components is fixed. However, even such a simple model can reveal complicated target behavior in the simulation because (1) the H and V directions vary for different incident and reflecting directions, and (2) the orientation of the targets, defined by their local coordinate systems, also affects the polarization matching.
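As an illustration only (the actual values are set inside the helper function), scattering matrices exhibiting the two behaviors described above might look like:

```matlab
S1 = [1 0;0 1];   % sphere-like target: preserves the H and V components
S2 = [0 1;1 0];   % second target: swaps the H and V components
```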
Simulating a Bistatic Polarimetric Radar
System Simulation
The next section simulates 256 received pulses. The receive array is beamformed toward the two targets. The first figure shows the system setup and how the receive array and the targets move. The second figure shows a range-Doppler map generated for every 64 pulses received at the receive array.
Nblock = 64; % Burst size
dt = 1/waveform.PRF;
y = complex(zeros(round(waveform.SampleRate*dt),Nblock));
hPlots = helperBistatViewSetup(txmotion,rxmotion,tgtmotion,waveform,...
rngdopresp,y);
Npulse = Nblock*4;
for m = 1:Npulse
sigtgt(n) = target{n}(sigtx(n),fwang,bckang,tgtax(:,:,n));
end
rspeed_t = radialspeed(tgtp,tgtv,tpos,tvel);
rspeed_r = radialspeed(tgtp,tgtv,rpos,rvel);
helperBistatViewTrajectory(hPlots,tpos,rpos,tgtp);
if ~rem(m,Nblock)
rd_rng = (txrng+rxrng)/2;
rd_speed = rspeed_t+rspeed_r;
helperBistatViewSignal(hPlots,waveform,rngdopresp,y,rd_rng,...
rd_speed)
end
end
The range-Doppler map only shows the return from the first target. This is no surprise because both the transmit and receive arrays are vertically polarized and the second target maps the vertically polarized wave to a horizontally polarized wave. The received signal from the second target is mostly orthogonal to the receive array's polarization, resulting in significant polarization loss.
One may also notice that the resulting range and radial speed do not agree with the range and radial speed of the target relative to the transmitter. This is because in a bistatic configuration, the estimated range is actually the average of the target's range to the transmitter and its range to the receiver, that is, half the bistatic range sum. Similarly, the estimated radial speed is the sum of the target's radial speeds relative to the transmitter and to the receiver. The circle in the map shows where the targets should appear in the range-Doppler map. Further processing is required to identify the exact location of the target, but that is beyond the scope of this example.
A vertical dipole is a very popular choice of transmit antenna in real applications because it is low cost and has an omnidirectional pattern. However, the previous simulation shows that if the same antenna is used in the receiver, the system risks missing certain targets. Therefore, a linearly polarized antenna is often not the best choice as the receive antenna in such a configuration: no matter how the linear polarization is aligned, there always exists an orthogonal polarization. If the reflected signal bears a polarization state close to that orthogonal direction, the polarization loss becomes severe.
One way to solve this issue is to use a circularly polarized antenna at the receive end. A circularly polarized antenna cannot fully match any linear polarization. On the other hand, the polarization loss between a circularly polarized antenna and a linearly polarized signal is 3 dB, regardless of the direction of the linear polarization. Therefore, although it never yields the maximum return, it never misses a target. A frequently used antenna with circular polarization is the crossed dipole antenna.
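The 3 dB figure can be checked with the toolbox polloss function, comparing a linearly polarized field against a circularly polarized one; the [H;V] field vectors below are illustrative:

```matlab
fv_lin = [1;0];               % horizontal linear polarization
fv_circ = [1;1i]/sqrt(2);     % circular polarization
L = polloss(fv_lin,fv_circ)   % polarization loss in dB, about 3 dB
```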
The next section shows what happens when crossed dipole antennas are used to form the receive array.
rxAntenna = phased.CrossedDipoleAntennaElement;
collector = clone(collector);
collector.Sensor.Element = rxAntenna;
helperBistatSystemRun(waveform,transmitter,txmotion,radiator,collector,...
receiver,rxmotion,rngdopresp,beamformer,target,tgtmotion,txchannel,...
rxchannel,hPlots,Nblock,Npulse);
The range-Doppler map now shows both targets at their correct locations.
Summary
This example shows a system-level simulation of a bistatic polarimetric radar. The example generates range-Doppler maps of the received signal for different transmit/receive array polarization configurations and shows how a circularly polarized antenna can be used to avoid losing linearly polarized signals due to a target's polarization scattering properties.
Simulation Setup
First, set up the radar system with some basic parameters. The entire radar system is similar to the
one shown in the “Waveform Design to Improve Performance of an Existing Radar System” on page
17-165 example.
fs = 6e6;
bw = 3e6;
c = 3e8;
fc = 10e9;
prf = 18750;
num_pulse_int = 10;
[waveform,transmitter,radiator,collector,receiver,sensormotion,...
target,tgtmotion,channel,matchedfilter,tvg,threshold] = ...
helperRadarStreamExampleSystemSetup(fs,bw,prf,fc,c);
System Simulation
Next, run the simulation for 100 pulses. During this simulation, four time scopes are used to observe the signals at various stages. The first three scopes display the transmitted signal, the received signal, and the post-matched-filter, gain-adjusted signal for 10 pulses. Although the transmitted signal is a high-power pulse train, scope 2 shows a much weaker received signal due to propagation loss. This signal cannot be detected using the preset detection threshold. Even after matched filtering and gain compensation, it is still challenging to detect all three targets.
% pre-allocation
fast_time_grid = 0:1/fs:1/prf-1/fs;
num_pulse_samples = numel(fast_time_grid);
rx_pulses = complex(zeros(num_pulse_samples,num_pulse_int));
mf_pulses = complex(zeros(num_pulse_samples,num_pulse_int));
detect_pulse = zeros(num_pulse_samples,1);
% simulation loop
for m = 1:10*num_pulse_int
Stream and Accelerate Simulation of Radar System
txsig = channel(txsig,sensorpos,tgtpos,sensorvel,tgtvel);
% Detection processing
mf_pulses(:,nn) = matchedfilter(rx_pulses(:,nn));
mf_pulses(:,nn) = tvg(mf_pulses(:,nn));
helperRadarStreamDisplay(pulse,abs(rx_pulses(:,nn)),...
abs(mf_pulses(:,nn)),detect_pulse,...
sqrt(threshold)*ones(num_pulse_samples,1));
end
Because radar systems require intensive processing, simulation speed is a major concern. After you have run 100 pulses to check out your code, you may want to run 1000 pulses. When you run the simulation in interpreted MATLAB mode, you can measure the elapsed time using:
tic;
helperRadarStreamRun;
time_interpreted = toc
time_interpreted =
5.7406
If the simulation is too slow, you can speed it up using MATLAB Coder. MATLAB Coder can generate compiled MATLAB code, resulting in a significant improvement in processing speed. In this example, MATLAB Coder generates a helperRadarStreamRun_mex function from the helperRadarStreamRun function. The command used is shown below:
codegen helperRadarStreamRun.m
tic;
helperRadarStreamRun_mex;
time_compiled = toc
time_compiled =
1.4398
The speedup depends on several factors, such as machine CPU speed and available memory, but a 3-4x improvement is typical. Note that the visualization of data using scopes is not sped up by MATLAB Coder and is still handled by the MATLAB interpreter. If the visualizations are not critical to your simulation, you can remove them for a further speed improvement.
1 The visualization capability in generated code is very limited compared to what is available in MATLAB. If you need to keep the visualization in the simulation, declare the visualization functions with coder.extrinsic, but doing so slows down the simulation.
2 The generated code does not allow dynamic changes in variable type and size, unlike the original MATLAB code. The generated code is often optimized for a particular variable type and size; therefore, any change in a variable's type or size, caused for example by a change in PRF, requires a recompile.
3 The simulation speed benefit becomes more important when the MATLAB simulation time is long. If the MATLAB simulation finishes in a few seconds, you do not gain much by generating code from the original MATLAB simulation. As mentioned in the previous item, it is often necessary to recompile the code when parameters change. Therefore, it might be better to first use MATLAB simulation to identify appropriate parameter values and then use generated code to run long simulations.
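As an illustration of the coder.extrinsic approach mentioned in the first item, a function like the following keeps a plot call in generated code by delegating it back to the MATLAB interpreter; the function name and its contents are hypothetical:

```matlab
function helperPlotInGeneratedCode(x) %#codegen
% Declare plot as extrinsic so the generated code calls back into
% MATLAB for the visualization. The call itself is not compiled,
% which is why keeping it slows the simulation down.
coder.extrinsic('plot');
plot(abs(x));
end
```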
Summary
This example shows how to perform radar system simulation in streaming mode. It also shows how code generation can be used to speed up the simulation. The tradeoff between using generated code and MATLAB code is discussed at the end.
Visualizing Radar and Target Trajectories in System Simulation
Introduction
A radar system simulation often includes many moving objects. For example, both the radar and the targets can be in motion. In addition, each moving object can have its own orientation, so the bookkeeping becomes more and more challenging as more players are present in the simulation. Phased Array System Toolbox™ provides a scenario viewer to help visualize how radars and targets move in space. Through the scenario viewer, one can follow the trajectory of each moving platform and examine the relative motion between the radar and the target.
Visualize Trajectories
In the first example, the scenario viewer is used to visualize the trajectories of a radar and a target. Assume that the radar is circling the origin at a distance of 3 km. The plane carrying the radar flies at 250 m/s (about 560 mph) and makes a circle approximately every 60 seconds.
v = 250;
deltaPhi = 360/60;
sensormotion = phased.Platform(...
'InitialPosition',[0;-3000;500],...
'VelocitySource','Input port',...
'InitialVelocity',[0;v;0]);
The target is traveling along a straight road with a velocity of 30 m/s along the x-axis, which is approximately 67 mph.
tgtmotion = phased.Platform('InitialPosition',[0;0;0],...
'Velocity',[30;0;0]);
The viewer is set to update at every 0.1 second. For the simplest case, the beam is not shown in the
viewer.
tau = 0.5;
sceneview = phased.ScenarioViewer('ShowBeam','None');
for m = 1:tau:60
[sensorpos,sensorvel] = sensormotion(tau,...
v*[cosd(m*deltaPhi);sind(m*deltaPhi);0]);
[tgtpos,tgtvel] = tgtmotion(tau);
sceneview(sensorpos,sensorvel,tgtpos,tgtvel);
drawnow;
end
The next natural step is to visualize the radar beams together with the trajectories in the viewer. The following example shows how to visualize two radars and three targets moving in space. In particular, the first radar has a beam tracking the first target.
First, set up the radars and targets. Note that the first radar and the first target match those used in the previous section.
sensormotion = phased.Platform(...
'InitialPosition',[0 0;-3000 500;500 1], ...
'VelocitySource','Input port', ...
'InitialVelocity',[0 100;v 0;0 0], ...
'OrientationAxesOutputPort', true);
tgtmotion = phased.Platform(...
'InitialPosition',[0 2000.66 3532.63;0 0 500;0 500 500],...
'Velocity',[30 120 -120; 0 0 -20; 0 0 60],...
'OrientationAxesOutputPort', true);
To point the beam properly, the scenario viewer needs the orientation information of the radars and targets. Such information can be obtained from these moving platforms at each simulation step by setting the OrientationAxesOutputPort property to true, as shown in the code above. To pass this information to the viewer, set the scenario viewer's OrientationInputPort property to true.
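The viewer construction for this scenario is not repeated here; a sketch of how it might be configured follows, with the property values as assumptions:

```matlab
% Sketch (assumed property values): show the beams and accept
% orientation inputs at each simulation step
sceneview = phased.ScenarioViewer('BeamRange',3000,'BeamWidth',5,...
    'ShowBeam','All','OrientationInputPort',true,'UpdateRate',1/tau);
```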
Note the displayed beam has a beamwidth of 5 degrees and a length of 3 km. The camera perspective
is also adjusted to visualize all trajectories more clearly.
for m = 1:60
[sensorpos,sensorvel,sensoraxis] = sensormotion(tau,...
[v*[cosd(m*deltaPhi);sind(m*deltaPhi);0] [100; 0; 0]]);
[tgtpos,tgtvel,tgtaxis] = tgtmotion(tau);
% Radar 1 tracks Target 1
[lclrng, lclang] = rangeangle(tgtpos(:,1),sensorpos(:,1),...
sensoraxis(:,:,1));
% Update beam direction
sceneview.BeamSteering = [lclang [0;0]];
sceneview(sensorpos,sensorvel,sensoraxis,tgtpos,tgtvel,tgtaxis);
drawnow;
end
The scenario viewer can also be combined with other visualizations to provide more information about the system under simulation. The next example uses a scenario viewer together with a range-time intensity (RTI) scope and a Doppler-time intensity (DTI) scope so that an engineer can examine whether the estimated ranges and range rates of the targets match the ground truth.
The example uses the radar system created in the “Designing a Basic Monostatic Pulse Radar” on page 17-449 example.
load BasicMonostaticRadarExampleData.mat
sensormotion = phased.Platform(...
'InitialPosition',[0; 0; 10],...
'Velocity',[0; 0; 0]);
target = phased.RadarTarget(...
channel = phased.FreeSpace(...
'SampleRate',fs,...
'TwoWayPropagation',true,...
'OperatingFrequency',fc);
Once the echo arrives at the receiver, a matched filter and a pulse integrator are used to perform the range estimation.
matchingcoeff = getMatchedFilter(waveform);
matchingdelay = size(matchingcoeff,1)-1;
matchedfilter = phased.MatchedFilter(...
'Coefficients',matchingcoeff,...
'GainOutputPort',true);
prf = waveform.PRF;
fast_time_grid = unigrid(0,1/fs,1/prf,'[)');
rangeGates = c*fast_time_grid/2;
lambda = c/fc;
max_range = 5000;
tvg = phased.TimeVaryingGain(...
'RangeLoss',2*fspl(rangeGates,lambda),...
'ReferenceLoss',2*fspl(max_range,lambda));
num_pulse_int = 10;
Because it is unnecessary to monitor the trajectory at the pulse repetition rate, this example assumes that the system reads the radar measurement at a rate of 20 Hz. The example uses a scenario viewer to monitor the scene, and a range-time intensity (RTI) plot as well as a Doppler-time intensity (DTI) plot to examine the estimated range and range rate values.
r_update = 20;
sceneview = phased.ScenarioViewer('UpdateRate',r_update,...
'Title','Monostatic Radar with Three Targets');
nfft = 128;
df = prf/nfft;
dtiscope = phased.IntensityScope(...
'Name','Doppler-Time Intensity Scope',...
'XLabel','Velocity (m/sec)', ...
'XResolution',dop2speed(df,lambda)/2, ...
'XOffset', dop2speed(-prf/2,lambda)/2, ...
'TimeResolution',1/r_update,'TimeSpan',5,'IntensityUnits','dB');
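The RTI scope used in the simulation loop below is not shown above; a sketch of its construction, mirroring the DTI scope with assumed parameter values:

```matlab
% Sketch (assumed values): RTI scope with one bin per range gate
rtiscope = phased.IntensityScope(...
    'Name','Range-Time Intensity Scope',...
    'XLabel','Range (m)', ...
    'XResolution',rangeGates(2)-rangeGates(1), ...
    'TimeResolution',1/r_update,'TimeSpan',5,'IntensityUnits','dB');
```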
The next section performs the system simulation and produces the visualizations.
for k = 1:100
for m = 1:num_pulse_int
% Update sensor and target positions
[sensorpos,sensorvel] = sensormotion(1/prf);
[tgtpos,tgtvel] = tgtmotion(1/prf);
rxpulses = matchedfilter(rxpulses);
rxpulses = tvg(rxpulses);
rx_int = pulsint(rxpulses,'noncoherent');
% display RTI
rtiscope(rx_int);
% display DTI
rx_dop = mean(fftshift(...
abs(fft(rxpulses,nfft,2)),2));
dtiscope(rx_dop.');
% display scene
sceneview(sensorpos,sensorvel,tgtpos,tgtvel);
hide(dtiscope);
hide(rtiscope);
hide(sceneview);
show(rtiscope);
Both the scenario viewer and the RTI are updated during the simulation, so one can easily verify that the simulation is running as expected and that the range estimates match the ground truth while the simulation is running.
hide(rtiscope);
show(dtiscope);
Similarly, the DTI provides the range rate estimates of each target.
Conclusions
This example describes different ways to visualize the trajectories of radars and targets. Such visualizations help provide an overall picture of the system dynamics.
This model simulates a simple end-to-end monostatic radar. Using the transmitter block without the narrowband transmit array block is equivalent to modeling a single isotropic antenna element. Rectangular pulses are amplified by the transmitter block, then propagated to and from a target in free space. Noise and amplification are then applied to the return signal in the receiver preamp block, followed by a matched filter. Range losses are compensated for, and the pulses are noncoherently integrated. Most of the design specifications are derived from the “Designing a Basic Monostatic Pulse Radar” on page 17-449 example provided for System objects.
The model consists of a transceiver, a channel, and a target. The blocks that correspond to each section of the model are:
Transceiver
End-to-End Monostatic Radar
Channel
• Freespace - Applies propagation delays, losses and Doppler shifts to the pulses. One block is
used for the transmitted pulses and another one for the reflected pulses. The Freespace blocks
require the positions and velocities of the radar and the target. Those are supplied using the Goto
and From blocks.
Target
• Target - Subsystem that reflects the pulses according to the specified RCS. This subsystem includes a Platform block that models the speed and position of the target, which are supplied to the Freespace blocks using the Goto and From blocks. In this example, the target is stationary and positioned 1998 meters from the radar.
Several dialog parameters of the model are calculated by the helper function helperslexMonostaticRadarParam. To open the function from the model, click the Modify Simulation Parameters block. This function is executed once when the model is loaded. It exports to the workspace a structure whose fields are referenced by the dialogs. To modify any parameter, either change the values in the structure at the command prompt or edit the helper function and rerun it to update the parameter structure.
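For example, assuming the exported structure is named paramMonostatic and has a field for the target RCS (both names are hypothetical), a parameter could be changed at the command prompt like this:

```matlab
paramMonostatic.targetRCS = 2;   % hypothetical structure and field names
```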
The figure below shows the range of the target. Target range is computed from the round-trip delay of the reflected pulse. The delay is measured from the peak of the matched filter output. We can see that the target is approximately 2000 meters from the radar. This estimate is within the radar's 50-meter range resolution of the actual range.
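The range computation described above can be sketched as follows; the delay value is hypothetical, chosen to match the target distance in this model:

```matlab
c = 3e8;          % propagation speed (m/s)
td = 13.32e-6;    % hypothetical round-trip delay at the peak (s)
rng_est = c*td/2  % one-way range: 1998 m
```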
This model estimates the ranges of four stationary targets using a monostatic radar. The radar transceiver uses a 4-element uniform linear array (ULA) for improved directionality and gain. A beamformer is also included in the receiver. The targets are positioned at 1988, 3532, 3845, and 1045 meters from the radar.
• Narrowband Tx Array - Models an antenna array for transmitting narrowband signals. The
antenna array is configured using the "Sensor Array" tab of the block's dialog panel. The
Narrowband Tx Array block models the transmission of the pulses through the antenna array in
the four directions specified using the Ang port. The output of this block is a matrix of four
columns. Each column corresponds to the pulses propagated towards the directions of the four
targets.
• Narrowband Rx Array - Models an antenna array for receiving narrowband signals. The array is
configured using the "Sensor Array" tab of the block's dialog panel. The block receives pulses from
the four directions specified using the Ang port. The input of this block is a matrix of four
columns. Each column corresponds to the pulses propagated from the direction of each target.
The output of the block is a matrix of 4 columns. Each column corresponds to the signal received
at each antenna element.
• Range Angle - Calculates the angles between the radar and the targets. The angles are used by
the Narrowband Tx Array and the Narrowband Rx Array blocks to determine in which
directions to model the pulses' transmission or reception.
• Phase Shift Beamformer - Beamforms the output of the Receiver Preamp. The input to the
beamformer is a matrix of 4 columns, one column for the signal received at each antenna element.
The output is a beamformed vector of the received signal.
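A hedged command-line analog of this beamforming step uses the toolbox object phased.PhaseShiftBeamformer. The 4-element ULA matches this example; the operating frequency and steering direction below are assumed values.

```matlab
% Command-line sketch of the Phase Shift Beamformer block.
fc = 3e8;                               % assumed operating frequency
array = phased.ULA('NumElements',4);    % 4-element ULA, as in this example
bf = phased.PhaseShiftBeamformer('SensorArray',array,...
    'OperatingFrequency',fc,'Direction',[0;0]);  % assumed look direction
x = randn(200,4);                       % stand-in for the N-by-4 received matrix
y = bf(x);                              % N-by-1 beamformed vector
```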
This example illustrates how to use single Platform, Freespace and Target blocks to model all
four round-trip propagation paths. In the Platform block, the initial positions and velocity
parameters are specified as three-by-four matrices. Each matrix column corresponds to a different
target. Position and velocity inputs to the Freespace block come from the outputs of the Platform
block as three-by-four matrices. Again, each matrix column corresponds to a different target. The
signal inputs and outputs of the Freespace block have four columns, one column for the propagation
path to each target. The Freespace block has the two-way propagation setting enabled. The "Mean
radar cross section" (RCS) parameter of the Target block is specified as a vector of four elements
representing the RCS of each target.
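The same vectorized configuration can be sketched at the command line with matrix-valued parameters. The target ranges come from this example; the velocities, RCS values, and operating frequency below are assumed for illustration.

```matlab
% Sketch: one matrix column (or vector element) per target.
tgtpos = [1988 3532 3845 1045; zeros(2,4)];  % 3-by-4 positions (m), assumed geometry
tgtvel = zeros(3,4);                         % stationary targets
platform = phased.Platform('InitialPosition',tgtpos,...
    'Velocity',tgtvel);
target = phased.RadarTarget('MeanRCS',[1 2 1 2],...  % assumed RCS values (m^2)
    'OperatingFrequency',3e8);                       % assumed frequency
```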
Several dialog parameters of the model are calculated by the helper function
helperslexMonostaticRadarMultipleTargetsParam. To open the function from the model, click the
Modify Simulation Parameters block. This function is executed once when the model is loaded.
It exports to the workspace a structure whose fields are referenced by the dialogs. To modify any
parameters, either change the values in the structure at the command prompt or edit the helper
function and rerun it to update the parameter structure.
The figure below shows the detected ranges of the targets. Target ranges are computed from the
round-trip time delay of the reflected signals from the targets. We can see that the targets are
approximately 2000, 3550, and 3850 meters from the radar. These results are within the radar's 50-
meter range resolution of the actual ranges.
This example expands on the narrowband monostatic radar system explored in the “End-to-End
Monostatic Radar” on page 17-498 example by modifying it for wideband radar simulation. For
wideband signals, both propagation losses and target RCS can vary considerably across the system's
bandwidth. Narrowband models cannot be used here, because they model propagation and target
reflections at only a single frequency. Instead, wideband models divide the
system's bandwidth into multiple subbands. Each subband is then modeled as a narrowband signal
and the received signals within each subband are recombined to determine the response across the
entire system's bandwidth.
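The subband decomposition can be illustrated with a short sketch. The center frequency, bandwidth, and number of subbands below are assumed values, not the parameters used by this model.

```matlab
% Sketch: dividing a wideband system into narrowband subbands.
fc = 10e9;                  % assumed center frequency
bw = 500e6;                 % assumed system bandwidth
numSubbands = 64;           % assumed number of subbands
subbandWidth = bw/numSubbands;
subbandFreqs = fc + (-numSubbands/2:numSubbands/2-1)*subbandWidth;
% Each subbandFreqs(k) is treated as a narrowband carrier, and the
% per-subband responses are recombined to cover the full bandwidth.
```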
The model consists of a transceiver, a channel, and a target. The blocks that correspond to each
section of the model are:
Transceiver
Modeling a Wideband Monostatic Radar in a Multipath Environment
• Signal Processing - Subsystem performs stretch processing, Doppler processing, and noise
floor estimation.
• Matrix Viewer - Displays the processed pulses as a function of the measured range, radial
speed, and estimated signal-to-interference-plus-noise ratio (SINR).
• Stretch Processor - Dechirps the received signal by mixing it in the analog domain with the
transmitted linear FM waveform delayed to a selected reference range. A more detailed
discussion on stretch processing is available in the Range Estimation Using Stretch Processing
example.
• Decimator - Subsystem models the sample rate of the analog-to-digital converter (ADC) by
reducing the simulation's sample rate according to the bandwidth required by the range span
selected in the stretch processor.
• Buffer CPI - Subsystem collects multiple pulse repetition intervals (PRIs) to form a coherent
processing interval (CPI), enabling radial speed estimation through Doppler processing.
• Range-Doppler Response - Computes DFTs along the range and Doppler dimensions to
estimate the range and radial speed of the received pulses.
• CA CFAR 2-D - Estimates the noise floor of the received signals using the cell-averaging (CA)
method in both range and Doppler.
• Compute SINR - Subsystem normalizes the received signal using the CFAR detector's computed
threshold, returning the estimated SINR in decibels (dB).
Channel
• Wideband Two-Ray - Applies propagation delays, losses, Doppler shifts and multipath reflections
off of a flat ground to the pulses. One block is used for the transmitted pulses and another one for
the reflected pulses. The Wideband Two-Ray blocks require the positions and velocities of the
radar and the target. Those are supplied using the Goto and From blocks.
Target Subsystem
The Target subsystem models the target's motion and reflects the pulses according to the wideband
RCS model and the target's aspect angle presented to the radar. In this example, the target is
positioned 3000 meters from the wideband radar and is moving away from the radar at 100 m/s.
• Platform - Used to model the target's motion. The target's position and velocity values are used
by the Wideband Two-Ray Channel blocks to model propagation and by the Range Angle
block to compute the signal's incident angles at the target's location.
• Range Angle - Computes the propagated signal's incident angles in azimuth and elevation at the
target's location.
• Wideband Backscatter Target - Models the wideband reflections of the target to the incident
pulses. The extended wideband target model introduced in the Modeling Target Radar Cross
Section example is used for this simulation.
Several dialog parameters of the model are calculated by the helper function
helperslexWidebandMonostaticRadarParam. To open the function from the model, click the Modify
Simulation Parameters block. This function is executed once when the model is loaded. It exports
to the workspace a structure whose fields are referenced by the dialogs. To modify any parameters,
either change the values in the structure at the command prompt or edit the helper function and
rerun it to update the parameter structure.
The figure below shows the range and radial speed of the target. Target range is computed from the
round-trip delay of the reflected pulses. The target's radial speed is estimated by using the DFT to
compare the phase progression of the received target returns across the coherent pulse interval
(CPI). The target's range and radial speed are measured from the peak of the stretch and Doppler
processed output.
Although only a single target was modeled in this example, three target returns are observed in the
upper right-hand portion of the figure. The multipath reflections along the transmit and receive paths
give rise to the second and third target returns, often referred to as the single- and double-bounce
returns, respectively. The expected range and radial speed for the target are computed from the
simulation parameters.
tgtRange = rangeangle(paramWidebandRadar.targetPos,...
paramWidebandRadar.sensorPos)
tgtRange =
3000
tgtSpeed = radialspeed(...
paramWidebandRadar.targetPos,paramWidebandRadar.targetVel,...
paramWidebandRadar.sensorPos,paramWidebandRadar.sensorVel)
tgtSpeed =
-100
This expected range and radial speed are consistent with the simulated results in the figure above.
The expected separation between the multipath returns can also be found. The figure below
illustrates the line-of-sight and reflected path geometries.
The modeled geometric parameters for this simulation are defined as follows.
zr = paramWidebandRadar.targetPos(3);
zs = paramWidebandRadar.sensorPos(3);
Rlos = tgtRange;
Rrp = sqrt((zr+zs)^2 + Rlos^2 - (zr-zs)^2)
Rrp =
3.0067e+03
For a monostatic system, the single-bounce return can traverse two different paths:
1 Radar → Target (line of sight), Target → Radar (surface bounce)
2 Radar → Target (surface bounce), Target → Radar (line of sight)
The range separation between consecutive multipath returns is then half the difference between
the reflected-path and line-of-sight ranges.
Rdelta = (Rrp-Rlos)/2
Rdelta =
3.3296
This matches the multipath range separation observed in the simulated results.
Summary
This example demonstrated how an end-to-end wideband radar system can be modeled within
Simulink®. The variation of propagation losses and target RCS across the system's bandwidth
required wideband propagation and target models to be used.
The signal to interference plus noise ratio (SINR) of the received target returns was estimated using
the CA CFAR 2-D block. The CFAR estimator used cell-averaging to estimate the noise and
interference power in the vicinity of the target returns which enabled calculation of the received
signal's SINR.
The target was modeled in a multipath environment using the Wideband Two-Ray Channel, which
gave rise to three target returns observed by the radar. These returns correspond to the line-of-sight,
single-bounce, and double-bounce paths associated with the two-way signal propagation between the
monostatic radar and the target. The simulated separation of the multipath returns in range was
shown to match the expected separation computed from the modeled geometry.
Introduction
Several examples, such as “End-to-End Monostatic Radar” on page 17-498 and “Automotive Adaptive
Cruise Control Using FMCW and MFSK Technology” on page 17-379 have shown that one can build
end-to-end radar systems in Simulink using Phased Array System Toolbox. In many cases, once the
system model is built, the next step is to add more fidelity to individual components. A popular
candidate for such refinement is the RF front end. One advantage of modeling the system in
Simulink is the capability of performing multidomain simulations.
The following sections show two examples of incorporating RF Blockset modeling capability in radar
systems built with Phased Array System Toolbox.
The first model is adopted from example “End-to-End Monostatic Radar” on page 17-498 which
simulates a monostatic pulse radar with one target. From the diagram itself, the model below looks
identical to the model shown in that example.
Modeling RF Front End in Radar System Simulation
When the model is executed, the resulting plot is also the same.
However, a deeper look in the transmitter subsystem shows that now the transmitter is modeled by
power amplifiers from RF Blockset.
With these changes, the model is capable of simulating RF behaviors. For example, the simulation
result shown above assumes a perfect power amplifier. In real applications, the amplifier suffers
from many nonlinearities. If one sets the IP3 of the transmitter to 70 dB and runs the simulation
again, the peak corresponding to the target is no longer as dominant. This gives the engineer
insight into the system's performance under different conditions.
The second example is adopted from “Automotive Adaptive Cruise Control Using FMCW and MFSK
Technology” on page 17-379. However, this model uses a triangle sweep waveform instead so the
system can estimate range and speed simultaneously. At the top level, the model is similar to what
gets built from Phased Array System Toolbox. Once executed, the model shows the estimated range
and speed values that match the distance and relative speed of the target car.
However, similar to the first example, the transmitter and receiver subsystems are now built with RF
Blockset blocks.
In a continuous wave radar system, part of the transmitted waveform is used as a reference to
dechirp the received target echo. From the diagrams above, one can see that the transmitted
waveform is sent to the receiver via a coupler and the dechirp is performed via an I/Q mixer.
Therefore, by adjusting parameters in those RF components, higher simulation fidelity can be
achieved.
Summary
This example shows two radar models that were originally built with Phased Array System Toolbox
and later incorporated RF models from RF Blockset. Combining the two products greatly improves
the simulation fidelity.
The following model shows an end-to-end simulation of a bistatic radar system. The system is divided
into three parts: the transmitter subsystem, the receiver subsystem, and the targets and their
propagation channels. The model shows the signal flowing from the transmitter, through the channels
to the targets and reflected back to the receiver. Range-Doppler processing is then performed at the
receiver to generate the range-Doppler map of the received echoes.
Transmitter
• Linear FM - Creates linear FM pulse as the transmitter waveform. The signal sweeps a 3 MHz
bandwidth, corresponding to a 50-meter range resolution.
• Radar Transmitter - Amplifies the pulse and simulates the transmitter motion. In this case, the
transmitter is mounted on a stationary platform located at the origin. The operating frequency of
the transmitter is 300 MHz.
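A command-line sketch of this transmit waveform follows. The 3 MHz sweep bandwidth matches the text; the sample rate, pulse width, and PRF are assumed values.

```matlab
% Sketch of the linear FM transmit waveform.
fs = 6e6;                          % assumed sample rate
wav = phased.LinearFMWaveform('SampleRate',fs,...
    'SweepBandwidth',3e6,...       % 3 MHz sweep -> ~50 m range resolution
    'PulseWidth',25e-6,...         % assumed pulse width
    'PRF',2e3);                    % assumed PRF
x = wav();                         % samples of one pulse repetition interval
```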
Simulating a Bistatic Radar with Two Targets
Targets
This example includes two targets with similar configurations. The targets are mounted on moving
platforms.
• Tx to Targets Channel - Propagates signal from the transmitter to the targets. The signal
inputs and outputs of the channel block have two columns, one column for the propagation path to
each target.
• Targets to Rx Channel - Propagates signal from the targets to the receiver. The signal inputs
and outputs of the channel block have two columns, one column for the propagation path from
each target.
• Targets - Reflects the incident signal and simulates the motion of both targets. The first target,
with an RCS of 2.5 square meters, is approximately 15 km from the transmitter and is moving at
a speed of 141 m/s. The second target, with an RCS of 4 square meters, is approximately 35 km
from the transmitter and is moving at a speed of 168 m/s. The RCS values of both targets are
specified as a vector of two elements in the Mean radar cross section parameter of the
underlying Target block.
Receiver
• Radar Receiver - Receives the target echo, adds receiver noise, and simulates the receiver
motion. The distance between the transmitter and the receiver is 20 km, and the receiver is
moving at a speed of 20 m/s. The distances between the receiver and the two targets are
approximately 5 km and 15 km, respectively.
• Range-Doppler Processing - Computes the range-Doppler map of the received signal. The
received signal is buffered to form a 64-pulse burst which is then passed to a range-Doppler
processor. The processor performs a matched filter operation along the range dimension and an
FFT along the Doppler dimension.
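This processing chain can be sketched with the toolbox object phased.RangeDopplerResponse. The 64-pulse burst matches the text; the waveform parameters, sample rate, and operating frequency below are assumed values.

```matlab
% Sketch: matched filter along range, FFT along Doppler.
fs = 6e6; fc = 3e8;                    % assumed sample rate and frequency
wav = phased.LinearFMWaveform('SampleRate',fs,...
    'SweepBandwidth',3e6,'PulseWidth',25e-6,'PRF',2e3);
mfcoeff = getMatchedFilter(wav);       % matched filter coefficients
rxBurst = randn(fs/2e3,64) + 1i*randn(fs/2e3,64);  % stand-in 64-pulse burst
resp = phased.RangeDopplerResponse('RangeMethod','Matched filter',...
    'SampleRate',fs,'OperatingFrequency',fc,'DopplerOutput','Speed');
[rdmap,rnggrid,spdgrid] = resp(rxBurst,mfcoeff);   % range-Doppler map
```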
Several dialog parameters of the model are calculated by the helper function
helperslexBistaticParam. To open the function from the model, click the Modify Simulation
Parameters block. This function is executed once when the model is loaded. It exports to the
workspace a structure whose fields are referenced by the dialogs. To modify any parameters, either
change the values in the structure at the command prompt or edit the helper function and rerun it to
update the parameter structure.
The figure below shows the two targets in the range-Doppler map.
Because this is a bistatic radar, the range-Doppler map above actually shows the target range as the
arithmetic mean of the distances from the transmitter to the target and from the target to the
receiver. Therefore, the expected range of the first target is approximately 10 km ((15+5)/2), and
that of the second target approximately 25 km ((35+15)/2). The range-Doppler map shows these
two values as the measured ranges.
Similarly, the Doppler shift of a target in a bistatic configuration is the sum of the target's Doppler
shifts relative to the transmitter and the receiver. The relative speeds to the transmitter are -106.4
m/s for the first target and 161.3 m/s for the second target while the relative speeds to the receiver
are 99.7 m/s for the first target and 158.6 m/s for the second target. Thus, the range-Doppler map shows
the overall relative speeds as -6.7 m/s (-24 km/h) and 319.9 m/s (1152 km/h) for the first target and
the second target, respectively, which agree with the expected sum values.
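These bistatic sums can be verified directly from the values quoted above:

```matlab
% Bistatic range: arithmetic mean of the Tx-target and target-Rx distances.
r1 = (15e3 + 5e3)/2;     % first target: 10 km
r2 = (35e3 + 15e3)/2;    % second target: 25 km
% Bistatic Doppler: sum of the speeds relative to the Tx and the Rx.
v1 = -106.4 + 99.7;      % first target: -6.7 m/s
v2 = 161.3 + 158.6;      % second target: 319.9 m/s
```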
Summary
This example shows an end-to-end bistatic radar system simulation with two targets. It explains how
to analyze the target return by plotting a range-Doppler map.
This model simulates a monostatic radar that searches for targets with an unambiguous range of 5
km. If the radar detects a target within 2 km, then it will switch to a higher PRF to only look for
targets with 2 km range and enhance its capability to detect high speed targets.
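The two operating modes imply two PRF values through the unambiguous-range relation R_ua = c/(2*PRF). The 5 km and 2 km ranges come from the text; the resulting PRFs are therefore only approximate values, not parameters read from the model.

```matlab
% PRFs implied by the 5 km search mode and the 2 km high-PRF mode.
c = physconst('LightSpeed');
prfSearch = c/(2*5e3);   % ~30 kHz for a 5 km unambiguous range
prfHigh   = c/(2*2e3);   % ~75 kHz for a 2 km unambiguous range
```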
The model consists of two main subsystems, a radar system and its corresponding controller. From the
top level, the radar system resides in a Simulink function block. Note that the underlying function is
specified in the figure as [dt,r] = f(idx). This means that the radar takes one input, idx, which
specifies the index of selected PRF of the transmitted signal and returns two outputs: dt, the time the
next pulse should be transmitted and r, the detected target range of the radar system. The radar
controller, shown in the following figure, uses the detection and the time to schedule when and what
to transmit next.
Waveform Scheduling Based on Target Detection
Radar System
The radar system resides in a Simulink function block and is shown in the following figure.
The system is very similar to the one used in the “End-to-End Monostatic Radar” on page 17-498
example, with the following notable differences:
1 The waveform block is no longer a source block. Instead, it takes an input, idx, to select which
PRF to use. The available PRF values are specified in the PRF parameter of the waveform dialog.
2 The output of the waveform is also used to compute the time, dt, that the next pulse should be
transmitted. Note that in this case, the time interval is proportional to the length of the
transmitted signal.
3 At the end of the signal processing chain, the target range is estimated and returned in r. The
controller will use this information to decide which PRF to choose for the next transmission.
4 Once the model is compiled, notice that the signal passing through the system can vary in length
because of a possible change of the waveform PRF. In addition, because the sample rate cannot
be derived inside a Simulink function subsystem, the sample rate is specified in the block
diagrams, such as the Tx and Rx paths, the receiver preamp, and other blocks.
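The controller's decision rule can be sketched as follows. The 2 km threshold comes from the text; the index values are an assumption about how the PRF vector in the waveform dialog is ordered.

```matlab
% Sketch of the PRF-selection logic in the radar controller.
r = 1800;             % example detected range (m), assumed value
if any(r < 2e3)       % a detection inside 2 km ...
    idx = 2;          % ... switches to the higher PRF (assumed index)
else
    idx = 1;          % default PRF for the 5 km search mode
end
```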
Several dialog parameters of the model are calculated by the helper function
helperslexPRFSelectionParam. To open the function from the model, click the Modify Simulation
Parameters block. This function is executed once when the model is loaded. It exports to the
workspace a structure whose fields are referenced by the dialogs. To modify any parameters, either
change the values in the structure at the command prompt or edit the helper function and rerun it to
update the parameter structure.
The figure below shows the detected ranges of the targets. Target ranges are computed from the
round-trip time delay of the reflected signals from the targets. At the simulation start, the radar
detects two targets, one slightly over 2 km away and the other approximately 3.5 km away.
After some time, the first target moves into the 2 km zone and triggers a change of PRF. Then the
received signal only covers the range up to 2 km. The display is zero padded to ensure that the plot
limits do not change. Notice that the target at 3.5 km gets folded to the 1.5 km range due to range
ambiguity.
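The folded range of the 3.5 km target follows directly from the 2 km unambiguous range:

```matlab
% Apparent range of a target beyond the unambiguous range.
Rtrue = 3.5e3;               % true target range (m)
Rua = 2e3;                   % unambiguous range at the higher PRF (m)
Rapparent = mod(Rtrue,Rua);  % folded range seen in the plot (1.5 km)
```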
Summary
This example shows how to build a radar system in Simulink® that dynamically changes its PRF
based on the target detection range. A staggered PRF system can be modeled similarly.
Underwater Target Detection with an Active Sonar System
Underwater Environment
Multiple propagation paths are present between the sound source and target in a shallow water
environment. In this example, five paths are assumed in a channel with a depth of 100 meters and a
constant sound speed of 1520 m/s. Use a bottom loss of 0.5 dB in order to highlight the effects of the
multiple paths.
Define the properties of the underwater environment, including the channel depth, the number of
propagation paths, the propagation speed, and the bottom loss.
numPaths = 5;
propSpeed = 1520;
channelDepth = 100;
isopath{1} = phased.IsoSpeedUnderwaterPaths(...
'ChannelDepth',channelDepth,...
'NumPathsSource','Property',...
'NumPaths',numPaths,...
'PropagationSpeed',propSpeed,...
'BottomLoss',0.5,...
'TwoWayPropagation',true);
isopath{2} = phased.IsoSpeedUnderwaterPaths(...
'ChannelDepth',channelDepth,...
'NumPathsSource','Property',...
'NumPaths',numPaths,...
'PropagationSpeed',propSpeed,...
'BottomLoss',0.5,...
'TwoWayPropagation',true);
Next, create a multipath channel for each target. The multipath channel propagates the waveform
along the multiple paths. This two-step process is analogous to designing a filter and using the
resulting coefficients to filter a signal.
fc = 20e3; % Operating frequency (Hz)
channel{1} = phased.MultipathChannel(...
'OperatingFrequency',fc);
channel{2} = phased.MultipathChannel(...
'OperatingFrequency',fc);
Sonar Targets
The scenario has two targets. The first target is more distant but has a larger target strength, and the
second is closer but has a smaller target strength. Both targets are isotropic and stationary with
respect to the sonar system.
tgt{1} = phased.BackscatterSonarTarget(...
'TSPattern',-5*ones(181,361));
tgt{2} = phased.BackscatterSonarTarget(...
'TSPattern',-15*ones(181,361));
tgtplat{1} = phased.Platform(...
'InitialPosition',[500; 1000; -70],'Velocity',[0; 0; 0]);
tgtplat{2} = phased.Platform(...
'InitialPosition',[500; 0; -40],'Velocity',[0; 0; 0]);
The target positions, along with the channel properties, determine the underwater paths along which
the signals propagate. Plot the paths between the sonar system and each target. Note that the z-
coordinate determines depth, with zero corresponding to the top surface of the channel, and the
distance in the x-y plane is plotted as the range between the source and target.
Transmitted Waveform
Next, specify a rectangular waveform to transmit to the targets. The maximum target range and
desired range resolution define the properties of the waveform.
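A hedged sketch of this design step follows; the sound speed matches this example, while the maximum range and range resolution are assumed values chosen only for illustration.

```matlab
% Sketch: waveform parameters from max range and range resolution.
propSpeed = 1520;                    % sound speed (m/s), from this example
maxRange = 5000;                     % assumed maximum target range (m)
rangeRes = 10;                       % assumed range resolution (m)
prf = propSpeed/(2*maxRange);        % pulse repetition frequency
pulseWidth = 2*rangeRes/propSpeed;   % pulse width for the resolution
fs = 2/pulseWidth;                   % sample rate covering the pulse bandwidth
wav = phased.RectangularWaveform('PRF',prf,...
    'PulseWidth',pulseWidth,'SampleRate',fs);
```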
Update the sample rate of the multipath channel with the transmitted waveform sample rate.
channel{1}.SampleRate = fs;
channel{2}.SampleRate = fs;
Transmitter
The transmitter consists of a hemispherical array of back-baffled isotropic projector elements. The
transmitter is located 60 meters below the surface. Create the array and view the array geometry.
plat = phased.Platform(...
'InitialPosition',[0; 0; -60],...
'Velocity',[0; 0; 0]);
proj = phased.IsotropicProjector(...
'FrequencyRange',[0 30e3],'VoltageResponse',80,'BackBaffled',true);
[ElementPosition,ElementNormal] = helperSphericalProjector(8,fc,propSpeed);
projArray = phased.ConformalArray(...
'ElementPosition',ElementPosition,...
'ElementNormal',ElementNormal,'Element',proj);
viewArray(projArray,'ShowNormals',true);
View the pattern of the array at zero degrees in elevation. The directivity shows peaks in azimuth
corresponding to the azimuth position of the array elements.
pattern(projArray,fc,-180:180,0,'CoordinateSystem','polar',...
'PropagationSpeed',propSpeed);
Receiver
The receiver consists of a hydrophone and an amplifier. The hydrophone is a single isotropic element
and has a frequency range from 0 to 30 kHz, which contains the operating frequency of the multipath
channel. Specify the hydrophone voltage sensitivity as -140 dB.
hydro = phased.IsotropicHydrophone(...
'FrequencyRange',[0 30e3],'VoltageSensitivity',-140);
Thermal noise is present in the received signal. Assume that the receiver has 20 dB of gain and a
noise figure of 10 dB.
rx = phased.ReceiverPreamp(...
'Gain',20,...
'NoiseFigure',10,...
'SampleRate',fs,...
'SeedSource','Property',...
'Seed',2007);
In an active sonar system, an acoustic wave is propagated to the target, scattered by the target, and
received by a hydrophone. The radiator generates the spatial dependence of the propagated wave
due to the array geometry. Likewise, the collector combines the backscattered signals received by the
hydrophone element from the far-field target.
radiator = phased.Radiator('Sensor',projArray,'OperatingFrequency',...
fc,'PropagationSpeed',propSpeed);
collector = phased.Collector('Sensor',hydro,'OperatingFrequency',fc,...
'PropagationSpeed',propSpeed);
Next, transmit the rectangular waveform over ten repetition intervals and simulate the signal
received at the hydrophone for each transmission.
% Loop over the two targets; for each, compute the underwater paths
% and propagate the transmissions through the multipath channel.
for i = 1:2
% Underwater paths between the source and target platforms
% (a 1-second path update interval is assumed here)
[paths,dop,aloss,tgtAng,srcAng] = isopath{i}(...
plat.InitialPosition,tgtplat{i}.InitialPosition,...
[0;0;0],[0;0;0],1);
for j = 1:xmits
% Compute the radiated signals. Steer the array towards the target.
tsig = radiator(x,srcAng);
% Propagate the signal through the two-way multipath channel
tsig = channel{i}(tsig,paths,dop,aloss);
% Target
tsig = tgt{i}(tsig,tgtAng);
% Collector
rsig = collector(tsig,srcAng);
rx_pulses(:,j) = rx_pulses(:,j) + ...
rx(rsig);
end
end
Plot the magnitude of non-coherent integration of the received signals to locate the returns of the two
targets.
figure
rx_pulses = pulsint(rx_pulses,'noncoherent');
plot(t,abs(rx_pulses))
grid on
xlabel('Time (s)')
ylabel('Amplitude (V)')
title('Integrated Received Pulses')
The targets, which are separated by a relatively large distance, appear as distinct returns. Zoom in on
the first return.
xlim([0.55 0.85])
The target return is the superposition of pulses from multiple propagation paths, resulting in multiple
peaks for each target. The resulting peaks could be misinterpreted as additional targets.
In the previous section, the sound speed was constant as a function of channel depth. In contrast, a
ray tracing program like Bellhop can generate acoustic paths for spatially-varying sound speed
profiles. You can use the path information generated by Bellhop to propagate signals via the
multipath channel. Simulate transmission between an isotropic projector and isotropic hydrophone in
a target-free environment with the 'Munk' sound speed profile. The path information is contained in a
Bellhop arrival file (MunkB_eigenray_Arr.arr).
Bellhop Configuration
In this example, the channel is 5000 meters in depth. The source is located at a depth of 1000 meters
and the receiver is located at a depth of 800 meters. They are separated by 100 kilometers in range.
Import and plot the paths computed by Bellhop.
[paths,dop,aloss,rcvAng,srcAng] = helperBellhopArrivals(fc,6,false);
helperPlotPaths('MunkB_eigenray')
For this scenario, there are two direct paths with no interface reflections, and eight paths with
reflections at both the top and bottom surfaces. The sound speed in the channel is lowest at
approximately 1250 meters in depth, and increases towards the top and bottom of the channel, to a
maximum of 1550 meters/second.
Create a new channel and receiver to use with data from Bellhop.
release(collector)
channelBellhop = phased.MultipathChannel(...
'SampleRate',fs,...
'OperatingFrequency',fc);
rx = phased.ReceiverPreamp(...
'Gain',10,...
'NoiseFigure',10,...
'SampleRate',fs,...
'SeedSource','Property',...
'Seed',2007);
Bellhop Simulation
x = repmat(wav(),1,size(paths,2));
xmits = 10;
rx_pulses = zeros(size(x,1),xmits);
t = (0:size(x,1)-1)/fs;
for j = 1:xmits
% Projector
tsig = x.*proj(fc,srcAng)';
% Propagate the pulses along the paths imported from Bellhop
tsig = channelBellhop(tsig,paths,dop,aloss);
% Collector
rsig = collector(tsig,rcvAng);
rx_pulses(:,j) = rx_pulses(:,j) + ...
rx(rsig);
end
figure
rx_pulses = pulsint(rx_pulses,'noncoherent');
plot(t,abs(rx_pulses))
grid on
xlim([66 70])
xlabel('Time (s)')
ylabel('Amplitude (V)')
title('Integrated Received Pulses')
The transmitted pulses appear as peaks in the response. Note that the two direct paths, which have
no interface reflections, arrive first and have the highest amplitude. In comparing the direct path
received pulses, the second pulse to arrive has the higher amplitude of the two, indicating a shorter
propagation distance. The longer delay time for the shorter path can be explained by the fact that it
propagates through the slowest part of the channel. The remaining pulses have reduced amplitude
compared to the direct paths due to multiple reflections at the channel bottom, each contributing to
the loss.
Summary
In this example, acoustic pulses were transmitted and received in shallow-water and deep-water
environments. Using a rectangular waveform, an active sonar system detected two well-separated
targets in shallow water. The presence of multiple paths was apparent in the received signal. Next,
pulses were transmitted between a projector and hydrophone in deep water with the 'Munk' sound
speed profile using paths generated by Bellhop. The impact of spatially-varying sound speed was
noted.
Reference
Urick, Robert. Principles of Underwater Sound. Los Altos, California: Peninsula Publishing, 1983.
In this example, the acoustic beacon is located at the bottom of a shallow water channel, which is 200
meters deep. A passive array is towed beneath the surface to locate the beacon.
First, create a multipath channel to transmit the signal between the beacon and passive array.
Consider ten propagation paths including the direct path and reflections from the top and bottom
surfaces. The paths generated by isopaths will be used by the multipath channel, channel, to
simulate the signal propagation.
propSpeed = 1520;
channelDepth = 200;
OperatingFrequency = 37.5e3;
isopaths = phased.IsoSpeedUnderwaterPaths('ChannelDepth',channelDepth,...
'NumPathsSource','Property','NumPaths',10,'PropagationSpeed',propSpeed);
channel = phased.MultipathChannel('OperatingFrequency',OperatingFrequency);
Define the waveform emitted by the acoustic beacon. The waveform is a rectangular pulse having a 1
second repetition interval and 10 millisecond width.
prf = 1;
pulseWidth = 10e-3;
pulseBandwidth = 1/pulseWidth;
fs = 2*pulseBandwidth;
wav = phased.RectangularWaveform('PRF',prf,'PulseWidth',pulseWidth,...
'SampleRate',fs);
channel.SampleRate = fs;
Locating an Acoustic Beacon with a Passive Sonar System
Acoustic Beacon
Next, define the acoustic beacon, which is located 1 meter above the bottom of the channel. The
acoustic beacon is modeled as an isotropic projector. The acoustic beacon waveform will be radiated
to the far field.
projector = phased.IsotropicProjector('VoltageResponse',120);
projRadiator = phased.Radiator('Sensor',projector,...
'PropagationSpeed',propSpeed,'OperatingFrequency',OperatingFrequency);
A passive towed array detects and localizes the source of the pings. The array is modeled as a
five-element linear array with half-wavelength spacing. The passive array moves at 1 m/s in the
y-direction, with the array axis oriented parallel to the direction of travel.
hydrophone = phased.IsotropicHydrophone('VoltageSensitivity',-150);
array = phased.ULA('Element',hydrophone,...
'NumElements',5,'ElementSpacing',propSpeed/OperatingFrequency/2,...
'ArrayAxis','y');
arrayCollector = phased.Collector('Sensor',array,...
'PropagationSpeed',propSpeed,'OperatingFrequency',OperatingFrequency);
Define the receiver amplifier for each hydrophone element. Choose a gain of 20 dB and noise figure
of 10 dB.
rx = phased.ReceiverPreamp(...
'Gain',20,...
'NoiseFigure',10,...
'SampleRate',fs,...
'SeedSource','Property',...
'Seed',2007);
Activate the acoustic beacon and transmit ten pings. After the propagation delay, the pings appear as
peaks in the received signals of the array.
x = wav();
numTransmits = 10;
rxsig = zeros(size(x,1),5,numTransmits);
for i = 1:numTransmits
    % Compute the multipath propagation paths between the beacon and the
    % moving array (the beacon and array positions and velocities,
    % beaconPos, arrayPos, beaconVel, and arrayVel, are set up earlier in
    % the full example)
    [paths,dop,aloss,rcvang,srcang] = isopaths(beaconPos,arrayPos,...
        beaconVel,arrayVel,1/prf);
    % Radiate the ping, propagate it through the channel, then collect
    % and amplify it at the array
    tsig = projRadiator(x,srcang);
    rsig = channel(tsig,paths,dop,aloss);
    rsig = arrayCollector(rsig,rcvang);
    rxsig(:,:,i) = rx(rsig);
end
Plot the last received pulse. Because of the multiple propagation paths, each ping is a superposition
of multiple pulses.
t = (0:length(x)-1)'/fs;
plot(t,rxsig(:,:,end))
xlabel('Time (s)');
ylabel('Signal Amplitude (V)')
Estimate the direction of arrival of the acoustic beacon with respect to the array. Create a MUSIC
estimator object, specifying a single source signal and the direction of arrival as an output. Use a
scan angle grid with 0.1 degree spacing.
musicspatialspect = phased.MUSICEstimator('SensorArray',array,...
'PropagationSpeed',propSpeed,'OperatingFrequency',...
OperatingFrequency,'ScanAngles',-90:0.1:90,'DOAOutputPort',true,...
'NumSignalsSource','Property','NumSignals',1);
Next, collect pings for 500 more repetition intervals. Estimate the direction of arrival for each
repetition interval, and compare the estimates to the true direction of arrival.
numTransmits = 500;
angPassive = zeros(numTransmits,1);
angAct = zeros(numTransmits,1);
for i = 1:numTransmits
    % Propagate and receive each ping using the same chain as before
    [paths,dop,aloss,rcvang,srcang] = isopaths(beaconPos,arrayPos,...
        beaconVel,arrayVel,1/prf);
    tsig = projRadiator(x,srcang);
    rsig = channel(tsig,paths,dop,aloss);
    rsig = arrayCollector(rsig,rcvang);
    rxsig = rx(rsig);
    % Estimate the direction of arrival and record the true angle of the
    % direct path for comparison
    [~,angPassive(i)] = musicspatialspect(rxsig);
    angAct(i) = rcvang(1,1);
end
Plot the estimated arrival angles and the true directions of arrival for each pulse repetition interval.
plot([angPassive angAct])
xlabel('Pulse Number')
ylabel('Arrival angle (degrees)')
legend('Estimated DOA','Actual DOA')
The estimated and actual directions of arrival agree to within one degree.
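The agreement can be quantified with a quick check on the vectors collected above:

```matlab
% Root-mean-square error between estimated and actual DOA, in degrees
doaErr = angPassive - angAct;
rmsErr = sqrt(mean(doaErr.^2))
```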
Summary
In this example, the transmission of acoustic pings between a beacon and passive array was
simulated in a shallow-water channel. Each ping was received along ten acoustic paths. The direction
of arrival of the beacon was estimated with respect to the passive array for each received ping and
compared to the true direction of arrival. The direction of arrival could be used to locate and recover
the beacon.
Reference
Urick, Robert. Principles of Underwater Sound. Los Altos, California: Peninsula Publishing, 1983.
Interference Mitigation Using Frequency Agility Techniques
Introduction
In this model, a phased array radar is designed to detect an approaching aircraft. The aircraft is
equipped with a jammer which can intercept the radar signal and transmit a spoofing signal back to
confuse the radar. In turn, the radar system is capable of transmitting waveforms at different
operating frequencies to mitigate the jamming effect. The model includes blocks for waveform
generation, signal propagation, and radar signal processing. It shows how the radar and jammer
interact and gain advantage over each other.
The radar is operating at 300 MHz with a sampling rate of 2 MHz. The radar is located at the origin
and is assumed to be stationary. The target is located about 10 km away and approaching at
approximately 100 meters per second.
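These numbers imply the round-trip delay and Doppler shift that the later scope observations refer to. A quick hedged calculation:

```matlab
c = physconst('LightSpeed');   % propagation speed
fc = 300e6;                    % radar operating frequency (Hz)
tgtRange = 10e3;               % initial target range (m)
v = 100;                       % closing speed (m/s)
tau = 2*tgtRange/c;            % two-way delay, roughly 67 microseconds
fd = 2*v*fc/c;                 % Doppler shift, roughly 200 Hz
```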
Waveform Generation
The Waveform Generation subsystem includes a linear FM (LFM) waveform generator. By varying the
input to the waveform generator, a frequency-hopped waveform at a shifted center frequency is
generated. Therefore, the radar system can switch its transmit waveform either on a fixed schedule
or when it detects jamming signals. This example assumes that the waveform can be generated at two
different frequencies, referred to as the center band and the hopped band. The center band is the
subband around the carrier frequency, and the hopped band is the subband located a quarter
bandwidth above the carrier.
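The relationship between the two bands can be written out explicitly. The sweep bandwidth value below is an assumption for illustration; only the quarter-bandwidth offset comes from the description:

```matlab
fc = 300e6;               % radar carrier frequency (Hz)
bw = 1e6;                 % waveform bandwidth (Hz), assumed value
fCenterBand = fc;         % center band: subband around the carrier
fHoppedBand = fc + bw/4;  % hopped band: a quarter bandwidth above the carrier
```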
Propagation Channels
The signal propagation is modeled for both the forward channel and the return channel. Once the
transmitted signal hits the target aircraft, the reflected signal travels back to the radar system via the
return channel. In addition, the jammer analyzes the incoming signal and sends back a jamming signal
to confuse the radar system. That jamming signal is also propagated through the return channel.
Because different signals may occupy different frequency bands, wideband propagation channels are
used.
Signal Processing
The radar receives both the target return and the jamming signal. Upon receiving the signal, a series
of filters is used to extract the signal from different bands. In this example, there are two filters,
one to extract the signal from the center band and one from the hopped band. The signal in each band
then passes through the corresponding matched filter to improve the SNR before detection.
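The per-band matched filtering can be sketched with Phased Array System Toolbox objects. The variable names here (lfmWav, subbandSignal) are hypothetical stand-ins for the model's signals:

```matlab
% Build a matched filter from the known LFM replica and apply it to the
% signal extracted from one band
mfcoeff = getMatchedFilter(lfmWav);            % lfmWav: phased.LinearFMWaveform
mf = phased.MatchedFilter('Coefficients',mfcoeff);
y = mf(subbandSignal);
```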
Several dialog parameters of the model are calculated by the helper function
helperslexFrequencyAgilityParam. To open the function from the model, click the Modify
Simulation Parameters block. This function is executed once when the model is loaded. It exports to
the workspace a structure whose fields are referenced by the dialogs. To modify any parameters,
either change the values in the structure at the command prompt or edit the helper function and
rerun it to update the parameter structure.
First run the model for the case when there is no jamming signal. The scope shows that there is one
strong echo in the center band with a delay of approximately 67 microseconds, which corresponds to
a target range of 10 km. Therefore, the target is correctly detected. Meanwhile, there is no return
detected in the hopped band.
The spectrum analyzer shows that the received signal occupies the center band.
Now enable the jammer by clicking the Jammer Switch block. In this situation, the target intercepts
the signal, amplifies it, and then sends it back with a delay corresponding to a different range. As a
result, the scope now shows two returns in the center band. The true target return is still at the old
position, but the ghost return generated by the jammer appears stronger and closer to the radar, so
the radar is likely to be confused and assign precious resources to track this fake target.
Note that both the jammer signal and the target return are in the center band, as shown in the
spectrum analyzer.
If the radar has a pre-scheduled frequency hopping plan or is smart enough to recognize that it might
have been confused by a jamming signal, it can switch to a different frequency band to operate. Such
a scenario can be simulated by clicking the Hop Switch block so that the radar signal is transmitted
in the hopped band.
Because the radar now operates in the hopped band, the target echo is also in the hopped band. From
the scope, the target echo appears at the appropriate delay in the hopped band. Meanwhile, the
jammer has not yet identified the radar's new operating band, so the jamming signal still appears in
the center band and can no longer fool the radar.
The spectrum analyzer shows that the received signal now occupies two bands.
Summary
This example models a radar system detecting a target equipped with a jammer. It shows how
frequency agility techniques can be used to mitigate the jamming effect.
Signal Parameter Estimation in a Radar Warning Receiver
Introduction
An RWR is a passive electronic warfare support system [1] that provides timely information to the
pilot about the RF signal environment. The RWR intercepts an impinging signal, and uses signal
processing techniques to extract information about the intercepted waveform characteristics, as well
as the location of the emitter. This information can be used to invoke countermeasures, such as
jamming, to avoid detection by the radar. The interaction between the radar and the aircraft is
depicted in the following diagram.
In this example, we simulate a scenario in which a ground surveillance radar and an airplane
equipped with an RWR are present. The RWR detects the radar signal and extracts waveform
parameters, such as the pulse repetition interval, pulse width, bandwidth, and carrier frequency,
from the intercepted signal.
The RWR chain consists of a phased array antenna, a channelized receiver, an envelope detector, and
a signal processor. The frequency band of the intercepted signal is estimated by the channelized
receiver and the envelope detector, after which the detected sub-banded signal is fed to the
signal processor. Beam steering is applied towards the direction of arrival of this sub-banded signal,
and the waveform parameters are estimated using the pseudo Wigner-Ville transform in conjunction
with the Hough transform. Using the angle of arrival and a single-baseline approach, the location of
the emitter is also estimated.
Scenario Setup
Assume the ground-based surveillance radar operates in the L band, and transmits chirp signals of 3
μs duration at a pulse repetition interval of 15 μs. The bandwidth of the transmitted chirp is 30 MHz,
and the carrier frequency is 1.8 GHz. The surveillance radar is located at the origin and is stationary,
and the aircraft is flying at a constant speed of 200 m/s (~0.6 Mach).
The transmit antenna of the radar is an 8-by-8 uniform rectangular phased array, having a spacing of
λ/2 between its elements. The signal propagates from the radar to the aircraft and is intercepted and
analyzed by the RWR. For simplicity, the waveform is chosen as a linear FM waveform with a peak
power of 100 W.
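The code below references waveform parameters that are defined earlier in the full example. A hedged setup consistent with the description above is (the sampling rate fs is an assumed choice; the full example may use a different value):

```matlab
T = 3e-6;            % chirp duration (3 microseconds)
PRF = 1/(15e-6);     % pulse repetition frequency from the 15 microsecond PRI
BW = 30e6;           % chirp sweep bandwidth (30 MHz)
fc = 1.8e9;          % radar carrier frequency (1.8 GHz)
fs = 4e9;            % full-band sampling rate (assumed; subbands are later
                     % down-sampled to 100 MHz)
```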
% Configure the LFM waveform using the waveform parameters defined above
wavGen= phased.LinearFMWaveform('SampleRate',fs,'PulseWidth',T,'SweepBandwidth',BW,'PRF',PRF);
The ground surveillance radar is unaware of the direction of the target; therefore, it needs to scan
the entire space to look for the aircraft. In general, the radar transmits a series of pulses in each
direction before moving to the next direction. Therefore, without loss of generality, this example
assumes that the radar is transmitting toward zero degrees azimuth and elevation. The following
figure shows the time-frequency representation of a 4-pulse train arriving at the aircraft. Note that
although the pulse train arrives at a specific delay, the time delay of the arrival of the first pulse is
irrelevant for the RWR because it has no knowledge of the transmit time and has to constantly
monitor its environment.
The RWR is equipped with a 10-by-10 uniform rectangular array with a spacing of λ/2 between its
elements. It operates over the entire L band, with a center frequency of 2 GHz. The RWR listens to
the environment and continuously feeds the collected data into the processing chain.
The envelope detector in the RWR is responsible for detecting the presence of any signal. As the RWR
is continuously receiving data, the receiver chain buffers and truncates the received data into 50 μs
segments.
Since the RWR has no knowledge of the exact center frequency used in the transmit waveform, it
first uses a bank of filters, each tuned to a slightly different RF center frequency, to divide the
received data into subbands. Then the envelope detector is applied in each band to check whether a
signal is present. In this example, the signal is divided into sub-bands of 100 MHz bandwidth. An
added benefit of this operation is that instead of sampling the entire bandwidth covered by the
RWR, the signal in each subband can be down-sampled to a sampling frequency of 100 MHz.
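The channelizer used below is created earlier in the full example. A hedged construction using the DSP System Toolbox dsp.Channelizer object could look like this (the subband width follows from the 100 MHz figure above; fs is the full-band sampling rate from the setup):

```matlab
stepFreq = 100e6;                 % width of each subband (Hz)
numBands = fs/stepFreq;           % number of uniform subbands
channelizer = dsp.Channelizer('NumFrequencyBands',numBands);
```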
The plot below shows the first four bands created by the filter bank.
% Visualize the first four filters created in the filter bank of the
% channelizer
freqz(channelizer, 1:4)
title('Zoomed Channelizer response for first four filters')
xlim([0 0.2])
The received data, subData, has 3 dimensions. The first dimension represents fast-time, the
second dimension represents the sub-bands, and the third dimension corresponds to the elements of
the receiving array. For the RWR's 10-by-10 antenna configuration used in this example, there are
100 receiving elements. Because the transmit power is low and the receiver noise is high, the
radar signal is indistinguishable from the noise. Therefore, the received power is summed across
these elements to enhance the signal-to-noise ratio (SNR) and get a better estimate of the power in
each subband. The band that has the maximum power is the one used by the radar.
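A hedged sketch of the band-selection step described above, assuming subData is arranged as fast-time-by-band-by-element:

```matlab
% Noncoherently combine the power across the receiving elements, then pick
% the subband with the maximum total power
bandPower = squeeze(sum(mean(abs(subData).^2,1),3));
[~,detInd] = max(bandPower);
```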
Although the power in the selected band is higher compared to the neighboring band, the SNR within
the band is still low, as shown in the following figure.
subData = (subData(:,detInd,:));
subData = squeeze(subData); %adjust the data to 2-D matrix
% Find the original starting frequency of the sub-band having the detected
% signal
detfBand = fs*(detInd-1)/(fs/stepFreq);
The subData is now a two-dimensional matrix. The first dimension represents fast-time samples and
the second dimension is the data across 100 receiving antenna channels. The detected sub-band
starting frequency is calculated to find the carrier frequency of the detected signal.
The next step for the RWR is to find the direction from which the radio waves are arriving. This angle
of arrival information is used to steer the receive antenna beam in the direction of the emitter, and
to locate the emitter on the ground using a single-baseline approach. The RWR estimates the
direction of arrival using a two-dimensional MUSIC estimator. Beam steering is done using a
phase-shift beamformer to achieve the maximum SNR of the signal and thus help the waveform
parameter extraction.
Assume that the ground plane is flat and parallel to the xy-plane of the coordinate system. As such,
the RWR can use the aircraft altitude from its altimeter readings along with the direction of arrival
to triangulate the location of the emitter.
% Configure the MUSIC Estimator to find the direction of arrival of the
% signal
doaEst = phased.MUSICEstimator2D('OperatingFrequency',fc,'PropagationSpeed',c,...
'SensorArray',antennaRx,'DOAOutputPort',true,'AzimuthScanAngles',-50:.5:50,...
'ElevationScanAngles',-50:.5:50, 'NumSignalsSource', 'Property','NumSignals', 1);
[mSpec,doa] = doaEst(subData);
plotSpectrum(doaEst,'Title','2-D MUSIC Spatial Spectrum Top view');
view(0,90); axis([-30 0 -30 0]);
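The single-baseline localization mentioned above can be sketched as follows. This is a hypothetical illustration, not the example's actual code: h stands in for the aircraft altitude, and doa is the [azimuth; elevation] estimate returned by the MUSIC estimator.

```matlab
h = 3000;                            % assumed aircraft altitude (m)
% The emitter lies below the aircraft, so the elevation angle is negative.
% Project the arrival direction onto the flat ground plane.
groundRange = h/tand(-doa(2));       % horizontal distance to the emitter
emitterPos = [groundRange*cosd(doa(1)); ...
              groundRange*sind(doa(1)); -h];  % position relative to the aircraft
```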
% Configure the beamformer object to steer the beam before combining the
% channels
beamformer = phased.PhaseShiftBeamformer('SensorArray',antennaRx,...
'OperatingFrequency',fc,'DirectionSource','Input port');
After applying the beam steering, the antenna has the maximum gain in the direction of the azimuth
and elevation angles of arrival of the signal. This further improves the SNR of the intercepted signal.
Next, the signal parameters are extracted in the signal processor using a time-frequency analysis
technique, the pseudo Wigner-Ville transform, coupled with the Hough transform, as described in
[2].
First, derive the time-frequency representation of the intercepted signal using the pseudo Wigner-Ville transform.
Even though the resulting time-frequency representation is noisy, the human eye can separate the
signal from the background without much difficulty. Each pulse appears as a line in the
time-frequency plane. Thus, using the beginning and end of the time-frequency lines, we can derive
the pulse width and the bandwidth of the pulse. Similarly, the time between lines from different
pulses gives us the pulse repetition interval.
To do this automatically without relying on human inspection, we use the Hough transform to
identify those lines from the image. The Hough transform performs well in the presence of noise, and
is an enhancement to the time-frequency signal analysis method.
To use the Hough transform, it is necessary to convert the time-frequency image into a binary image.
The next code snippet performs some data smoothing on the image and then uses imbinarize to do
the conversion. The conversion threshold can be modified based on the signal-noise characteristics of
the receiver and the operating environment.
% Normalize the pseudo Wigner-Ville image
twvNorm = abs(tpwv)./max(abs(tpwv(:)));
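A hedged continuation of the snippet above: smooth the normalized image and convert it to a binary image. The filter size is an illustrative choice, and medfilt2 and imbinarize require Image Processing Toolbox.

```matlab
% Smooth the normalized pseudo Wigner-Ville image, then binarize it
twvSmoothed = medfilt2(twvNorm,[7 7]);
BW = imbinarize(twvSmoothed);
```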
Using the Hough transform, the binary pseudo Wigner-Ville image is first transformed into peaks in
the parameter space. This way, instead of detecting a line in an image, we just need to detect a peak.
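The peak-detection step can be sketched with the Image Processing Toolbox hough and houghpeaks functions (the number of peaks requested is an illustrative choice matching the 4-pulse train):

```matlab
% Map the binary image into the Hough parameter space and locate the
% dominant peaks, nominally one per pulse line
[H,T,R] = hough(BW);
P = houghpeaks(H,4);
```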
Using these peak positions, houghlines can reconstruct the lines in the original binary image. Then,
as discussed earlier, the beginning and end of these lines help us estimate the waveform parameters.
lines = houghlines(BW,T,R,P,'FillGap',3e-6*fs,'MinLength',1e-6*fs);
coord = [lines(:).point1; lines(:).point2];
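From the endpoints of a reconstructed line, the pulse parameters follow directly. A hedged sketch for one line, assuming the image columns are time samples at rate fs and the rows are frequency bins of width freqBinWidth (both set by how the time-frequency image was computed):

```matlab
p1 = lines(1).point1;                 % [x y] endpoints of one pulse line
p2 = lines(1).point2;
pulseWidthEst = abs(p2(1)-p1(1))/fs;             % line extent in time
bwEst = abs(p2(2)-p1(2))*freqBinWidth;           % line extent in frequency
```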
The extracted waveform characteristics are listed below. They match the ground truth very well.
These estimates can then be used to catalog the radar and prepare countermeasures if necessary.
Summary
This example shows how an RWR can estimate the parameters of an intercepted radar signal using
signal processing and image processing techniques.
References
[1] Electronic Warfare and Radar Systems Engineering Handbook 2013, Naval Air Warfare Center
Weapons Division, Point Mugu, California.
[2] Stevens, Daniel L., and Stephanie A. Schuckers. "Detection and Parameter Extraction of Low
Probability of Intercept Radar Signals Using the Hough Transform." Global Journal of Research in
Engineering, Vol. 15, Issue 6, Jan. 2016.
Massive MIMO Hybrid Beamforming
The example employs a scattering-based spatial channel model which accounts for the transmit/
receive spatial locations and antenna patterns. A simpler static-flat MIMO channel is also offered for
link validation purposes.
The example requires Communications Toolbox™ and Phased Array System Toolbox™.
Introduction
The ever-growing demand for high data rate and more user capacity increases the need to use the
available spectrum more efficiently. Multi-user MIMO (MU-MIMO) improves the spectrum efficiency
by allowing a base station (BS) transmitter to communicate simultaneously with multiple mobile
stations (MS) receivers using the same time-frequency resources. Massive MIMO allows the number
of BS antenna elements to be on the order of tens or hundreds, thereby also increasing the number of
data streams in a cell to a large value.
The next generation, 5G, wireless systems use millimeter wave (mmWave) bands to take advantage of
their wider bandwidth. The 5G systems also deploy large scale antenna arrays to mitigate severe
propagation loss in the mmWave band.
Compared to current wireless systems, the wavelength in the mmWave band is much smaller.
Although this allows an array to contain more elements within the same physical dimension, it
becomes much more expensive to provide one transmit-receive (TR) module, or an RF chain, for each
antenna element. Hybrid transceivers are a practical solution as they use a combination of analog
beamformers in the RF and digital beamformers in the baseband domains, with fewer RF chains than
the number of transmit elements [1].
This example uses a multi-user MIMO-OFDM system to highlight the partitioning of the required
precoding into its digital baseband and RF analog components at the transmitter end. Building on the
system highlighted in the “MIMO-OFDM Precoding with Phased Arrays” on page 17-430 example,
this example shows the formulation of the transmit-end precoding matrices and their application to a
MIMO-OFDM system.
s = rng(67); % Set RNG state for repeatability
System Parameters
Define system parameters for the example. Modify these parameters to explore their impact on the
system.
% Multi-user system with single/multiple streams per user
prm.numUsers = 4; % Number of users
prm.numSTSVec = [3 2 1 2]; % Number of independent data streams per user
prm.numSTS = sum(prm.numSTSVec); % Must be a power of 2
prm.numTx = prm.numSTS*8; % Number of BS transmit antennas (power of 2)
prm.numRx = prm.numSTSVec*4; % Number of receive antennas, per user (any >= numSTSVec)
prm.FFTLength = 256;
prm.CyclicPrefixLength = 64;
prm.numCarriers = 234;
prm.NullCarrierIndices = [1:7 129 256-5:256]'; % Guards and DC
prm.PilotCarrierIndices = [26 54 90 118 140 168 204 232]';
nonDataIdx = [prm.NullCarrierIndices; prm.PilotCarrierIndices];
prm.CarriersLocations = setdiff((1:prm.FFTLength)', sort(nonDataIdx));
numSTS = prm.numSTS;
numTx = prm.numTx;
numRx = prm.numRx;
numSTSVec = prm.numSTSVec;
codeRate = 1/3; % same code rate per user
numTails = 6; % number of termination tail bits
prm.numFrmBits = numSTSVec.*(prm.numDataSymbols*prm.numCarriers* ...
prm.bitsPerSubCarrier*codeRate)-numTails;
prm.modMode = 2^prm.bitsPerSubCarrier; % Modulation order
% Account for channel filter delay
numPadSym = 3; % number of symbols to zeropad
prm.numPadZeros = numPadSym*(prm.FFTLength+prm.CyclicPrefixLength);
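A few prm fields referenced in this section (prm.fc, prm.bitsPerSubCarrier, prm.numDataSymbols) are set elsewhere in the full example. Illustrative, assumed values consistent with a mmWave design might be:

```matlab
prm.fc = 28e9;               % carrier frequency (assumed mmWave value)
prm.bitsPerSubCarrier = 4;   % 16-QAM (assumed)
prm.numDataSymbols = 10;     % OFDM data symbols per frame (assumed)
```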
Define transmit and receive arrays and positional parameters for the system.
prm.cLight = physconst('LightSpeed');
prm.lambda = prm.cLight/prm.fc;
else
% Uniform Linear array
txarray = phased.ULA(numTx, 'ElementSpacing',0.5*prm.lambda, ...
'Element',phased.IsotropicAntennaElement('BackBaffled',false));
end
prm.posTxElem = getElementPosition(txarray)/prm.lambda;
spLoss = zeros(prm.numUsers,1);
prm.posRx = zeros(3,prm.numUsers);
for uIdx = 1:prm.numUsers
% Receive arrays
if isRxURA(uIdx)
% Uniform Rectangular array
rxarray = phased.PartitionedArray(...
'Array',phased.URA([expFactorRx(uIdx) numSTSVec(uIdx)], ...
0.5*prm.lambda),'SubarraySelection',ones(numSTSVec(uIdx), ...
numRx(uIdx)),'SubarraySteering','Custom');
prm.posRxElem = getElementPosition(rxarray)/prm.lambda;
else
if numRx(uIdx)>1
% Uniform Linear array
rxarray = phased.ULA(numRx(uIdx), ...
'ElementSpacing',0.5*prm.lambda, ...
'Element',phased.IsotropicAntennaElement);
prm.posRxElem = getElementPosition(rxarray)/prm.lambda;
else
rxarray = phased.IsotropicAntennaElement;
prm.posRxElem = [0; 0; 0]; % LCS
end
end
% Mobile positions
[xRx,yRx,zRx] = sph2cart(deg2rad(prm.mobileAngles(1,uIdx)), ...
deg2rad(prm.mobileAngles(2,uIdx)), ...
prm.mobileRanges(uIdx));
prm.posRx(:,uIdx) = [xRx;yRx;zRx];
[toRxRange,toRxAng] = rangeangle(prm.posTx,prm.posRx(:,uIdx));
spLoss(uIdx) = fspl(toRxRange,prm.lambda);
end
For a spatially multiplexed system, the availability of channel information at the transmitter allows
precoding to be applied to maximize the signal energy in the direction and channel of interest. Under
the assumption of a slowly varying channel, this is facilitated by sounding the channel first. The BS
sounds the channel by using a reference transmission that the MS receiver uses to estimate the
channel. The MS transmits the channel estimate information back to the BS for calculation of the
precoding needed for the subsequent data transmission.
The following schematic shows the processing modeled for the channel sounding.
For the chosen MIMO system, a preamble signal is sent over all transmitting antenna elements and
processed at the receiver, accounting for the channel. The receiver antenna elements perform
pre-amplification, OFDM demodulation, and frequency-domain channel estimation for all links.
% Generate the preamble signal
prm.numSTS = numTx; % set to numTx to sound out all channels
preambleSig = helperGenPreamble(prm);
% OFDM demodulation
rxOFDM = ofdmdemod(rxPreSigAmp(chanDelay(uIdx)+1: ...
end-(prm.numPadZeros-chanDelay(uIdx)),:),prm.FFTLength, ...
prm.CyclicPrefixLength,prm.CyclicPrefixLength, ...
prm.NullCarrierIndices,prm.PilotCarrierIndices);
end
For a multi-user system, the channel estimate is fed back from each MS, and used by the BS to
determine the precoding weights. The example assumes perfect feedback with no quantization or
implementation delays.
Hybrid Beamforming
The example uses the orthogonal matching pursuit (OMP) algorithm [3] for a single-user system and
the joint spatial division multiplexing (JSDM) technique [2, 4] for a multi-user system, to determine
the digital baseband Fbb and RF analog Frf precoding weights for the selected system configuration.
For a single-user system, the OMP partitioning algorithm is sensitive to the array response vectors
At. Ideally, these response vectors account for all the scatterers seen by the channel, but these are
unknown for an actual system and channel realization, so a random set of rays within a 3-D space is
used to cover as many scatterers as possible. The prm.nRays parameter specifies the number of
rays.
For a multi-user system, JSDM groups users with similar transmit channel covariance together and
suppresses the inter-group interference by an analog precoder based on the block diagonalization
method [5]. Here each user is assigned to its own group, thereby leading to no reduction in the
sounding or feedback overhead.
% Calculate the hybrid weights on the transmit side
if prm.numUsers==1
% Single-user OMP
% Spread rays in [az;el]=[-180:180;-90:90] 3D space, equal spacing
% txang = [-180:360/prm.nRays:180; -90:180/prm.nRays:90];
txang = [rand(1,prm.nRays)*360-180;rand(1,prm.nRays)*180-90]; % random
At = steervec(prm.posTxElem,txang);
AtExp = complex(zeros(prm.numCarriers,size(At,1),size(At,2)));
for carrIdx = 1:prm.numCarriers
AtExp(carrIdx,:,:) = At; % same for all sub-carriers
end
else
% Multi-user Joint Spatial Division Multiplexing
[Fbb,mFrf] = helperJSDMTransmitWeights(hDp,prm);
end
For the wideband OFDM system modeled, the analog weights, mFrf, are the averaged weights over
the multiple subcarriers. The array response pattern shows distinct data streams represented by the
stronger lobes. These lobes indicate the spread or separability achieved by beamforming. The
“Introduction to Hybrid Beamforming” on page 17-423 example compares the patterns realized by
the optimal, fully digital approach, with those realized from the selected hybrid approach, for a
single-user system.
Data Transmission
The example models an architecture where each data stream maps to an individual RF chain and
each antenna element is connected to each RF chain. This is shown in the following diagram.
Next, we configure the system's data transmitter. This processing includes channel coding, bit
mapping to complex symbols, splitting of the individual data stream into multiple transmit streams,
baseband precoding of the transmit streams, OFDM modulation with pilot mapping, and RF analog
beamforming for all the transmit antennas employed.
% Convolutional encoder
encoder = comm.ConvolutionalEncoder( ...
'TrellisStructure',poly2trellis(7,[133 171 165]), ...
'TerminationMethod','Terminated');
% Multi-antenna pilots
pilots = helperGenPilots(prm.numDataSymbols,numSTS);
For the selected, fully connected RF architecture, each antenna element uses prm.numSTS phase
shifters, as given by the individual columns of the mFrf matrix.
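The action of the analog weights can be sketched as a single matrix product. Here txSigBB is a hypothetical samples-by-numSTS matrix of precoded baseband streams; mFrf is numSTS-by-numTx, so each column drives one antenna through numSTS phase shifters:

```matlab
% Map the numSTS streams onto the numTx antennas through the RF analog
% beamforming weights
txSigRF = txSigBB*mFrf;      % result: samples-by-numTx
```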
The processing for the data transmission and reception modeled is shown below.
Signal Propagation
The example offers an option for a spatial MIMO channel and a simpler static-flat MIMO channel for
validation purposes.
The scattering model uses a single-bounce ray tracing approximation with a parametrized number of
scatterers. For this example, the number of scatterers is set to 100. The 'Scattering' option models
the scatterers placed randomly within a sphere around the receiver, similar to the one-ring model
[6].
The channel models allow path-loss modeling and both line-of-sight (LOS) and non-LOS propagation
conditions. The example assumes non-LOS propagation and isotropic antenna element patterns with
linear or rectangular geometry.
The same channel is used for both sounding and data transmission. The data transmission has a
longer duration and is controlled by the number of data symbols parameter, prm.numDataSymbols.
The channel evolution between the sounding and transmission stages is modeled by prepending the
preamble signal to the data signal. The preamble primes the channel to a valid state for the data
transmission, and is discarded from the channel output.
The receiver modeled per user compensates for the path loss by amplification and adds thermal
noise. Like the transmitter, the receiver used in a MIMO-OFDM system contains many stages
including OFDM demodulation, MIMO equalization, QAM demapping, and channel decoding.
% OFDM demodulation
rxOFDM = ofdmdemod(rxSigAmp(chanDelay(uIdx)+1: ...
end-(prm.numPadZeros-chanDelay(uIdx)),:),prm.FFTLength, ...
prm.CyclicPrefixLength,prm.CyclicPrefixLength, ...
prm.NullCarrierIndices,prm.PilotCarrierIndices);
% MIMO equalization
% Index into streams for the user of interest
[rxEq,CSI] = helperMIMOEqualize(rxOFDM(:,numSTS+1:end,:),hD(:,stsIdx,:));
% Soft demodulation
rxSymbs = rxEq(:)/sqrt(numTx);
rxLLRBits = qamdemod(rxSymbs,prm.modMode,'UnitAveragePower',true, ...
'OutputType','approxllr','NoiseVariance',nVar);
User 1
RMS EVM (%) = 0.38361
BER = 0.00000; No. of Bits = 9354; No. of errors = 0
User 2
RMS EVM (%) = 1.0311
BER = 0.00000; No. of Bits = 6234; No. of errors = 0
User 3
RMS EVM (%) = 2.1462
BER = 0.00000; No. of Bits = 3114; No. of errors = 0
User 4
RMS EVM (%) = 1.0024
BER = 0.00000; No. of Bits = 6234; No. of errors = 0
For the MIMO system modeled, the displayed receive constellation of the equalized symbols offers a
qualitative assessment of the reception. The actual bit error rate offers the quantitative figure by
comparing the actual transmitted bits with the received decoded bits per user.
The example highlights the use of hybrid beamforming for multi-user MIMO-OFDM systems. It allows
you to explore different system configurations for a variety of channel models by changing a few
system-wide parameters.
The set of configurable parameters includes the number of users, number of data streams per user,
number of transmit/receive antenna elements, array locations, and channel models. By adjusting
these parameters, you can study their individual or combined effects on the overall system. For
example, vary:
• the number of users, prm.numUsers, and their corresponding data streams, prm.numSTSVec, to
switch between multi-user and single-user systems, or
• the channel type, prm.ChanType, or
• the number of rays, prm.nRays, used for a single-user system.
• helperApplyMUChannel.m
• helperArrayInfo.m
• helperGenPreamble.m
• helperGenPilots.m
• helperJSDMTransmitWeights.m
• helperMIMOChannelEstimate.m
• helperMIMOEqualize.m
References
1 Molisch, A. F., et al. "Hybrid Beamforming for Massive MIMO: A Survey." IEEE®
Communications Magazine, Vol. 55, No. 9, September 2017, pp. 134-141.
2 Li, Z., S. Han, and A. F. Molisch. "Hybrid Beamforming Design for Millimeter-Wave Multi-User
Massive MIMO Downlink." IEEE ICC 2016, Signal Processing for Communications Symposium.
3 El Ayach, Omar, et al. "Spatially Sparse Precoding in Millimeter Wave MIMO Systems." IEEE
Transactions on Wireless Communications, Vol. 13, No. 3, March 2014, pp. 1499-1513.
4 Adhikary, A., J. Nam, J.-Y. Ahn, and G. Caire. "Joint Spatial Division and Multiplexing - The Large-
Scale Array Regime." IEEE Transactions on Information Theory, Vol. 59, No. 10, October 2013,
pp. 6441-6463.
5 Spencer, Q., A. L. Swindlehurst, and M. Haardt. "Zero-Forcing Methods for Downlink Spatial
Multiplexing in Multiuser MIMO Channels." IEEE Transactions on Signal Processing, Vol. 52, No.
2, February 2004, pp. 461-471.
6 Shiu, D.-S., G. J. Foschini, M. J. Gans, and J. M. Kahn. "Fading Correlation and Its Effect on the
Capacity of Multielement Antenna Systems." IEEE Transactions on Communications, Vol. 48, No.
3, March 2000, pp. 502-513.
This model simulates a monostatic radar that searches for targets with an unambiguous range of 5
km. If the radar detects a target within 2 km, it switches to a higher PRF to look only for targets
within 2 km, enhancing its capability to detect high-speed targets.
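The relationship between PRF and unambiguous range behind these two operating modes can be sketched as follows; the PRF values here are derived from the ranges stated above, not read from the model's dialog:

```matlab
% Sketch: PRF required for a given maximum unambiguous range,
% R_ua = c/(2*PRF), so PRF = c/(2*R_ua).
c   = 3e8;              % propagation speed (m/s)
rua = [5e3 2e3];        % unambiguous ranges for the two modes (m)
prf = c./(2*rua);       % required PRFs: 30 kHz for 5 km, 75 kHz for 2 km
```

The higher 75 kHz PRF trades unambiguous range for a wider unambiguous Doppler span, which is what enhances detection of high-speed targets.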
The system is very similar to the one used in the “End-to-End Monostatic Radar” on page 17-498
example, with the following notable differences:
1 The waveform block is no longer a source block. Instead, it takes an input, idx, to select which
PRF to use. The available PRF values are specified in the PRF parameter of the waveform dialog.
2 Each time a waveform is transmitted, its corresponding PRF also sets the time when the next
pulse should be transmitted.
3 There is now a controller that determines which PRF to use for the next transmission. At the end of
the signal processing chain, the target range is estimated. The controller uses this
information to decide which PRF to choose for the next transmission.
4 Once the model is compiled, notice that the signal passing through the system can vary in length
because of a possible change of the waveform PRF.
PRF Agility Based on Target Detection
5 The model takes advantage of the controllable sample time so that the system runs at the
appropriate times determined by the varying PRF values.
Several dialog parameters of the model are calculated by the helper function
helperslexPRFSchedulingSim. To open the function from the model, click the Modify Simulation
Parameters block. This function is executed once when the model is loaded. It exports to the
workspace a structure whose fields are referenced by the dialogs. To modify any parameters, either
change the values in the structure at the command prompt or edit the helper function and rerun it to
update the parameter structure.
The figure below shows the detected ranges of the targets. Target ranges are computed from the
round-trip time delay of the reflected signals from the targets. At the start of the simulation, the
radar detects two targets: one slightly beyond 2 km and the other at approximately 3.5 km.
After some time, the first target moves into the 2 km zone and triggers a change of PRF. Then the
received signal covers ranges only up to 2 km. The display is zero-padded so that the plot
limits do not change. Notice that the target at 3.5 km is folded to 1.5 km in range due to range
ambiguity.
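The range-folding effect described above can be sketched directly; an ambiguous target appears at its true range modulo the unambiguous range:

```matlab
% Sketch: apparent range of a target beyond the unambiguous range.
rua       = 2e3;              % unambiguous range after the PRF switch (m)
rtrue     = 3.5e3;            % true target range (m)
rapparent = mod(rtrue, rua);  % folds to 1500 m, the 1.5 km seen in the plot
```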
Summary
This example shows how to build a radar system in Simulink® that dynamically changes its PRF
based on the target detection range. A staggered PRF system can be modeled similarly.
Multicore Simulation of Monostatic Radar System
Introduction
The dataflow execution domain allows you to make use of multiple cores in the simulation of
computationally intensive systems. This example shows how dataflow as the execution domain of a
subsystem improves simulation performance of the model. To learn more about dataflow and how to
run Simulink models using multiple threads, see “Multicore Execution using Dataflow Domain”.
This example simulates a simple end-to-end monostatic radar. Rectangular pulses are amplified by the
transmitter block then propagated to and from a target in free-space. Noise and amplification are
then applied in the receiver preamp block to the return signal, followed by a matched filter. Range
losses are compensated for and the pulses are noncoherently integrated.
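The last two steps of this receive chain, matched filtering followed by noncoherent (square-law) pulse integration, can be sketched on toy data; the waveform and signal below are placeholders, not the model's actual blocks:

```matlab
% Sketch: matched filter then noncoherent integration across pulses.
s = [1 1 1 1].';                      % rectangular pulse samples
x = [zeros(6,1); s; zeros(6,1)];      % one noise-free received pulse
h = conj(flipud(s));                  % matched filter: time-reversed conjugate
y = conv(x, h);                       % matched-filter output, peak at full overlap
Y = repmat(y, 1, 10);                 % 10 identical pulses
z = sqrt(sum(abs(Y).^2, 2));          % noncoherent integration over pulses
[~, peakIdx] = max(z);                % peak index marks the pulse delay
```

The peak lands where the filter fully overlaps the pulse (sample 10 here), and integrating across pulses raises the peak relative to noise without requiring pulse-to-pulse phase coherence.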
This example uses the dataflow domain in Simulink to make use of multiple cores on your desktop to
improve simulation performance. The Domain parameter of the Dataflow Subsystem in this model is
set to Dataflow. You can view this by selecting the subsystem and then selecting View > Property
Inspector. Dataflow domains automatically partition your model and simulate the system using
multiple threads for better simulation performance. Once you set the Domain parameter to Dataflow,
you can use the Dataflow Simulation Assistant to analyze your model to get better performance. You can
open the Dataflow Simulation Assistant by clicking the Dataflow assistant button below the
Automatic frame size calculation parameter in the Property Inspector.
The Dataflow Simulation Assistant suggests changing model settings for optimal simulation
performance. To accept the proposed model settings, next to Suggested model settings for
simulation performance, click Accept all. Alternatively, you can expand the section to change the
settings individually. In this example the model settings are already optimal. In the Dataflow
Simulation Assistant, click the Analyze button to start the analysis of the dataflow domain for
simulation performance. Once the analysis is finished, the Dataflow Simulation Assistant shows how
many threads the dataflow subsystem will use during simulation.
After analyzing the model, the assistant shows one thread because the data dependency between the
blocks in the model prevents blocks from being executed concurrently. By pipelining the data
dependent blocks, the Dataflow Subsystem can increase concurrency for higher data throughput.
Dataflow Simulation Assistant shows the recommended number of pipeline delays as Suggested
Latency. The suggested latency value is computed to give the best performance.
The following diagram shows the Dataflow Simulation Assistant where the Dataflow Subsystem
currently specifies a latency value of zero, and the suggested latency for the system is four.
Click the Accept button next to Suggested Latency in the Dataflow Simulation Assistant to use the
recommended latency for the Dataflow Subsystem.
The Dataflow Simulation Assistant now shows the number of threads as four, meaning that the blocks
inside the dataflow subsystem simulate in parallel using four threads. The use of four pipeline delays
increased the number of blocks that can run in parallel inside the Dataflow Subsystem. You can also
enter the latency value directly in the Latency parameter in the Property Inspector. Simulink shows the
latency parameter value using tags at the output ports of the dataflow subsystem.
We measure the performance improvement from using the dataflow domain by comparing the execution
time taken to run the model with and without dataflow. Execution time is measured using
the sim command, which returns the simulation execution time of the model. These numbers and
analysis were obtained on a Windows desktop computer with an Intel® Xeon® W-2133 CPU @ 3.6 GHz
(6 cores, 12 threads).
Summary
This example shows how dataflow execution domain can improve performance in simulation of a
radar system by using multiple cores on the desktop.
Introduction
The dataflow execution domain allows you to make use of multiple cores in the simulation of
computationally intensive systems. This example shows how dataflow as the execution domain of a
subsystem improves simulation performance of the model. To learn more about dataflow and how to
run Simulink models using multiple threads, see “Multicore Execution using Dataflow Domain”.
Acoustic Beamforming
This example shows acoustic beamforming using a uniform linear array (ULA) of microphones. The
model simulates the reception of three audio signals from different directions on a 10-element
uniformly spaced linear microphone array. After the addition of thermal noise at the receiver,
beamforming is applied for different source angles and the result is played on a sound device. Select
the audio source to play in the audio player by using the dialog of the Select Source block.
This example uses the dataflow domain in Simulink to make use of multiple cores on your desktop to
improve simulation performance. The Domain parameter of the Dataflow Subsystem in this model is
set to Dataflow. You can view this by selecting the subsystem and then selecting View > Property
Inspector. Dataflow domains automatically partition your model and simulate the system using
multiple threads for better simulation performance. Once you set the Domain parameter to Dataflow,
you can use the Dataflow Simulation Assistant to analyze your model to get better performance. You can
open the Dataflow Simulation Assistant by clicking the Dataflow assistant button below the
Automatic frame size calculation parameter in the Property Inspector.
The Dataflow Simulation Assistant suggests changing model settings for optimal simulation
performance. To accept the proposed model settings, next to Suggested model settings for
simulation performance, click Accept all. Alternatively, you can expand the section to change the
settings individually. In this example the model settings are already optimal. In the Dataflow
Simulation Assistant, click the Analyze button to start the analysis of the dataflow domain for
simulation performance. Once the analysis is finished, the Dataflow Simulation Assistant shows how
many threads the dataflow subsystem will use during simulation.
After analyzing the model, the assistant shows three threads. This is because the three beamformer
blocks are computationally intensive and can run in parallel. The three beamformer blocks, however,
depend on the Microphone Array and the Receiver blocks. Pipeline delays can be used to break this
dependency and increase concurrency. The Dataflow Simulation Assistant shows the recommended
number of pipeline delays as Suggested Latency. The suggested latency value is computed to give the
best performance.
Multicore Simulation of Audio Beamforming System
The following diagram shows the Dataflow Simulation Assistant where the Dataflow Subsystem
currently specifies a latency value of zero, and the suggested latency for the system is one. Click the
Accept button next to Suggested Latency in the Dataflow Simulation Assistant to use the
recommended latency for the Dataflow Subsystem. You can also enter the latency value directly in the
Latency parameter in the Property Inspector. Simulink shows the latency parameter value using
tags at the output ports of the dataflow subsystem.
We measure the performance improvement from using the dataflow domain by comparing the execution
time taken to run the model with and without dataflow. Execution time is measured using the
sim command, which returns the simulation execution time of the model. These numbers and analysis
were obtained on a Windows desktop computer with an Intel® Xeon® W-2133 CPU @ 3.6 GHz (6 cores,
12 threads).
Summary
This example shows how multithreading using the dataflow domain can improve performance in an
audio beamforming system simulation using multiple cores on the desktop.
Radar Definition
A well-known weather radar is the Weather Surveillance Radar, 1988 Doppler (WSR-88D), also known
as NEXRAD, which is operated by the US National Weather Service, the FAA, and the DoD. For more
information, see the NEXRAD Radar Operations Center website.
To translate these requirements into radar parameters, follow the process in the example
“Designing a Basic Monostatic Pulse Radar” on page 17-449. In this example, for the sake of
simplicity, load precalculated radar parameters.
load NEXRAD_Parameters.mat
Antenna Pattern
As NEXRAD is polarimetric, modeling the polarimetric characteristics of the antenna and weather
targets is important. According to NEXRAD specifications, the antenna pattern has a beamwidth of
about 1 degree and a first sidelobe below -30 dB.
azang = [-180:0.5:180];
elang = [-90:0.5:90];
% We synthesize a pattern using isotropic antenna elements and tapering the
% amplitude distribution to make it follow NEXRAD specifications.
magpattern = load('NEXRAD_pattern.mat');
phasepattern = zeros(size(magpattern.pat));
% The polarimetric antenna is assumed to have ideally matched horizontal
% and vertical polarization pattern.
antenna = phased.CustomAntennaElement('AzimuthAngles',azang,...
    'ElevationAngles',elang,...
    'HorizontalMagnitudePattern',magpattern.pat,...
    'VerticalMagnitudePattern',magpattern.pat,...
    'HorizontalPhasePattern',phasepattern,...
    'VerticalPhasePattern',phasepattern,...
    'SpecifyPolarizationPattern',true);
Simulating a Polarimetric Radar Return for Weather Observation
clear magpattern
clear phasepattern
D = pattern(antenna,fc,azang,0);
P = polarpattern(azang,D,'TitleTop','Polar Pattern for Azimuth Cut (elevation angle = 0 degree)')
P.AntennaMetrics = 1;
removeAllCursors(P);
radiator = phased.Radiator(...
    'Sensor',antenna,'Polarization','Dual',...
    'OperatingFrequency',fc);
collector = phased.Collector(...
    'Sensor',antenna,'Polarization','Dual',...
    'OperatingFrequency',fc);
Weather Target
Generally, weather radar data is categorized into three levels. Level-I data is raw time series I/Q data
as input to the signal processor in the Radar Data Acquisition unit. Level-II data consists of the radar
spectral moments (reflectivity, mean radial velocity, and spectrum width) and polarimetric moments
(differential reflectivity, correlation coefficient, and differential phase) output from the signal
processor. Level-III data is the output product data of the radar product generator, such as
hydrometeor classification, storm total precipitation, and tornadic vortex signature.
In this example, Level-II data from KTLX NEXRAD radar at 20:08:11 UTC on May 20th, 2013 is used.
This data comes from an intense tornado that occurred in Moore, Oklahoma and is used to generate
mean radar cross section (RCS) of equivalent scattering centers. The data is available via FTP
download. It represents a volume scan that includes a series of 360-degree sweeps of the antenna at
predetermined elevation angles completed in a specified period of time. In the data file name
KTLX20130520_200811_V06, KTLX refers to the radar site name, 20130520_200811 refers to the
date and time when the data was collected, and V06 refers to the data format of version 6. In this
simulation, the lowest elevation cut (0.5 degree) is extracted from the volume scan data as an
example.
Read the Level-II data into the workspace. Store it in the nexrad structure array, which contains all
the radar moments as well as an azimuth field that specifies the azimuth angle for each radial data
point in the Cartesian coordinate system. For simplicity, load NEXRAD data that was transformed
from a compressed file to a MAT-file.
load NEXRAD_data.mat;
Define an area of interest (AOI) in terms of azimuth and range in Cartesian coordinates.
Because weather targets are polarimetric and distributed in a plane, they can be represented by
specifying scattering matrices at discrete azimuth angles. Weather target reflectivity is defined as the
mean backscattering cross section per unit volume. Based on the weather radar equation, weather
targets can be considered as a collection of small targets within each resolution volume. The overall
reflectivity can be transformed to the mean RCS and regarded as an equivalent scattering center. As
a result, each element in the scattering matrix is the square root of RCS in relevant polarization.
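As a sketch of the relation behind this transformation (the exact implementation of the helperdBZ2RCS helper is not shown here), for Rayleigh scatterers the volume reflectivity $\eta$ and the equivalent mean RCS $\bar{\sigma}$ can be written as:

```latex
\eta = \frac{\pi^5}{\lambda^4}\,\lvert K_w \rvert^2\, Z,
\qquad
\bar{\sigma} = \eta\, V_{\mathrm{res}},
```

where $Z$ is the linear reflectivity (converted to m^6/m^3), $\lvert K_w\rvert^2 \approx 0.93$ is the dielectric factor of water, and $V_{\mathrm{res}}$ is the resolution volume set by the beamwidth and pulse width. Each scattering-matrix element is then $\sqrt{\bar{\sigma}}$ in the relevant polarization, consistent with the sqrt(RCS) assignments in the code below.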
shhpat = zeros(2,2,Ns);
svvpat = zeros(2,2,Ns);
shvpat = zeros(2,2,Ns);
zz = 0;
% NEXRAD beamwidth is about 1 degree.
beamwidth = 1.0;
for ii = nexrad_aoi.rlow:nexrad_aoi.rup
    theta = nexrad.azimuth(ii);
    for jj = 1:num_bin
        if isnan(nexrad.ZH(ii,jj))==0
            zz = zz+1;
            rpos = (jj-1)*range_res + blind_rg;
            tpos = [rpos*cosd(theta);rpos*sind(theta);0];
            tgtpos(:,zz) = tpos;
            RCSH(zz) = helperdBZ2RCS(beamwidth,rpos,lambda,pulse_width,nexrad.ZH(ii,jj),prop_speed);
            shhpat(:,:,zz) = sqrt(RCSH(zz))*ones(2,2);
            RCSV(zz) = helperdBZ2RCS(beamwidth,rpos,lambda,pulse_width,nexrad.ZV(ii,jj),prop_speed);
            svvpat(:,:,zz) = sqrt(RCSV(zz))*ones(2,2);
        end
    end
end
tgtmotion = phased.Platform('InitialPosition',tgtpos,'Velocity',tgtvel);
target = phased.BackscatterRadarTarget('EnablePolarization',true,...
    'Model','Nonfluctuating','AzimuthAngles',azpatangs,...
    'ElevationAngles',elpatangs,'ShhPattern',shhpat,'ShvPattern',shvpat,...
    'SvvPattern',svvpat,'OperatingFrequency',fc);
Generate a radar data cube using the defined radar system parameters. Within each resolution
volume, include the appropriate correlation to ensure the resulting I/Q data presents proper weather
signal statistical properties.
rxh_aoi = complex(zeros(nexrad_aoi.rgnum,nexrad_aoi.aznum));
rxv_aoi = complex(zeros(nexrad_aoi.rgnum,nexrad_aoi.aznum));
% The number of realization sequences
realiznum = 1000;
% The number of unusable range bins due to NEXRAD blind range
i0 = blind_rg/range_res;
% Rotate sensor platform to simulate NEXRAD scanning in azimuth
for kk = 1:nexrad_aoi.aznum
    axes = rotz(nexrad.azimuth(kk+nexrad_aoi.r1-1));
    % Update sensor and target positions
    [sensorpos,sensorvel] = sensormotion(1/prf);
    [tgtpos,tgtvel] = tgtmotion(1/prf);
    % Form transmit beam for this scan angle and simulate propagation
    pulse = waveform();
    [txsig,txstatus] = transmitter(pulse);
    % Adopt simultaneous transmission and reception mode as NEXRAD
    txsig = radiator(txsig,txsig,tgtang,axes);
    txsig = channel(txsig,sensorpos,tgtpos,sensorvel,tgtvel);
    ang_az = tgtang(1:2:end);
    ang_az = ang_az+(-1).^(double(ang_az>0))*180;
    tgtsig = target(txsig,[ang_az;zeros(size(ang_az))],axes);
    % Matched filtering
    [rxh, mfgainh] = matchedfilter(rxh);
    [rxv, mfgainv] = matchedfilter(rxv);
    rxh = [rxh(matchingdelay+1:end);zeros(matchingdelay,1)];
    rxv = [rxv(matchingdelay+1:end);zeros(matchingdelay,1)];
    % Decimation
    rxh = rxh(1:2:end);
    rxv = rxv(1:2:end);
clear txsig
clear tgtsig
Using pulse pair processing, calculate all the radar moments from estimates of correlations, including
reflectivity, mean radial velocity, spectrum width, differential reflectivity, correlation coefficient, and
differential phase.
moment = helperWeatherMoment(rxh_aoi,rxv_aoi,nexrad_aoi,pulnum,realiznum,prt,lambda);
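As a hedged sketch of what such pulse-pair estimators compute (the internals of helperWeatherMoment are not shown here, and sign conventions may differ), the mean radial velocity follows from the lag-1 autocorrelation of the slow-time samples:

```matlab
% Sketch: pulse-pair power and mean-velocity estimators on synthetic data.
prt    = 1e-3;                              % placeholder pulse repetition time (s)
lambda = 0.1;                               % placeholder wavelength (m)
n = (0:63).';
x = exp(1i*2*pi*100*n*prt);                 % synthetic return with 100 Hz Doppler
R0 = mean(abs(x).^2);                       % lag-0 correlation -> signal power
R1 = mean(conj(x(1:end-1)).*x(2:end));      % lag-1 correlation
vr = -lambda/(4*pi*prt)*angle(R1);          % mean radial velocity estimate
```

With the meteorological sign convention used here (positive velocity away from the radar), the 100 Hz Doppler shift maps to vr = -lambda*fd/2 = -5 m/s; spectrum width follows similarly from the ratio |R1|/R0.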
Simulation Result
Compare the simulation result with the NEXRAD ground truth. Evaluate the simulated data quality
using error statistics, a sector image, a range profile, and a scatter plot. Error statistics are
expressed as the bias and standard deviation of the estimated radar moments compared to the
NEXRAD Level-II data (truth fields).
azimuth = nexrad.azimuth(nexrad_aoi.r1:nexrad_aoi.r2);
range = (nexrad_aoi.b1-1:nexrad_aoi.b2-1)*250 + 2000;
Reflectivity
Reflectivity, Z, is the zeroth moment of the Doppler spectrum and is related to the liquid water content
or precipitation rate in the resolution volume. Because values of Z commonly encountered in
weather observations span many orders of magnitude, radar meteorologists use a logarithmic scale,
10*log10(Z), expressed in dBZ, where Z is in units of mm^6/m^3.
[Z_bias,Z_std] = helperDataQuality(nexrad_aoi,moment,range,azimuth,'Z');
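The dBZ scale just described can be sketched as a simple pair of conversions (the values here are illustrative, not taken from the data):

```matlab
% Sketch: linear reflectivity <-> dBZ conversion.
Z     = 1e4;               % linear reflectivity, mm^6/m^3
dBZ   = 10*log10(Z);       % 40 dBZ, typical of heavy rain
Zback = 10^(dBZ/10);       % inverse mapping recovers Z
```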
Radial Velocity
Radial velocity, Vr, is the first moment of the power-normalized spectrum, which reflects the air motion
toward or away from the radar.
[Vr_bias,Vr_std] = helperDataQuality(nexrad_aoi,moment,range,azimuth,'Vr');
Spectrum Width
Spectrum width, σv, is the square root of the second moment of the normalized spectrum. The
spectrum width is a measure of the velocity dispersion, that is, shear or turbulence within the
resolution volume.
[sigmav_bias,sigmav_std] = helperDataQuality(nexrad_aoi,moment,range,azimuth,'sigmav');
Differential Reflectivity
Differential reflectivity, ZDR, is estimated from the ratio of the power estimates for the horizontal and
vertical polarization signals. The differential reflectivity is useful in hydrometeor classification.
[ZDR_bias,ZDR_std] = helperDataQuality(nexrad_aoi,moment,range,azimuth,'ZDR');
Correlation Coefficient
The correlation coefficient, ρHV, represents the consistency of the horizontal and vertical returned
power and phase for each pulse. The correlation coefficient plays an important role in determining
system performance and classifying radar echo types.
[Rhohv_bias,Rhohv_std] = helperDataQuality(nexrad_aoi,moment,range,azimuth,'Rhohv');
Differential Phase
The differential phase, ΦDP, is the difference in the phase delay of the returned pulse from the
horizontal and vertical polarizations. The differential phase provides information on the nature of the
scatterers that are being sampled.
[Phidp_bias,Phidp_std] = helperDataQuality(nexrad_aoi,moment,range,azimuth,'Phidp');
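The three polarimetric moments above can be sketched as sample-correlation estimates on the H and V channel samples (toy data; conventions may differ from helperWeatherMoment):

```matlab
% Sketch: ZDR, rhohv, and phidp estimators on fully correlated toy channels.
s = exp(1i*0.3*(1:100).');                % common weather signal
H = 2*s;                                  % horizontal channel, stronger
V = 1*s;                                  % vertical channel
ZDR   = 10*log10(mean(abs(H).^2)/mean(abs(V).^2));                    % power ratio, dB
Rhohv = abs(mean(H.*conj(V)))/sqrt(mean(abs(H).^2)*mean(abs(V).^2));  % magnitude of
                                                                      % normalized H-V correlation
Phidp = rad2deg(angle(mean(V.*conj(H))));                             % H-V phase difference, deg
```

For these perfectly correlated, in-phase channels, ZDR is 10*log10(4) ≈ 6.02 dB, Rhohv is 1, and Phidp is 0 degrees.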
Error Statistics
The figures in the previous section provide a qualitative visual measure of the simulation quality. This
section shows a quantitative comparison of the estimates with the NEXRAD specifications in the form
of error statistics.
MomentName = {'Z';'Vr';'sigmav';'ZDR';'Rhohv';'Phidp'};
STDEV = [round(Z_std,2);round(Vr_std,2);round(sigmav_std,2);round(ZDR_std,2);round(Rhohv_std,3);round(Phidp_std,2)];
Specs = [1;1;1;0.2;0.01;2];
Unit = {'dB';'m/s';'m/s';'dB';'';'degree'};
T = table(MomentName,STDEV,Specs,Unit);
disp(T);
By comparison, all the radar moment estimates meet NEXRAD specifications, which indicates good
data quality.
Summary
This example showed how to simulate the polarimetric Doppler radar return from an area of
distributed weather targets. Visual comparison and error statistics showed the estimated radar
moments met the NEXRAD ground truth specifications. With this example, you can further explore
the simulated time series data in other applications such as waveform design, system performance
study, and data quality evaluation for weather radar.
References
[1] Doviak, R and D. Zrnic. Doppler Radar and Weather Observations, 2nd Ed. New York: Dover, 2006.
[2] Zhang, G. Weather Radar Polarimetry. Boca Raton: CRC Press, 2016.
[3] Li, Z, S. Perera, Y. Zhang, G. Zhang, and R. Doviak. "Time-Domain System Modeling and
Applications for Multi-Function Array Radar Weather Measurements." 2018 IEEE Radar Conference
(RadarConf18), Oklahoma City, OK, 2018, pp. 1049-1054.
Introduction
A moving target introduces a frequency shift in the radar return due to the Doppler effect. However,
because most targets are not rigid bodies, there are often other vibrations and rotations in different
parts of the target in addition to the platform movement. For example, when a helicopter flies, its
blades rotate, or when a person walks, their arms swing naturally. These micro-scale movements
produce additional Doppler shifts, referred to as micro-Doppler effects, which are useful in
identifying target features. This example shows two applications where micro-Doppler effects can be
helpful. In the first application, micro-Doppler signatures are used to determine the blade speed of a
helicopter. In the second application, the micro-Doppler signatures are used to identify a pedestrian
in an automotive radar return.
Consider a helicopter with four rotor blades. Assume the radar is located at the origin. Specify the
helicopter location as (500, 0, 500) meters relative to the radar and its velocity as (60, 0, 0) m/s.
radarpos = [0;0;0];
radarvel = [0;0;0];
tgtinitpos = [500;0;500];
tgtvel = [60;0;0];
tgtmotion = phased.Platform('InitialPosition',tgtinitpos,'Velocity',tgtvel);
In this simulation, the helicopter is modeled by five scatterers: the rotation center and the tips of four
blades. The rotation center moves with the helicopter body. Each blade tip is 90 degrees apart from
the tip of its neighboring blades. The blades are rotating at a constant speed of 4 revolutions per
second. The arm length of each blade is 6.5 meters.
Nblades = 4;
bladeang = (0:Nblades-1)*2*pi/Nblades;
bladelen = 6.5;
bladerate = deg2rad(4*360); % rps -> rad/sec
All four blade tips are assumed to have identical reflectivities while the reflectivity for the rotation
center is stronger.
c = 3e8;
fc = 5e9;
helicop = phased.RadarTarget('MeanRCS',[10 .1 .1 .1 .1],'PropagationSpeed',c,...
'OperatingFrequency',fc);
Assume the radar operates at 5 GHz with a simple pulse. The pulse repetition frequency is 20 kHz.
For simplicity, assume the signal propagates in free space.
fs = 1e6;
prf = 2e4;
lambda = c/fc;
wav = phased.RectangularWaveform('SampleRate',fs,'PulseWidth',2e-6,'PRF',prf);
ura = phased.URA('Size',4,'ElementSpacing',lambda/2);
tx = phased.Transmitter;
rx = phased.ReceiverPreamp;
env = phased.FreeSpace('PropagationSpeed',c,'OperatingFrequency',fc,...
'TwoWayPropagation',true,'SampleRate',fs);
txant = phased.Radiator('Sensor',ura,'PropagationSpeed',c,'OperatingFrequency',fc);
rxant = phased.Collector('Sensor',ura,'PropagationSpeed',c,'OperatingFrequency',fc);
At each pulse, the helicopter moves along its trajectory. Meanwhile, the blades keep rotating, and the
tips of the blades introduce additional displacement and angular speed.
NSampPerPulse = round(fs/prf);
Niter = 1e4;
y = complex(zeros(NSampPerPulse,Niter));
rng(2018);
for m = 1:Niter
    % update helicopter motion
    t = (m-1)/prf;
    [scatterpos,scattervel,scatterang] = helicopmotion(t,tgtmotion,bladeang,bladelen,bladerate);
    % simulate echo
    x = txant(tx(wav()),scatterang);                      % transmit
    xt = env(x,radarpos,scatterpos,radarvel,scattervel);  % propagates to/from scatterers
    xt = helicop(xt);                                     % reflect
    xr = rx(rxant(xt,scatterang));                        % receive
    y(:,m) = sum(xr,2);                                   % beamform
end
This figure shows the range-Doppler response using the first 128 pulses of the received signal. The
display shows three returns at the target range of approximately 700 meters.
rdresp = phased.RangeDopplerResponse('PropagationSpeed',c,'SampleRate',fs,...
'DopplerFFTLengthSource','Property','DopplerFFTLength',128,'DopplerOutput','Speed',...
'OperatingFrequency',fc);
mfcoeff = getMatchedFilter(wav);
plotResponse(rdresp,y(:,1:128),mfcoeff);
ylim([0 3000])
While the returns look as though they are from different targets, they are actually all from the same
target. The center return is from the rotation center and is much stronger than the other two
returns, because the reflection from the helicopter body is stronger than from the blade tips. The
plot shows a speed of -40 m/s for the rotation center. This value approximately matches the true
target radial speed.
tgtpos = scatterpos(:,1);
tgtvel = scattervel(:,1);
tgtvel_truth = radialspeed(tgtpos,tgtvel,radarpos,radarvel)
tgtvel_truth =
-43.6435
The other two returns are from the tips of the blades when they move toward or away from the radar
at the maximum speed. From the plot, the speeds corresponding to these approaching and departing
detections are about 75 m/s and -160 m/s, respectively.
maxbladetipvel = [bladelen*bladerate;0;0];
vtp = radialspeed(tgtpos,-maxbladetipvel+tgtvel,radarpos,radarvel)
vtn = radialspeed(tgtpos,maxbladetipvel+tgtvel,radarpos,radarvel)
vtp =
75.1853
vtn =
-162.4723
You can associate all three detections to the same target via further processing, but that topic is
beyond the scope of this example.
The time-frequency representation of micro-Doppler effects can reveal more information. This code
constructs a time-frequency representation in the detected target range bin.
mf = phased.MatchedFilter('Coefficients',mfcoeff);
ymf = mf(y);
[~,ridx] = max(sum(abs(ymf),2)); % detection via peak finding along range
pspectrum(ymf(ridx,:),prf,'spectrogram')
The figure shows the micro-Doppler modulation caused by blade tips around a constant Doppler shift.
The image suggests that each blade tip introduces a sinusoid-like Doppler modulation. As noted in the
figure below, within each period of the sinusoid, three more sinusoids appear at equal distances.
This suggests that the helicopter is equipped with four equally spaced blades.
hanno = helperAnnotateMicroDopplerSpectrogram(gcf);
In addition to the number of blades, the image also shows that the period of each sinusoid, Tr, is
about 250 ms. This value means that a blade returns to its original position after 250 ms. In this case,
the angular speed of the helicopter is about 4 revolutions per second, which matches the simulation
parameter.
Tp = 250e-3;
bladerate_est = 1/Tp
bladerate_est =
     4
This image also shows the tip velocity Vt, which can be derived from the maximum Doppler. The
maximum Doppler is about 4 kHz away from the constant Doppler introduced by the bulk movement.
Calculate the detected maximum tip velocity.
Vt_detect = dop2speed(4e3,c/fc)/2
Vt_detect =
120
This value is the maximum tip velocity along the radial direction. To obtain the correct maximum tip
velocity, the relative orientation must be taken into consideration. Because the blades are spinning in
a circle, the detection is not affected by the azimuth angle. Correct only the elevation angle for the
maximum tip velocity result.
doa = phased.MUSICEstimator2D('SensorArray',ura,'OperatingFrequency',fc,...
'PropagationSpeed',c,'DOAOutputPort',true,'ElevationScanAngles',-90:90);
[~,ang_est] = doa(xr);
Vt_est = Vt_detect/cosd(ang_est(2))
Vt_est =
164.0793
Based on the corrected maximum tip velocity and the blade-spinning rate, calculate the blade length.
bladelen_est = Vt_est/(bladerate_est*2*pi)
bladelen_est =
6.5285
Note that the result matches the simulation parameter of 6.5 meters. Information such as the number
of blades, blade length, and blade rotation rate can help identify the model of the helicopter.
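The chain of estimates above can be cross-checked against the simulation truth; the numbers below restate parameters already given in this example:

```matlab
% Sketch: ground-truth tip speed implied by the simulation parameters.
bladelen  = 6.5;                  % blade length (m)
bladerate = 4*2*pi;               % 4 revolutions per second, in rad/s
vtip = bladelen*bladerate;        % true tip speed, about 163.4 m/s
bladelen_back = vtip/bladerate;   % inverting the relation used for bladelen_est
```

The true tip speed of about 163.4 m/s agrees with the corrected estimate Vt_est of 164.08 m/s, and inverting the relation recovers the 6.5 m blade length.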
Consider an ego car with an FMCW automotive radar system that has a bandwidth of 250 MHz and
operates at 24 GHz.
bw = 250e6;
fs = bw;
fc = 24e9;
tm = 1e-6;
wav = phased.FMCWWaveform('SampleRate',fs,'SweepTime',tm,...
'SweepBandwidth',bw);
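The chosen sweep bandwidth fixes the range resolution of this FMCW system; this implication is not computed explicitly in the example, so the check below is a sketch:

```matlab
% Sketch: FMCW range resolution from sweep bandwidth, deltaR = c/(2*B).
c  = 3e8;                 % propagation speed (m/s)
bw = 250e6;               % sweep bandwidth (Hz)
deltaR = c/(2*bw);        % 0.6 m range resolution
```

A 0.6 m resolution is fine enough to separate the parked car and the pedestrian in range when their separation exceeds roughly one resolution cell.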
The ego car is traveling along the road. Along the way, there is a car parked on the side of the street
and a human is walking out from behind the car. The scene is illustrated in the following diagram.
Introduction to Micro-Doppler Effects
Based on this setup, if the ego car cannot identify that a pedestrian is present, an accident may occur.
egocar_pos = [0;0;0];
egocar_vel = [30*1600/3600;0;0];
egocar = phased.Platform('InitialPosition',egocar_pos,'Velocity',egocar_vel,...
'OrientationAxesOutputPort',true);
parkedcar_pos = [39;-4;0];
parkedcar_vel = [0;0;0];
parkedcar = phased.Platform('InitialPosition',parkedcar_pos,'Velocity',parkedcar_vel,...
'OrientationAxesOutputPort',true);
parkedcar_tgt = phased.RadarTarget('PropagationSpeed',c,'OperatingFrequency',fc,'MeanRCS',10);
ped_pos = [40;-3;0];
ped_vel = [0;1;0];
ped_heading = 90;
ped_height = 1.8;
ped = phased.BackscatterPedestrian('InitialPosition',ped_pos,'InitialHeading',ped_heading,...
    'PropagationSpeed',c,'OperatingFrequency',fc,'Height',ped_height,'WalkingSpeed',1);
chan_ped = phased.FreeSpace('PropagationSpeed',c,'OperatingFrequency',fc,...
'TwoWayPropagation',true,'SampleRate',fs);
chan_pcar = phased.FreeSpace('PropagationSpeed',c,'OperatingFrequency',fc,...
'TwoWayPropagation',true,'SampleRate',fs);
tx = phased.Transmitter('PeakPower',1,'Gain',25);
rx = phased.ReceiverPreamp('Gain',25,'NoiseFigure',10);
The following figure shows the range-Doppler map generated from the ego car's radar over time.
Because the parked car is a much stronger target than the pedestrian, the pedestrian is easily
shadowed by the parked car in the range-Doppler map. As a result, the map always shows a single
target.
This means that conventional processing cannot satisfy our needs in this situation.
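The range-Doppler map itself is not listed here. One sketch of how such a map can be produced from the dechirped data (the variable xd is computed in the dechirp step later in this section) uses phased.RangeDopplerResponse with the FMCW parameters defined above:

```matlab
% Sketch: range-Doppler map of the dechirped FMCW pulses. Assumes xd, the
% dechirped data matrix, has been computed as shown later in this example.
rngdopresp = phased.RangeDopplerResponse('PropagationSpeed',c,...
    'DopplerOutput','Speed','OperatingFrequency',fc,'SampleRate',fs,...
    'RangeMethod','FFT','SweepSlope',bw/tm);
clf;
plotResponse(rngdopresp,xd,'Unit','db');
```

Because the FFT range method is used, the response object dechirp-processes the fast-time dimension directly, which matches the beat-frequency structure of FMCW returns.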
The micro-Doppler effect in the time-frequency domain is a good candidate for identifying whether a
pedestrian signature is embedded in the radar signal. As an example, the following section simulates
the radar return for 2.5 seconds.
Tsamp = 0.001;
npulse = 2500;
xr = complex(zeros(round(fs*tm),npulse));
xr_ped = complex(zeros(round(fs*tm),npulse));
for m = 1:npulse
    [pos_ego,vel_ego,ax_ego] = egocar(Tsamp);
    [pos_pcar,vel_pcar,ax_pcar] = parkedcar(Tsamp);
    [pos_ped,vel_ped,ax_ped] = move(ped,Tsamp,ped_heading);
    [~,angrt_ped] = rangeangle(pos_ego,pos_ped,ax_ped);
    [~,angrt_pcar] = rangeangle(pos_ego,pos_pcar,ax_pcar);
    x = tx(wav());
    xt_ped = chan_ped(repmat(x,1,size(pos_ped,2)),pos_ego,pos_ped,vel_ego,vel_ped);
    xt_pcar = chan_pcar(x,pos_ego,pos_pcar,vel_ego,vel_pcar);
    xt_ped = reflect(ped,xt_ped,angrt_ped);
    xt_pcar = parkedcar_tgt(xt_pcar);
    xr_ped(:,m) = rx(xt_ped);
    xr(:,m) = rx(xt_ped+xt_pcar);
end
xd_ped = conj(dechirp(xr_ped,x));
xd = conj(dechirp(xr,x));
In the simulated signal, xd_ped contains only the pedestrian's return, while xd contains the return
from both the pedestrian and the parked car. Generating a spectrogram using only the return of the
pedestrian produces the plot shown below.
clf;
spectrogram(sum(xd_ped),kaiser(128,10),120,256,1/Tsamp,'centered','yaxis');
clim = get(gca,'CLim');
set(gca,'CLim',clim(2)+[-50 0])
Note that the swing of the arms and legs produces many parabolic curves in the time-frequency
domain. Such features can therefore be used to determine whether a pedestrian is present in the
scene.
However, when we generate a spectrogram directly from the total return, we get the following plot.
spectrogram(sum(xd),kaiser(128,10),120,256,1/Tsamp,'centered','yaxis');
clim = get(gca,'CLim');
set(gca,'CLim',clim(2)+[-50 0])
The parked car's return continues to dominate, even in the time-frequency domain, so the time-
frequency response shows only the Doppler relative to the parked car. The Doppler frequency drops
because the ego car gets closer to the parked car and the relative speed drops toward 0.
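As a rough check on the dominant Doppler seen in this plot, the bulk return from the stationary parked car appears at the Doppler shift corresponding to the ego car's own speed (the values below reuse the parameters defined earlier in this example):

```matlab
% Expected two-way Doppler shift of a stationary object as seen from the
% moving ego car (speed as set in egocar_vel above, carrier at 24 GHz).
v_ego = 30*1600/3600;   % ego car speed (m/s)
lambda = c/fc;          % radar wavelength (m)
fd_bulk = 2*v_ego/lambda
```

This evaluates to roughly 2.1 kHz, which is the constant Doppler ridge that dominates the spectrogram before the relative speed drops.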
To see whether there is a return hidden behind the strong return, use the singular value
decomposition. The following plot shows the distribution of the singular values of the dechirped
pulses.
[uxd,sxd,vxd] = svd(xd);
clf
plot(10*log10(diag(sxd)));
xlabel('Rank');
ylabel('Singular Values');
hold on;
plot([56 56],[-40 10],'r--');
plot([100 100],[-40 10],'r--');
plot([110 110],[-40 10],'r--');
text(25,-10,'A');
text(75,-10,'B');
text(105,-10,'C');
text(175,-10,'D');
From the curve, it is clear that there are approximately four regions. Region A represents the most
significant contribution to the signal, which is the parked car. Region D represents the noise.
Regions B and C are therefore due to a mix of the parked car return and the pedestrian return.
Because the return from the pedestrian is much weaker than the return from the parked car, in
region B it can still be masked by the residue of the parked car return. Therefore, pick region C to
reconstruct the signal, and then plot the time-frequency response again.
rk = 100:110;
xdr = uxd(:,rk)*sxd(rk,:)*vxd';
clf
spectrogram(sum(xdr),kaiser(128,10),120,256,1/Tsamp,'centered','yaxis');
clim = get(gca,'CLim');
set(gca,'CLim',clim(2)+[-50 0])
With the return from the car successfully filtered out, the micro-Doppler signature from the
pedestrian appears. Therefore, we can conclude that there is a pedestrian in the scene and act
accordingly to avoid an accident.
Summary
This example introduces the basic concept of a micro-Doppler effect and shows its impact on the
target return. It also shows how to extract a micro-Doppler signature from the received I/Q signal and
then derive relevant target parameters from the micro-Doppler information.
References
[1] Chen, V. C. The Micro-Doppler Effect in Radar. Artech House, 2011.
[2] Chen, V. C., F. Li, S.-S. Ho, and H. Wechsler. "Micro-Doppler Effect in Radar: Phenomenon,
Model, and Simulation Study." IEEE Transactions on Aerospace and Electronic Systems, Vol. 42,
No. 1, January 2006.
Utility Functions
prf = 2e4;
radarpos = [0;0;0];
Nblades = size(BladeAng,2);
[tgtpos,tgtvel] = tgtmotion(1/prf);
RotAng = BladeRate*t;
scatterpos = [0 ArmLength*cos(RotAng+BladeAng);0 ArmLength*sin(RotAng+BladeAng);zeros(1,Nblades+1)]+tgtpos;
scattervel = [0 -BladeRate*ArmLength*sin(RotAng+BladeAng);...
0 BladeRate*ArmLength*cos(RotAng+BladeAng);zeros(1,Nblades+1)]+tgtvel;
[~,scatterang] = rangeangle(scatterpos,radarpos);
end
Search and Track Scheduling for Multifunction Phased Array Radar
Radar Configuration
Assume the multifunction radar operates at S band and must detect targets between 2 km and 100
km, with a minimum target radar cross section (RCS) of 1 square meter.
fc = 2e9; % Radar carrier frequency (Hz)
c = 3e8; % Propagation speed (m/s)
lambda = c/fc; % Radar wavelength (m)
Waveform
To satisfy the range requirement, define and use a linear FM waveform with a 1 MHz bandwidth.
bw = 1e6;
fs = 1.5*bw;
prf = 1/range2time(maxrng,c);
dcycle = 0.1;
rngres =
150
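The displayed range resolution of 150 meters follows directly from the 1 MHz sweep bandwidth. As a check (the variable name rngres_check is used here only for illustration):

```matlab
% Range resolution implied by the 1 MHz LFM bandwidth
rngres_check = c/(2*bw)
```

This returns 150 meters, matching the value shown above.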
Radar Antenna
The multifunction radar is equipped with a phased array that can electronically scan the radar beams
in space. Use a 50-by-50 rectangular array with elements separated by half wavelength to achieve a
half power beam width of approximately 2 degrees.
arraysz = 50;
ant = phased.URA('Size',arraysz,'ElementSpacing',lambda/2);
ant.Element.BackBaffled = true;
arraystv = phased.SteeringVector('SensorArray',ant,'PropagationSpeed',c);
radiator = phased.Radiator('OperatingFrequency',fc, ...
'PropagationSpeed', c, 'Sensor',ant, 'WeightsInputPort', true);
collector = phased.Collector('OperatingFrequency',fc, ...
'PropagationSpeed', c, 'Sensor',ant);
beamw = rad2deg(lambda/(arraysz*lambda/2))
beamw =
2.2918
Use the detection requirements to derive the appropriate transmit power. Assume the noise figure on
the receiving preamplifier is 7 dB.
ppower = radareqpow(lambda,maxrng,snr_min,wav.PulseWidth,...
'RCS',tgtrcs,'Gain',ampgain+ant_snrgain);
tx = phased.Transmitter('PeakPower',ppower,'Gain',ampgain,'InUseOutputPort',true);
rx = phased.ReceiverPreamp('Gain',ampgain,'NoiseFigure',7,'EnableInputPort',true);
Signal Processing
The multifunction radar applies a sequence of operations to the received signal, including matched
filtering, time-varying gain, monopulse, and detection, to generate range and angle measurements of
the detected targets.
% matched filter
mfcoeff = getMatchedFilter(wav);
mf = phased.MatchedFilter('Coefficients',mfcoeff,'GainOutputPort', true);
% monopulse
monfeed = phased.MonopulseFeed('SensorArray',ant,'PropagationSpeed',c,...
'OperatingFrequency',fc,'SquintAngle',1);
monest = getMonopulseEstimator(monfeed);
Data Processing
The detections are fed into a tracker, which performs several operations. The tracker maintains a list
of tracks, that is, estimates of target states in the area of interest. If a detection cannot be assigned to
any track already maintained by the tracker, the tracker initiates a new track. In most cases, whether
the new track represents a true target or a false target is unclear. At first, a track is created with a
tentative status. If enough detections are obtained, the track becomes confirmed. Similarly, if no
detections are assigned to a track, the track is coasted (predicted without correction). If the track has
a few missed updates, the tracker deletes the track.
The multifunction radar uses a tracker that associates the detections to the tracks using a global
nearest neighbor (GNN) algorithm.
tracker = trackerGNN('FilterInitializationFcn',@initMPARGNN,...
'ConfirmationThreshold',[2 3], 'DeletionThreshold',5,...
'HasDetectableTrackIDsInput',true,'AssignmentThreshold',100,...
'MaxNumTracks',2,'MaxNumSensors',1);
Group all radar components together in a structure for easier reference in the simulation loop.
mfradar.Tx = tx;
mfradar.Rx = rx;
mfradar.TxAnt = radiator;
mfradar.RxAnt = collector;
mfradar.Wav = wav;
mfradar.RxFeed = monfeed;
mfradar.MF = mf;
mfradar.TVG = tvg;
mfradar.DOA = monest;
mfradar.STV = arraystv;
mfradar.Tracker = tracker;
mfradar.IsTrackerInitialized = false;
This example assumes the radar is stationary at the origin with two targets in its field of view. One
target departs from the radar and is at a distance of around 50 km. The other target approaches the
radar and is 30 km away. Both targets have an RCS of 1 square meter.
ntgt = size(tgtpos,2);
tgtmotion = phased.Platform('InitialPosition',tgtpos,'Velocity',tgtvel);
target = phased.RadarTarget('MeanRCS',tgtrcs*ones(1,ntgt),'OperatingFrequency',fc);
channel = phased.FreeSpace('SampleRate',fs,'TwoWayPropagation',true,'OperatingFrequency',fc);
Group targets and propagation channels together in a structure for easier reference in the simulation
loop.
env.Target = target;
env.TargetMotion = tgtmotion;
env.Channel = channel;
While using one multifunction radar to perform multiple tasks has its advantages, it also has a higher
cost and more sophisticated logic. In general, a radar has finite resources to spend on its tasks. If
resources are used for tracking tasks, then those resources are not available for searching tasks until
the tracking tasks are finished. Because of this resource allocation, a critical component when using a
multifunction radar is resource management.
Search Tasks
The search tasks can be considered as deterministic. In this example, a raster scan is used to cover
the desired airspace. If no other tasks exist, the radar scans the space one angular cell at a time. The
size of an angular cell is determined by the beam width of the antenna array.
Assume the radar scans a space from -30 to 30 degrees azimuth and 0 to 20 degrees elevation.
Calculate the angular search grid using the beam width.
The beam position grid and target scene are shown below.
sceneplot = helperMPARTaskPlot('initialize',scanangles,azscanangles,maxrng,beamw,tgtpos);
The search beams are transmitted one at a time sequentially until the entire search area is covered.
Once the entire search area is covered, the radar repeats the search sequence. The searches are
performed along the azimuth direction, one elevation angle at a time. The search tasks are often
contained in a job queue.
searchq = struct('JobType','Search','BeamDirection',num2cell(scanangles,1),...
'Priority',1000,'WaveformIndex',1);
current_search_idx = 1;
Each job in the queue specifies the job type as well as the pointing direction of the beam. It also
contains a priority value for the job. This priority value is determined by the job type. This example
uses a value of 1000 as the priority for search jobs.
disp(searchq(current_search_idx))
JobType: 'Search'
BeamDirection: [2x1 double]
Priority: 1000
WaveformIndex: 1
Track Tasks
Unlike search tasks, track tasks cannot be planned. Track tasks are created only when a target is
detected by a search task or when the target has already been tracked. Track tasks are dynamic tasks
that get created and executed based on the changing scenario. Similar to search tasks, track tasks
are also managed in a job queue.
trackq(10) = struct('JobType',[],'BeamDirection',[],'Priority',3000,'WaveformIndex',[],...
'Time',[],'Range',[],'TrackID',[]);
num_trackq_items = 0;
disp(trackq(1))
JobType: []
BeamDirection: []
Priority: []
WaveformIndex: []
Time: []
Range: []
TrackID: []
Group search and track queues together in a structure for easier reference in the simulation loop.
jobq.SearchQueue = searchq;
jobq.SearchIndex = current_search_idx;
jobq.TrackQueue = trackq;
jobq.NumTrackJobs = num_trackq_items;
Because a tracking job cannot be initialized before a target is detected, all tracking jobs start as
empty jobs. Once a job is created, it contains information such as its job type, the direction of the
beam, and the time to execute. The tracking task has a priority of 3000, which is higher than the priority
of 1000 for a search job. This higher priority value means that when the time is in conflict, the system
will execute the tracking job first.
The size limit for the queue in this example is set to 10.
Task Scheduling
In this example, for simplicity, the multifunction radar executes only one type of job within a small
time period, often referred to as a dwell, but can switch tasks at the beginning of each dwell. For
each dwell, the radar looks at all tasks that are due for execution and picks the one that has the
highest priority. Consequently, jobs that get postponed will now have an increased priority and are
more likely to be executed in the next dwell.
Simulation
This section of the example simulates a short run of the multifunction radar system. The entire
structure of the multifunction radar simulation is represented by this diagram.
The simulation starts with the radar manager, which provides an initial job. Based on this job, the
radar transmits the waveform, simulates the echo, and applies signal processing to generate the
detection. The detection is processed by a tracker to create tracks for targets. The tracks then go
back to the radar manager. Based on the tracks and the knowledge about the scene, the radar
manager schedules new track jobs and picks the job for the next dwell.
The logic of the radar manager operation is shown in this flowchart and described in these steps.
Assume a dwell is 10 ms. At the beginning of the simulation, the radar is configured to search one
beam at a time.
rng(2018);
current_time = 0;
Npulses = 10;
numdwells = 200;
dwelltime = 0.01;
jobload.num_search_job = zeros(1,numdwells);
jobload.num_track_job = zeros(1,numdwells);
You can run the example in its entirety to see the plots being dynamically updated during execution.
In the top two plots, the color of the beams indicates the type of the current job: red for search,
yellow for confirm, and purple for track. The bottom two plots show the true locations (triangle),
detections (circle), and tracks (square) of the two targets, respectively. A system log is also displayed
in the command line to explain the system behavior at the current moment. Next, the example shows
more details about several critical moments of the simulation.
Simulate the system behavior until it detects the first target. The simulation loop follows the previous
system diagram.
for dwell_idx = 1:14
    % Scheduler to provide current job
    [current_job,jobq] = getCurrentJob(jobq,current_time);
    % Visualization
    helperMPARTaskPlot('update',sceneplot,current_job,maxrng,beamw,tgtpos,allTracks,detection.Measurement);
    % Update time
    tgtpos = env.TargetMotion(dwelltime-Npulses/mfradar.Wav.PRF);
    current_time = current_time+dwelltime;
end
As expected, the radar gets a detection when the radar beam illuminates the target, as shown in the
figure. When this happens, the radar sends a confirmation beam immediately to make sure it is not a
false detection.
Next, show the results for the confirmation job. The rest of this example shows simplified code that
combines the simulation loop into a system simulation function.
[mfradar,env,jobq,jobload,current_time,tgtpos] = MPARSimRun(...
mfradar,env,jobq,jobload,current_time,dwelltime,sceneplot,maxrng,beamw,tgtpos,15,15);
The figure now shows the confirmation beam. Once the detection is confirmed, a track is established
for the target, and a track job is scheduled to execute after a short interval.
This process repeats for every detected target until the revisit time, at which point the multifunction
radar stops the search sequence and performs the track task again.
[mfradar,env,jobq,jobload,current_time,tgtpos] = MPARSimRun(...
mfradar,env,jobq,jobload,current_time,dwelltime,sceneplot,maxrng,beamw,tgtpos,16,25);
The results show that the simulation stops at a track beam. The zoomed-in figures around the two
targets show how the tracks are updated based on the detection and measurements. A new track job
for the next revisit is also added to the job queue during the execution of a track job.
This process repeats for each dwell. This simulation runs the radar system for a 2-second period.
After a while, the second target is detected beyond 50 km. Based on this information, the radar
manager reduces how often the system needs to track the second target. This reduction frees up
resources for other, more urgent needs.
[mfradar,env,jobq,jobload,current_time,tgtpos] = MPARSimRun(...
mfradar,env,jobq,jobload,current_time,dwelltime,sceneplot,maxrng,beamw,tgtpos,26,numdwells);
This section of the example shows how the radar resource is distributed among different tasks. This
figure shows how the multifunction radar system in this example distributes its resources between
search and track.
L = 10;
searchpercent = sum(buffer(jobload.num_search_job,L,L-1,'nodelay'))/L;
trackpercent = sum(buffer(jobload.num_track_job,L,L-1,'nodelay'))/L;
figure;
plot((1:numel(searchpercent))*L*dwelltime,[searchpercent(:) trackpercent(:)]);
xlabel('Time (s)');
ylabel('Job Percentage');
title('Resource Distribution between Search and Track');
legend('Search','Track','Location','best');
grid on;
The figure suggests that, at the beginning of the simulation, all resources are spent on search. Once
the targets are detected, the radar resources get split into 80% and 20% between search and track,
respectively. However, once the second target gets farther away, more resources are freed up for
search. The track load increases briefly when the time arrives to track the second target again.
Summary
This example introduces the concept of resource management and task scheduling in a
multifunction phased array radar system. It shows that, with the resource management component,
the radar acts as a closed loop system. Although the multifunction radar in this example deals only
with search and track tasks, the concept can be extended to more realistic situations where other
functions, such as self-check and communication, are also involved.
References
[1] Weinstock, Walter. "Computer Control of a Multifunction Radar." Practical Phased Array Antenna
Systems. Lex Book, 1997.
Appendices
getCurrentJob
The function getCurrentJob compares the jobs in the search queue and the track queue and selects
the job with the highest priority to execute.
searchq = jobq.SearchQueue;
trackq = jobq.TrackQueue;
searchidx = jobq.SearchIndex;
num_trackq_items = jobq.NumTrackJobs;
% Find the track job that is due and has the highest priority
readyidx = find([trackq(1:num_trackq_items).Time]<=current_time);
[~,maxpidx] = max([trackq(readyidx).Priority]);
taskqidx = readyidx(maxpidx);
% If the track job found has a higher priority, use that as the current job
% and increase the next search job priority since it gets postponed.
% Otherwise, the next search job due is the current job.
if ~isempty(taskqidx) && trackq(taskqidx).Priority >= searchq(searchidx).Priority
    currentjob = trackq(taskqidx);
    for m = taskqidx+1:num_trackq_items
        trackq(m-1) = trackq(m);
    end
    num_trackq_items = num_trackq_items-1;
    searchq(searchidx).Priority = searchq(searchidx).Priority+100;
else
    currentjob = searchq(searchidx);
    searchidx = searchidx+1;
end
jobq.SearchQueue = searchq;
jobq.SearchIndex = searchidx;
jobq.TrackQueue = trackq;
jobq.NumTrackJobs = num_trackq_items;
generateEcho
The function generateEcho simulates the complex (I/Q) baseband representation of the target echo
received at the radar.
% Radar position
radarpos = [0;0;0];
radarvel = [0;0;0];
for m = 1:Npulses
    % Waveform
    x = mfradar.Wav();
    % Transmit
    [xt,inuseflag] = mfradar.Tx(x);
    w = mfradar.STV(fc,current_job.BeamDirection);
    xt = mfradar.TxAnt(xt,tgtang,conj(w));
    % Propagation
    xp = env.Channel(xt,radarpos,tgtpos,radarvel,tgtvel);
    xp = env.Target(xp);
    % Pulse integration
    if m == 1
        xrsint = mfradar.Rx(xrs,~(inuseflag>0));
        xrdazint = mfradar.Rx(xrdaz,~(inuseflag>0));
        xrdelint = mfradar.Rx(xrdel,~(inuseflag>0));
    else
        xrsint = xrsint+mfradar.Rx(xrs,~(inuseflag>0));
        xrdazint = xrdazint+mfradar.Rx(xrdaz,~(inuseflag>0));
        xrdelint = xrdelint+mfradar.Rx(xrdel,~(inuseflag>0));
    end
end
generateDetection
The function generateDetection applies signal processing techniques on the echo to generate
target detection.
tgrid = unigrid(0,1/mfradar.Wav.SampleRate,1/mfradar.Wav.PRF,'[)');
rgates = mfradar.TxAnt.PropagationSpeed*tgrid/2;
updateTrackAndJob
The function updateTrackAndJob tracks the detection and then passes tracks to the radar manager
to update the track task queue.
trackq = jobq.TrackQueue;
num_trackq_items = jobq.NumTrackJobs;
allTracks = [];
end
case 'Confirm'
% For confirm job, if the detection is confirmed, establish a track
% and create a track job corresponding to the revisit time
if ~isempty(detection)
trackid = current_job.TrackID;
[~,~,allTracks] = mfradar.Tracker(detection,current_time,trackid);
rng_est = detection.Measurement(3);
if rng_est >= 50e3
updateinterval = 0.5;
else
updateinterval = 0.1;
end
revisit_time = current_time+updateinterval;
predictedTrack = predictTracksToTime(mfradar.Tracker,trackid,revisit_time);
xpred = predictedTrack.State([1 3 5]);
[phipred,thetapred,rpred] = cart2sph(xpred(1),xpred(2),xpred(3));
num_trackq_items = num_trackq_items+1;
trackq(num_trackq_items) = struct('JobType','Track','Priority',3000,...
    'BeamDirection',rad2deg([phipred;thetapred]),'WaveformIndex',1,'Time',revisit_time,...
    'Range',rpred,'TrackID',trackid);
if current_time < 0.3 || strcmp(current_job.JobType,'Track')
fprintf('\tCreated track %d at %f m',trackid,rng_est);
end
else
allTracks = [];
end
case 'Track'
% For track job, if there is a detection, update the track and
% schedule a track job corresponding to the revisit time. If there
% is no detection, predict and schedule a track job sooner so the
% target is not lost.
if ~isempty(detection)
trackid = current_job.TrackID;
[~,~,allTracks] = mfradar.Tracker(detection,current_time,trackid);
rng_est = detection.Measurement(3);
if rng_est >= 50e3
updateinterval = 0.5;
else
updateinterval = 0.1;
end
revisit_time = current_time+updateinterval;
predictedTrack = predictTracksToTime(mfradar.Tracker,trackid,revisit_time);
xpred = predictedTrack.State([1 3 5]);
[phipred,thetapred,rpred] = cart2sph(xpred(1),xpred(2),xpred(3));
num_trackq_items = num_trackq_items+1;
trackq(num_trackq_items) = struct('JobType','Track','Priority',3000,...
    'BeamDirection',rad2deg([phipred;thetapred]),'WaveformIndex',1,'Time',revisit_time,...
    'Range',rpred,'TrackID',trackid);
end
else
trackid = current_job.TrackID;
[~,~,allTracks] = mfradar.Tracker(detection,current_time,trackid);
[phipred,thetapred,rpred] = cart2sph(xpred(1),xpred(2),xpred(3));
num_trackq_items = num_trackq_items+1;
trackq(num_trackq_items) = struct('JobType','Track','Priority',3000,...
    'BeamDirection',rad2deg([phipred;thetapred]),'WaveformIndex',1,'Time',revisit_time,...
    'Range',rpred,'TrackID',trackid);
end
jobq.TrackQueue = trackq;
jobq.NumTrackJobs = num_trackq_items;
SAR generates a two-dimensional (2-D) image. The direction of flight is referred to as the cross-range
or azimuth direction. The direction of the antenna boresight (broadside), orthogonal to the flight
path, is referred to as the cross-track or range direction. These two directions provide the basis for
the dimensions required to generate an image of the area within the antenna beamwidth over the
duration of the data collection window. The cross-track direction is the direction in which pulses are
transmitted, and it provides the slant range to the targets along the flight path. The energy received
after reflection off the targets for each pulse must be processed for range measurement and
resolution. The cross-range or azimuth direction is the direction of the flight path, and the ensemble
of pulses received over the entire flight path is processed in this direction to achieve the required
measurement and resolution. Correct focusing in both directions implies successful image generation
in the range and cross-range directions. The antenna beamwidth must be wide enough that the beam
illuminates a target for a long duration as the platform moves along its trajectory, because this
provides more phase information. The key terms frequently encountered when working with SAR are:
1 Cross-range (azimuth): this parameter defines the range along the flight path of the radar
platform.
2 Range: this parameter defines the range orthogonal to the flight path of the radar platform.
3 Fast-time: this parameter defines the time duration for operation of each pulse.
4 Slow-time: this parameter defines the cross-range time information. The slow time typically
defines the time instances at which the pulses are transmitted along the flight path.
Stripmap Synthetic Aperture Radar (SAR) Image Formation
Radar Configuration
Consider a SAR system operating in C-band with a 4 GHz carrier frequency and a signal bandwidth of
50 MHz. This bandwidth yields a range resolution of 3 meters. The radar system collects data
orthogonal to the direction of motion of the platform, as shown in the figure above. The received
signal is a delayed replica of the transmitted signal, where the delay corresponds in general to the
slant range between the target and the platform. For a SAR system, the slant range varies over time
as the platform traverses a path orthogonal to the direction of the antenna beam. The section below
focuses on defining the parameters of the transmitted waveform. The LFM sweep bandwidth can be
decided based on the desired range resolution.
The signal bandwidth is a parameter derived from the desired range resolution.
bw = c/(2*rangeResolution);
In a SAR system, the PRF has dual implications. The PRF not only determines the maximum
unambiguous range but also serves as the sampling frequency in the cross-range direction. If the PRF
is too low, the cross-range (slow-time) sampling rate is insufficient and the azimuth Doppler spectrum
is aliased. If the PRF is too high, the cross-range sampling requirement is met, but at the cost of a
reduced maximum unambiguous range. Therefore, the PRF must be greater than the Doppler
bandwidth of the azimuth signal and must also satisfy the criteria for maximum unambiguous range.
prf = 1000;
aperture = 4;
tpd = 3*10^-6;
fs = 120*10^6;
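As a sanity check on the chosen PRF, the corresponding maximum unambiguous range is far beyond the 2.5 km range swath imaged in this example (the variable name maxUnambRange is used here only for illustration):

```matlab
% Maximum unambiguous range implied by the 1 kHz PRF
maxUnambRange = c/(2*prf)
```

This returns 150000 meters, so the range ambiguity constraint is easily satisfied and the PRF choice is driven by the cross-range sampling requirement.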
Assume the speed of aircraft is 100 m/s with a flight duration of 4 seconds.
speed = 100;
flightDuration = 4;
radarPlatform = phased.Platform('InitialPosition', [0;-200;500], 'Velocity', [0; speed; 0]);
slowTime = 1/prf;
numpulses = flightDuration/slowTime +1;
maxRange = 2500;
truncrangesamples = ceil((2*maxRange/c)*fs);
fastTime = (0:1/fs:(truncrangesamples-1)/fs);
% Set the reference range for the cross-range processing.
Rc = 1000;
Configure the SAR transmitter and receiver. The antenna looks in the broadside direction orthogonal
to the flight direction.
Scene Configuration
In this example, three static point targets are configured at the locations specified below. All targets
have a mean RCS value of 1 square meter.
% Plot the ground-truth target locations.
figure(1);h = axes;plot(targetpos(2,1),targetpos(1,1),'*g');hold all;plot(targetpos(2,2),targetpos(1,2),'*r');plot(targetpos(2,3),targetpos(1,3),'*b');hold off;
set(h,'Ydir','reverse');xlim([-10 10]);ylim([700 1500]);
title('Ground Truth');ylabel('Range');xlabel('Cross-Range');
The following section describes how the system operates based on the above configuration.
Specifically, it shows how the data collection is performed for a SAR platform. As the platform moves
in the cross-range direction, pulses are transmitted and received in directions orthogonal to the flight
path. A collection of pulses gives the phase history of the targets lying in the illumination region as
the platform moves. The longer a target stays in the illumination region, the better the cross-range
resolution for the entire image, because the process of range and cross-range focusing is generalized
for the entire scene.
sig = waveform();
% Use only the pulse length that will cover the targets.
sig = sig(1:truncrangesamples);
end
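The listing above is abbreviated. A sketch of what the full per-pulse data collection loop can look like is shown below; the object names target, pointTargets, transmitter, radiator, collector, receiver, channel, and the output matrix rxsig are assumptions about parts of the example not reproduced here:

```matlab
% Sketch of the SAR data collection loop (object names are assumed).
rxsig = zeros(truncrangesamples,numpulses);
for ii = 1:numpulses
    % Advance the radar platform and targets by one slow-time step.
    [radarpos,radarvel] = radarPlatform(slowTime);
    [targetpos,targetvel] = pointTargets(slowTime);
    [~,targetangle] = rangeangle(targetpos,radarpos);
    % Generate one pulse, truncated to the range swath of interest.
    sig = waveform();
    sig = sig(1:truncrangesamples);
    % Transmit, propagate to the targets and back, and receive.
    txsig = transmitter(sig);
    txsig = radiator(txsig,targetangle);
    txsig = channel(txsig,radarpos,targetpos,radarvel,targetvel);
    txsig = target(txsig);
    txsig = collector(txsig,targetangle);
    rxsig(:,ii) = receiver(txsig);
end
```

Each column of rxsig then holds the fast-time samples of one pulse, so the matrix as a whole is the phase history that the range and azimuth compression steps operate on.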
The received signal can now be visualized as a collection of multiple pulses transmitted in the cross-
range direction. The plots show the real part of the signal for the three targets. The range and cross-
range chirps can be seen clearly. The target responses overlap because the pulse width is kept long
to maintain average power.
Each row of the received signal, which contains all the information from each pulse, can be matched
filtered to obtain the dechirped, range-compressed signal.
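One way to perform this range compression on all pulses at once is with phased.RangeResponse; this is a sketch, and it assumes rxsig holds the collected pulses from the data collection step. The outputs cdata and rnggrid are the names used by the azimuth compression step later in this example:

```matlab
% Sketch: matched-filter (range) compression of the collected pulses.
pulseCompression = phased.RangeResponse('RangeMethod','Matched filter',...
    'PropagationSpeed',c,'SampleRate',fs);
matchingCoeff = getMatchedFilter(waveform);
[cdata,rnggrid] = pulseCompression(rxsig,matchingCoeff);
```

The matched-filter coefficients come directly from the transmitted waveform, so each fast-time column is correlated against the known chirp, which focuses the energy of each target into its range bin.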
The figure below shows the response after matched filtering has been performed on the received
signal. The phase histories of the three targets are clearly visible along the cross-range direction and
range focusing has been achieved.
Azimuth Compression
Once range compression has been achieved, multiple techniques can process the cross-range data
and obtain the final image from the raw SAR data. In essence, range compression provides resolution
in the fast-time or range direction, while resolution in the cross-range direction is achieved by
azimuth or cross-range compression. Two such techniques, the range migration algorithm and the
back-projection algorithm, are demonstrated in this example.
rma_processed = helperRangeMigration(cdata,fastTime,fc,fs,prf,speed,numpulses,c,Rc);
bpa_processed = helperBackProjection(cdata,rnggrid,fastTime,fc,fs,prf,speed,crossRangeResolution,c);
Plot the focused SAR image using the range migration algorithm and the approximate back-projection algorithm. Only a section of the image formed via the range migration algorithm is shown, to pinpoint the locations of the targets.
The range migration algorithm and the exact form of the back-projection algorithm, as shown in [2] and [3], provide the theoretical resolution in both the cross-track and along-track directions. Because the back-projection used here is of the approximate form, a spread in the azimuth direction is evident in the back-projection result, whereas the data processed via the range migration algorithm achieves the theoretical resolution.
figure(1);
imagesc((abs((rma_processed(1700:2300,600:1400).'))));
title('SAR Data focused using Range Migration algorithm ')
xlabel('Cross-Range Samples')
ylabel('Range Samples')
figure(2)
imagesc((abs(bpa_processed(600:1400,1700:2300))));
title('SAR Data focused using Back-Projection algorithm ')
xlabel('Cross-Range Samples')
ylabel('Range Samples')
Summary
This example shows how to develop SAR processing using an LFM signal in an airborne data collection scenario. The example also shows how to form an image from the received signal via the range migration algorithm and an approximate form of the back-projection algorithm.
References
1 Cafforio, C., Prati, C. and Rocca, F., 1991. SAR data focusing using seismic migration techniques.
IEEE transactions on aerospace and electronic systems, 27(2), pp.194-207.
2 Cumming, I., Bennett, J., 1979. Digital Processing of Seasat SAR data. IEEE International
Conference on Acoustics, Speech, and Signal Processing.
3 Na Y., Lu Y., Sun H., A Comparison of Back-Projection and Range Migration Algorithms for Ultra-Wideband SAR Imaging, Fourth IEEE Workshop on Sensor Array and Multichannel Processing, Waltham, MA, 2006, pp. 320-324.
4 Yegulalp, A.F., 1999. Fast backprojection algorithm for synthetic aperture radar. Proceedings of
the 1999 IEEE Radar Conference.
Appendix
This function demonstrates the range migration algorithm for imaging a side-looking synthetic aperture radar. The pulse-compressed synthetic aperture data is the input to this algorithm.
frequencyRange = linspace(fc-fs/2,fc+fs/2,length(fastTime));
krange = 2*(2*pi*frequencyRange)/c;
kaz = 2*pi*linspace(-prf/2,prf/2,numPulses)./speed;
Generate a matrix of the cross-range wavenumbers to match the size of the received two-dimensional SAR signal.
kazimuth = kaz.';
kx = krange.^2-kazimuth.^2;
sdata = fftshift(fft(fftshift(fft(sigData,[],1),1),[],2),2);
Perform bulk compression to get the azimuth compression at the reference range. Perform filtering of
the 2-D FFT signal with the new cross-range wavenumber to achieve complete focusing at the
reference range and as a by-product, partial focusing of targets not lying at the reference range.
fsmPol = (sdata.').*kFinal;
Perform Stolt interpolation to achieve focusing for targets that do not lie at the reference range.
stoltPol = fsmPol;
for i = 1:size((fsmPol),1)
stoltPol(i,:) = interp1(kx(i,:),fsmPol(i,:),krange(1,:));
end
stoltPol(isnan(stoltPol)) = 1e-30;
stoltPol = stoltPol.*exp(-1i*krange.*Rc);
azcompresseddata = ifft2(stoltPol);
end
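The Stolt interpolation loop above resamples each row of the 2-D spectrum from a warped wavenumber axis onto the uniform range-wavenumber grid. A Python/NumPy sketch of the same per-row resampling (with hypothetical sizes and a random spectrum in place of the real data) is:

```python
import numpy as np

# Hypothetical sizes: 8 pulses (slow time), 64 range-wavenumber bins.
num_pulses, num_bins = 8, 64
kr = np.linspace(100.0, 140.0, num_bins)   # uniform range wavenumbers
ka = np.linspace(-30.0, 30.0, num_pulses)  # azimuth wavenumbers

# Stand-in for the bulk-compressed 2-D spectrum.
fsm = np.random.randn(num_pulses, num_bins) + 1j * np.random.randn(num_pulses, num_bins)

stolt = np.empty_like(fsm)
for i in range(num_pulses):
    # Warped axis for this row: kx = sqrt(kr^2 - ka^2).
    kx = np.sqrt(np.maximum(kr**2 - ka[i]**2, 0.0))
    # Resample real and imaginary parts separately onto the uniform kr
    # grid; out-of-range points get a negligible fill value, analogous to
    # the NaN replacement with 1e-30 in the MATLAB helper above.
    re = np.interp(kr, kx, fsm[i].real, left=1e-30, right=1e-30)
    im = np.interp(kr, kx, fsm[i].imag, left=1e-30, right=1e-30)
    stolt[i] = re + 1j * im

image = np.fft.ifft2(stolt)  # 2-D inverse FFT focuses the scene
print(image.shape)
```

After the resampling, a single 2-D inverse FFT returns the focused image, which is why the Stolt step is the heart of the range migration algorithm.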
Back-Projection Algorithm
This function demonstrates the time-domain back-projection algorithm for imaging a side-looking synthetic aperture radar. The pulse-compressed synthetic aperture data is taken as the input to this algorithm. Initialize the output matrix.
data = zeros(size(sigdata));
azimuthDist = -200:speed/prf:200; % Azimuth distance
Limit the range and cross-range pixels being processed to reduce processing time.
crossrangeIdxStart = find(azimuthDist>crossrangelims(1),1);
crossrangeIdxStop = find(azimuthDist<crossrangelims(2),1,'last');
for i= rangeIdx(1):rangeIdx(2) % Iterate over the range indices
end
end
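The structure of the time-domain back-projection loop can be sketched in Python. This is an illustrative toy (a single point target with hypothetical geometry and sampling), not the helperBackProjection code:

```python
import numpy as np

def backproject(rc_data, fast_time, plat_pos, img_x, img_y, fc, c=3e8):
    """Time-domain back-projection over a small image grid.

    rc_data   : range-compressed data, one row per pulse
    fast_time : fast-time axis of each pulse (s)
    plat_pos  : platform (x, y) position at each pulse (m)
    img_x/y   : pixel coordinates (m)
    For every pixel, the two-way delay to each pulse's platform position
    is computed, the matching range sample is picked, phase-compensated,
    and the contributions are summed coherently.
    """
    img = np.zeros((len(img_y), len(img_x)), dtype=complex)
    dt = fast_time[1] - fast_time[0]
    for p, (px, py) in enumerate(plat_pos):
        for iy, y in enumerate(img_y):
            for ix, x in enumerate(img_x):
                tau = 2.0 * np.hypot(x - px, y - py) / c   # two-way delay
                k = int(round((tau - fast_time[0]) / dt))
                if 0 <= k < rc_data.shape[1]:
                    img[iy, ix] += rc_data[p, k] * np.exp(1j * 2 * np.pi * fc * tau)
    return img

# Toy scene: one point target at (0, 1000) m, three platform positions.
c, fc = 3e8, 1e9
fast_time = np.arange(6e-6, 8e-6, 1e-8)
plat = [(-50.0, 0.0), (0.0, 0.0), (50.0, 0.0)]
data = np.zeros((3, len(fast_time)), dtype=complex)
for p, (px, py) in enumerate(plat):
    tau = 2 * np.hypot(0 - px, 1000 - py) / c
    k = int(round((tau - fast_time[0]) / 1e-8))
    data[p, k] = np.exp(-1j * 2 * np.pi * fc * tau)   # ideal compressed echo
img = backproject(data, fast_time, plat,
                  np.linspace(-20, 20, 5), np.linspace(980, 1020, 5), fc)
iy, ix = np.unravel_index(np.argmax(np.abs(img)), img.shape)
print(iy, ix)  # brightest pixel at the target location: (2, 2)
```

The triple loop makes the cost of exact back-projection obvious, which is why the example limits the range and cross-range pixels being processed and why fast approximate variants such as [4] exist.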
Planning Radar Network Coverage over Terrain
Import DTED-format terrain data for a region around Boulder, Colorado, US. The terrain file was
downloaded from the "SRTM Void Filled" data set available from the United States Geological Survey
(USGS). The file is DTED level-1 format and has a sampling resolution of about 90 meters. A single
DTED file defines a region that spans 1 degree in both latitude and longitude.
dtedfile = "n39_w106_3arc_v2.dt1";
attribution = "SRTM 3 arc-second resolution. Data available from the U.S. Geological Survey.";
addCustomTerrain("southboulder",dtedfile, ...
"Attribution",attribution)
Open Site Viewer using the imported terrain. Visualization with high-resolution satellite map imagery
requires an Internet connection.
viewer = siteviewer("Terrain","southboulder");
The region contains mountains to the west and flatter areas to the east. Radars will be placed in the
flat area to detect targets over the mountainous region. Define five candidate locations for placing the
radars and show them on the map. The candidate locations are chosen to correspond to local high
points on the map outside of residential areas.
Create collocated transmitter and receiver sites at each location to model monostatic radars, where
the radar antennas are assumed to be 10 meters above ground level.
Zoom and rotate the map to view the 3-D terrain around the candidate radar sites. Select a site to
view the location, antenna height, and ground elevation.
Design a basic monostatic pulse radar system to detect non-fluctuating targets with 0.1 square meter
radar cross section (RCS) at a distance up to 35000 meters from the radar with a range resolution of
5 meters. The desired performance index is a probability of detection (Pd) of 0.9 and probability of
false alarm (Pfa) below 1e-6. The radars are assumed to be rotatable and support the same antenna
gain in all directions, where the antenna gain corresponds to a highly directional antenna array.
Use pulse integration to reduce the required SNR at the radar receiver. Use 10 pulses and compute
the SNR required to detect a target.
numpulses = 10;
snrthreshold = albersheim(pd, pfa, numpulses); % Unit: dB
disp(snrthreshold);
4.9904
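The albersheim function evaluates Albersheim's closed-form approximation. A self-contained sketch of that formula (the standard approximation, valid roughly for Pfa between 1e-7 and 1e-3 and Pd between 0.1 and 0.9) reproduces the threshold:

```python
import math

def albersheim_snr_db(pd, pfa, n=1):
    """Albersheim's equation: approximate per-pulse SNR (dB) required by a
    noncoherent detector integrating n pulses, for a nonfluctuating target
    and the given detection/false-alarm probabilities."""
    a = math.log(0.62 / pfa)
    b = math.log(pd / (1.0 - pd))
    return (-5.0 * math.log10(n)
            + (6.2 + 4.54 / math.sqrt(n + 0.44))
            * math.log10(a + 0.12 * a * b + 1.7 * b))

snr = albersheim_snr_db(0.9, 1e-6, 10)
print(round(snr, 4))  # 4.9904, matching the toolbox output above
```

Integrating 10 pulses lowers the required SNR by several dB relative to single-pulse detection, which directly reduces the peak power computed next.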
Define radar center frequency and antenna gain, assuming a highly directional antenna array.
fc = 10e9; % Transmitter frequency: 10 GHz
antgain = 38; % Antenna gain: 38 dB
c = physconst('LightSpeed');
lambda = c/fc;
Calculate the required peak pulse power (Watts) of the radar transmitter using the radar equation.
pulsebw = c/(2*rangeres);
pulsewidth = 1/pulsebw;
Ptx = radareqpow(lambda,maxrange,snrthreshold,pulsewidth,...
'RCS',tgtrcs,'Gain',antgain);
disp(Ptx)
3.1521e+05
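The peak power can be cross-checked against the textbook monostatic radar range equation. The sketch below assumes a 290 K system noise temperature and no additional losses (the documented radareqpow defaults; stated here as an assumption rather than taken from this page):

```python
import math

def radar_eq_peak_power(wavelength, max_range, snr_db, pulse_width,
                        rcs=1.0, gain_db=0.0, ts=290.0):
    """Peak transmit power (W) from the monostatic radar range equation:
       Pt = SNR * (4*pi)^3 * k * Ts * R^4 / (tau * G^2 * lambda^2 * sigma)
    assuming no additional system losses."""
    k = 1.380649e-23  # Boltzmann constant (J/K)
    snr = 10.0 ** (snr_db / 10.0)
    g = 10.0 ** (gain_db / 10.0)
    return (snr * (4.0 * math.pi) ** 3 * k * ts * max_range ** 4
            / (pulse_width * g ** 2 * wavelength ** 2 * rcs))

c = 299792458.0
fc = 10e9
pulse_width = 1.0 / (c / (2 * 5.0))  # 5 m range resolution
ptx = radar_eq_peak_power(c / fc, 35000.0, 4.9904, pulse_width,
                          rcs=0.1, gain_db=38.0)
print(f"{ptx:.4e}")  # about 3.15e+05 W, matching the toolbox result above
```

Note the R^4 dependence: doubling the design range would require sixteen times the peak power, all else equal.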
Define a grid containing 2500 locations to represent the geographic range of positions for a moving
target in the region of interest. The region of interest spans 0.5 degrees in both latitude and
longitude and includes mountains to the west as well as some of the area around the radar sites. The
goal is to detect targets that are in the mountainous region to the west.
% Define region of interest
latlims = [39.5 40];
lonlims = [-105.6 -105.1];
Compute the minimum, maximum, and mean ground elevation for the target locations.
% Create temporary array of sites corresponding to target locations and query terrain
Z = elevation(txsite("Latitude",tgtlats,"Longitude",tgtlons));
[Zmin, Zmax] = bounds(Z);
Zmean = mean(Z);
Target altitude can be defined with reference to mean sea level or to ground level. Use ground level
as the reference and define a target altitude of 500 meters.
% Target altitude above ground level (m)
tgtalt = 500;
The radar equation includes free space path loss and has a parameter for additional losses. Use a terrain propagation model to predict the additional path loss over terrain. Use the Terrain Integrated Rough Earth Model™ (TIREM™) from Alion Science if it is available; otherwise, use the Longley-Rice (also known as ITM) model. TIREM™ supports frequencies up to 1000 GHz, whereas Longley-Rice is valid up to 20 GHz. Compute the total additional loss, including propagation from the radar to the target and then back from the target to the receiver.
% Create a terrain propagation model, using TIREM or Longley-Rice
tiremloc = tiremSetup;
if ~isempty(tiremloc)
pm = propagationModel('tirem');
else
pm = propagationModel('longley-rice');
end
% Compute additional path loss due to terrain and return distances between radars and targets
[L, ds] = helperPathlossOverTerrain(pm, rdrtxs, rdrrxs, tgtlats, tgtlons, tgtalt);
Use the radar equation to compute the SNR at each radar receiver for the signal reflected from each
target.
A target is detected if the radar receiver SNR exceeds the SNR threshold computed above. Consider
all combinations of radar sites and select the three sites that produce the highest number of
detections. Compute the SNR data as the best SNR available at the receiver of any of the selected
radar sites.
Display radar coverage showing the area where the SNR meets the required threshold to detect a
target. The three radar sites selected for best coverage are shown using red markers.
The coverage map shows straight edges on the north, east, and south sides corresponding to the
limits of the region of interest. The coverage map assumes that the radars can rotate and produce the
same antenna gain in all directions and that the radars can transmit and receive simultaneously so
that there is no minimum coverage range.
The coverage map has jagged portions on the western edge where the coverage areas are limited by
terrain effects. A smooth portion of the western edge appears where the coverage is limited by the
design range of the radar system, which is 35000 meters.
The analysis above optimized radar transmitter power and site locations based on a system that
integrates 10 pulses. Now investigate the impact on radar coverage for different modes of operation
of the system, where the number of pulses to integrate is varied. Compute the SNR thresholds
required to detect a target for varying number of pulses.
ylabel("SNR (dB)")
grid on;
Show radar coverage map for SNR thresholds corresponding to a few different numbers of pulses to
integrate. Increasing the number of pulses to integrate decreases the required SNR and therefore
produces a larger coverage region.
colors = jet(4);
colors(4,:) = [0 1 0];
contour(rdrData, ...
"Levels",snrthresholds([1 2 5 10]), ...
"Colors",colors, ...
"LegendTitle",legendTitle)
Update the scenario so that target positions are 250 meters above ground level instead of 500 meters above ground level. Rerun the same analysis as above to select the three best radar sites and visualize coverage. The new coverage map shows that reducing the altitude of the targets decreases their visibility, and therefore decreases the coverage area.
% Target altitude above ground (m)
tgtalt = 250;
[L, ds] = helperPathlossOverTerrain(pm, rdrtxs, rdrrxs, tgtlats, tgtlons, tgtalt);
end
contour(rdrData, ...
"Levels",snrthresholds([1 2 5 10]), ...
"Colors",colors, ...
"LegendTitle",legendTitle)
Conclusion
A monostatic radar system was designed to detect non-fluctuating targets with 0.1 square meter radar cross section (RCS) at a distance up to 35000 meters. Radar sites were selected among five candidate sites to optimize the number of detections over a region of interest. Two target altitudes were considered: 500 meters above ground level, and 250 meters above ground level. The coverage maps suggest the importance of line-of-sight visibility between the radar and target in order to achieve detection. The second scenario results in targets that are closer to ground level and therefore more
likely to be blocked from line-of-sight visibility with a radar. This can be seen by rotating the map to
view terrain, where non-coverage areas are typically located in shadow regions of the mountains.
Clean up by closing Site Viewer and removing the imported terrain data.
close(viewer)
removeCustomTerrain("southboulder")
802.11ad Single Carrier Link with RF Beamforming in Simulink
Introduction
This model simulates an 802.11ad single carrier (SC) [ 1 ] link with RF beamforming. Multiple packets are transmitted through free space, then RF beamformed and demodulated, and the PLCP service data units (PSDUs) are recovered. The PSDUs are compared with those transmitted to determine the packet error rate. The receiver performs packet detection, timing synchronization, carrier frequency offset correction, and unique-word-based phase tracking.
The MATLAB Function block allows Simulink models to use MATLAB® functions. In this example, an 802.11ad SC link modeled in Simulink calls WLAN Toolbox functions through MATLAB Function blocks. For an 802.11ad baseband simulation in MATLAB, see the example “802.11ad Packet Error Rate Single Carrier PHY Simulation with TGay Channel” (WLAN Toolbox).
System Architecture
The system diagnostics include the equalized constellation display and the measured packet error rate.
The following sections describe the transmitter and receiver in more detail.
Baseband Transmitter
The baseband transmitter block creates a random PSDU and encodes the bits to create a single
packet waveform based on the MCS and PSDU length values in the Model Parameters block. The
packet generator block uses the function wlanWaveformGenerator (WLAN Toolbox) to encode a
packet.
RF Receiver
The RF receiver consists of amplifiers, phase shifters, and a 16:1 Wilkinson combiner, and is implemented as a superheterodyne receiver.
The phase shift applied to each element is calculated based on the beamforming direction, which is provided by the user and indicates the direction of the main beam. The receiver maximizes the SNR when its main beam points toward the transmitter. The transmitter is omnidirectional, and the
receiver direction (az, el) indicates the direction of the incident signal. The figure shows a scenario where the receiver direction and the beamforming direction differ. In this case, the received signal power is reduced, leading to a high packet error rate (PER) and error vector magnitude (EVM). The Results section shows these values.
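To see why a mismatch between the beamforming direction and the incident direction reduces received power, consider a NumPy sketch with a hypothetical 16-element half-wavelength uniform linear array (the actual array in the model may differ):

```python
import numpy as np

def ula_steervec(n, angle_deg):
    """Steering vector of an n-element, half-wavelength-spaced ULA (azimuth)."""
    psi = np.pi * np.sin(np.radians(angle_deg))  # phase step between elements
    return np.exp(1j * psi * np.arange(n))

n = 16
w = ula_steervec(n, 0.0) / n                        # weights: main beam at boresight
aligned = abs(np.vdot(w, ula_steervec(n, 0.0)))     # response toward the transmitter
mismatched = abs(np.vdot(w, ula_steervec(n, 20.0))) # signal arriving 20 deg off
loss_db = 20 * np.log10(mismatched / aligned)
print(round(loss_db, 1))  # pointing 20 deg off boresight costs roughly 20 dB here
```

A loss of this size at the beamformer output translates directly into degraded EVM and a higher PER at the demodulator.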
Baseband Receiver
The baseband receiver has two components: packet detection and packet recovery.
If a packet is detected, the packet recovery subsystem is enabled to process the detected packet.
In the packet decoder subsystem, the SC data field is extracted from the synchronized received
waveform. Then, the PSDU is recovered using the extracted field, channel, and noise power
estimates.
Results
Running the simulation displays the packet error rate. The model updates the PER after processing
each packet. The model also displays the equalized symbol constellation along with the EVM
measurement. Note that for statistically valid results, long simulation times are required.
By default, the main beam of the receive antenna array points towards the direction: azimuth = 0
deg. and elevation = 0 deg.
If you change the Receiver direction value in the receive antenna array toward a direction near a null in the array radiation pattern, the EVM increases and the packets cannot be decoded successfully.
If you change the Beamforming direction value in the RF receiver such that the main beam
points towards the transmitter, the EVM improves and packets are successfully decoded.
• Try changing the signal to noise ratio (SNR) value in the Model Parameters block. Increasing the SNR leads to lower packet error rates and improved EVM of the equalized symbol constellation. The SNR
specified is the signal to noise ratio at the input to the ADC when a single receive chain is used. The SNR accounts for free space path loss, thermal noise, and the noise figure of RF components.
• You can change the array geometry and the number of elements in the array in the receive antenna array block. Increasing the number of antenna elements improves the EVM. The diversity gain due to the receive antenna array can be observed in the equalized symbol constellation.
Appendix
• dmgCFOEstimate.m
• dmgPacketDetect.m
• dmgSingleCarrierFDE.m
• dmgSTFNoiseEstimate.m
• dmgTimingAndChannelEstimate.m
• dmgUniqueWordPhaseTracking.m
• helperFrequencyOffset.m
Selected Bibliography
1 IEEE Std 802.11ad™-2012 IEEE Standard for Information technology - Telecommunications and
information exchange between systems - Local and metropolitan area networks - Specific
requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY)
Specifications. Amendment 3: Enhancements for Very High Throughput in the 60 GHz Band.
802.11ad Waveform Generation with Beamforming
Introduction
IEEE 802.11ad [ 1 ] defines the directional multi-gigabit (DMG) transmission format operating at 60
GHz. To overcome the large path loss experienced at 60 GHz, the IEEE 802.11ad standard is
designed to support directional beamforming. By using phased antenna arrays you can apply an
antenna weight vector (AWV) to focus the antenna pattern in the desired direction. Each packet is
transmitted on all array elements, but the AWV applies a phase shift to each element to steer the
transmission. The quality of a communication link can be improved by appending optional training
fields to DMG packets, and testing different AWVs at the transmitter or receiver. This process is
called beam refinement.
The STF and CE fields form the preamble. The preamble, header, and data fields of a DMG packet are
transmitted with the same AWV. For transmitter beam refinement training, up to 64 training (TRN)
subfields can be appended to the packet. Each TRN subfield is transmitted using a different AWV. This
allows the performance of up to 64 different AWVs to be measured, and the AWV for the preamble,
header, and data fields to be refined for subsequent transmissions. CE subfields are periodically
transmitted, one for every four TRN subfields, amongst the TRN subfields. Each CE subfield is
transmitted using the same AWV as the preamble. To allow the receiver to reconfigure AGC before
receiving the TRN subfields, the TRN subfields are preceded by AGC subfields. For each TRN
subfield, an AGC subfield is transmitted using the same AWV applied to the individual TRN subfield.
This allows a gain to be set at the receiver that is suitable for measuring all TRN subfields. The diagram
below shows the packet structure with four AGC and TRN subfields numbered and highlighted.
Therefore, four AWVs are tested as part of beam refinement. The same AWVs are applied to AGC and
TRN subfields with the same number.
This example simulates transmitter training by applying different AWVs to each of the training
subfields to steer the transmission in multiple directions. The strength of each training subfield is
evaluated at a receiver by examining the far-field plane wave to determine which transmission AWV is optimal. This simulation does not include a channel or path loss.
This example requires WLAN Toolbox and Phased Array System Toolbox.
Waveform Specification
The waveform is configured for a DMG packet transmission with the orthogonal frequency-division
multiplexing (OFDM) physical layer, a 100-byte physical layer service data unit (PSDU), and four
transmitter training subfields. The four training subfields allow four AWVs to be tested for beam
refinement. Using the function wlanDMGConfig (WLAN Toolbox), create a DMG configuration object.
A DMG configuration object specifies transmission parameters.
dmg = wlanDMGConfig;
dmg.MCS = 13; % OFDM
dmg.TrainingLength = 4; % Use 4 training subfields
dmg.PacketType = 'TRN-T'; % Transmitter training
dmg.PSDULength = 100; % Bytes
Beamforming Specification
The transmitter antenna pattern is configured as a 16-element uniform linear array with half-
wavelength spacing. Using the objects phased.ULA and phased.SteeringVector, create the
phased array and the AWVs. The location of the receiver for evaluating the transmission is specified
as an offset from the boresight of the transmitter.
A uniform linear phased array with 16 elements is created to steer the transmission.
The AWVs are created using a phased.SteeringVector object. Five steering angles are specified to create five AWVs: one for the preamble and data fields, and one for each of the four training subfields. The preamble and data fields are transmitted at boresight. The four training subfields are transmitted at angles around boresight.
% The directional angle for the preamble and data is 0 degrees azimuth, no
% elevation, therefore at boresight. [Azimuth; Elevation]
preambleDataAngle = [0; 0];
% Each of the four training fields uses a different set of weights to steer
% to a slightly different direction. [Azimuth; Elevation]
trnAngle = [[-10; 0] [-5; 0] [5; 0] [10; 0]];
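As a rough stand-in for what phased.SteeringVector returns here, the five AWVs can be computed directly for a 16-element, half-wavelength ULA, assuming azimuth-only steering (elevation is 0 for all five angles):

```python
import numpy as np

def steering_matrix(n, az_degs):
    """One column of ULA weights per azimuth angle (half-wavelength spacing,
    elevation 0 assumed). Hypothetical stand-in for phased.SteeringVector."""
    m = np.arange(n)[:, None]  # element index, as a column
    return np.exp(1j * np.pi * m * np.sin(np.radians(az_degs))[None, :])

# Preamble/data at boresight, then the four training-subfield angles.
angles = [0.0, -10.0, -5.0, 5.0, 10.0]
weights = steering_matrix(16, angles)
print(weights.shape)  # (16, 5): one column of weights per AWV
```

The boresight column is all ones (no inter-element phase shift), while the training columns apply progressive phase shifts that tilt the beam a few degrees off boresight.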
Using the plotArrayResponse helper function, the array response shows that the direction of the receiver is most closely aligned with the direction of training subfield TRN-SF3.
plotArrayResponse(TxArray,receiverAz,fc,weights);
Use the configured DMG object and a PSDU filled with random data as inputs to the waveform
generator, wlanWaveformGenerator (WLAN Toolbox). The waveform generator modulates PSDU
bits according to a format configuration and also performs OFDM windowing.
% Generate packet
tx = wlanWaveformGenerator(psdu,dmg);
A phased.Radiator object is created to apply the AWVs to the waveform, combine the radiated
signal from each element to form a plane wave, and determine the plane wave at the angle of
interest, receiverAz. Each portion of the DMG waveform tx is passed through the Radiator with a
specified set of AWVs, and the angle at which to evaluate the plane wave.
Radiator = phased.Radiator;
Radiator.Sensor = TxArray; % Use the uniform linear array
Radiator.WeightsInputPort = true; % Provide AWV as argument
Radiator.OperatingFrequency = fc; % Frequency in Hertz
Radiator.CombineRadiatedSignals = true; % Create plane wave
% Get the plane wave while applying the AWV to the preamble, header, and data
idx = (1:ind.DMGData(2));
planeWave(idx) = Radiator(tx(idx),steerAngle,preambleDataAWV);
% Get the plane wave while applying the AWV to the AGC and TRN subfields
for i = 1:dmg.TrainingLength
% AGC subfields
agcsfIdx = ind.DMGAGCSubfields(i,1):ind.DMGAGCSubfields(i,2);
planeWave(agcsfIdx) = Radiator(tx(agcsfIdx),steerAngle,trnAWV(:,i));
% TRN subfields
trnsfIdx = ind.DMGTRNSubfields(i,1):ind.DMGTRNSubfields(i,2);
planeWave(trnsfIdx) = Radiator(tx(trnsfIdx),steerAngle,trnAWV(:,i));
end
% Get the plane wave while applying the AWV to the TRN-CE
for i = 1:dmg.TrainingLength/4
trnceIdx = ind.DMGTRNCE(i,1):ind.DMGTRNCE(i,2);
planeWave(trnceIdx) = Radiator(tx(trnceIdx),steerAngle,preambleDataAWV);
end
The helper function plotDMGWaveform plots the magnitude of the beamformed plane wave. The fields beamformed in the direction of the receiver are visibly stronger than the other fields.
Conclusion
This example showed how to generate an IEEE 802.11ad DMG waveform and apply AWVs to different portions of the waveform. WLAN Toolbox was used to generate a standard-compliant waveform, and Phased Array System Toolbox was used to apply the AWVs and evaluate the magnitude of the resultant plane wave in the direction of a receiver.
Appendix
• plotArrayResponse.m
• plotDMGWaveform.m
Selected Bibliography
1 IEEE Std 802.11ad™-2012 IEEE Standard for Information technology - Telecommunications and
information exchange between systems - Local and metropolitan area networks - Specific
requirements - Part 11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY)
Specifications. Amendment 3: Enhancements for Very High Throughput in the 60 GHz Band.
Radar Target Classification Using Machine Learning and Deep Learning
Introduction
Target classification is an important function in modern radar systems. This example uses machine and deep learning to classify radar echoes from a cylinder and a cone. Although this example uses synthesized I/Q samples, the workflow is applicable to real radar returns.
RCS Synthesis
The next section shows how to create synthesized data to train the learning algorithms.
The following code simulates the RCS pattern of a cylinder with a radius of 1 meter and a height of
10 meters. The operating frequency of the radar is 850 MHz.
c = 3e8;
fc = 850e6;
[cylrcs,az,el] = rcscylinder(1,1,10,c,fc);
helperTargetRCSPatternPlot(az,el,cylrcs);
The pattern can then be applied to a backscatter radar target to simulate returns from different aspect angles.
cyltgt = phased.BackscatterRadarTarget('PropagationSpeed',c,...
'OperatingFrequency',fc,'AzimuthAngles',az,'ElevationAngles',el,'RCSPattern',cylrcs);
The following plot shows how to simulate 100 returns of the cylinder over time. It is assumed that the cylinder undergoes a motion that causes small vibrations around boresight; as a result, the aspect angle changes from one sample to the next.
rng default;
N = 100;
az = 2*randn(1,N);
el = 2*randn(1,N);
cylrtn = cyltgt(ones(1,N),[az;el]);
plot(mag2db(abs(cylrtn)));
xlabel('Time Index')
ylabel('Target Return (dB)');
title('Target Return for Cylinder');
The return of the cone can be generated similarly. To create the training set, the above process is repeated for 5 arbitrarily selected cylinder radii. In addition, for each radius, 10 motion profiles are simulated by varying the incident angle following 10 randomly generated sinusoid curves around boresight. There are 701 samples in each motion profile, so there are 701-by-50 samples. The process is repeated for the cone target, which results in a 701-by-100 matrix of training data with 50 cylinder and 50 cone profiles. In the test set, we use 25 cylinder and 25 cone profiles to create a 701-by-50 test set. Because of the long computation time, the training data is precomputed and loaded below.
load('RCSClassificationReturnsTraining');
load('RCSClassificationReturnsTest');
As an example, the next plot shows the return for one of the motion profiles from each shape. The
plots show how the values change over time for both the incident azimuth angles and the target
returns.
subplot(2,2,1)
plot(cylinderAspectAngle(1,:))
ylim([-90 90])
grid on
title('Cylinder Aspect Angle vs. Time'); xlabel('Time Index'); ylabel('Aspect Angle (degrees)');
subplot(2,2,3)
plot(RCSReturns.Cylinder_1); ylim([-50 50]);
grid on
title('Cylinder Return'); xlabel('Time Index'); ylabel('Target Return (dB)');
subplot(2,2,2)
plot(coneAspectAngle(1,:)); ylim([-90 90]); grid on;
title('Cone Aspect Angle vs. Time'); xlabel('Time Index'); ylabel('Aspect Angle (degrees)');
subplot(2,2,4);
plot(RCSReturns.Cone_1); ylim([-50 50]); grid on;
title('Cone Return'); xlabel('Time Index'); ylabel('Target Return (dB)');
Wavelet Scattering
In the wavelet scattering feature extractor, data is propagated through a series of wavelet
transforms, nonlinearities, and averaging to produce low-variance representations of time series.
Wavelet time scattering yields signal representations insensitive to shifts in the input signal without
sacrificing class discriminability.
The key parameters to specify in a wavelet time scattering decomposition are the scale of the time
invariant, the number of wavelet transforms, and the number of wavelets per octave in each of the
wavelet filter banks. In many applications, the cascade of two filter banks is sufficient to achieve good
performance. In this example, we construct a wavelet time scattering decomposition with the default
filter banks: 8 wavelets per octave in the first filter bank and 1 wavelet per octave in the second filter
bank. The invariance scale is set to 701 samples, the length of the data.
sf = waveletScattering('SignalLength',701,'InvarianceScale',701);
Next, we obtain the scattering transforms of both the training and test sets.
sTrain = sf.featureMatrix(RCSReturns{:,:},'transform','log');
sTest = sf.featureMatrix(RCSReturnsTest{:,:},'transform','log');
For this example, use the mean of the scattering coefficients taken along each path.
TrainFeatures = squeeze(mean(sTrain,2))';
TestFeatures = squeeze(mean(sTest,2))';
Model Training
Fit a support vector machine model with a quadratic kernel to the scattering features and obtain the
cross-validation accuracy.
validationAccuracy = 100
Target Classification
Using the trained SVM, classify the scattering features obtained from the test set.
predLabels = predict(classificationSVM,TestFeatures);
accuracy = sum(predLabels == TestLabels )/numel(TestLabels)*100
accuracy = 100
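As an illustrative aside on why a quadratic kernel helps: lifting features with degree-2 terms can make a class that is split across two clusters linearly separable. The NumPy-only sketch below uses synthetic 1-D features and a least-squares separator, not the scattering features or the toolbox SVM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes that a linear boundary cannot separate in 1-D:
# class 0 near the origin, class 1 split between +3 and -3 (hypothetical
# stand-ins for two target classes).
x0 = rng.normal(0.0, 0.5, (50, 1))
x1 = np.concatenate([rng.normal(3.0, 0.5, (25, 1)),
                     rng.normal(-3.0, 0.5, (25, 1))])
X = np.vstack([x0, x1])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Degree-2 feature lift (the effect of a quadratic kernel), then a
# least-squares linear separator in the lifted space.
Phi = np.hstack([np.ones_like(X), X, X**2])
w, *_ = np.linalg.lstsq(Phi, 2 * y - 1, rcond=None)
pred = (Phi @ w > 0).astype(float)
accuracy = np.mean(pred == y) * 100
print(accuracy)
```

The x**2 term lets a single linear threshold in the lifted space carve out "far from zero in either direction", which is exactly the kind of boundary a quadratic kernel provides implicitly.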
For more complex data sets, a deep learning workflow may improve performance.
SqueezeNet is a deep convolutional neural network (CNN) trained on images from 1,000 classes, as used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). In this example, we reuse the pre-trained SqueezeNet to classify radar returns belonging to one of two classes.
Load SqueezeNet.
snet = squeezenet;
snet.Layers
ans =
68x1 Layer array with layers:
You see that SqueezeNet consists of 68 layers. Like all DCNNs, SqueezeNet cascades convolutional
operators followed by nonlinearities and pooling, or averaging. SqueezeNet expects an image input of
size 227-by-227-by-3, which you can see with the following code.
snet.Layers(1)
ans =
ImageInputLayer with properties:
Name: 'data'
InputSize: [227 227 3]
Hyperparameters
DataAugmentation: 'none'
Normalization: 'zerocenter'
NormalizationDimension: 'auto'
Mean: [1×1×3 single]
Additionally, SqueezeNet is configured to recognize 1,000 different classes, which you can see with the following code.
snet.Layers(68)
ans =
ClassificationOutputLayer with properties:
Name: 'ClassificationLayer_predictions'
Classes: [1000×1 categorical]
OutputSize: 1000
Hyperparameters
LossFunction: 'crossentropyex'
In a subsequent section, we will modify select layers of SqueezeNet in order to apply it to our
classification problem.
SqueezeNet is designed to discriminate differences in images and classify the results. Therefore, in
order to use SqueezeNet to classify radar returns, we must transform the 1-D radar return time
series into an image. A common way to do this is to use a time-frequency representation (TFR). There
are a number of choices for a time-frequency representation of a signal and which one is most
appropriate depends on the signal characteristics. To determine which TFR may be appropriate for
this problem, randomly choose and plot a few radar returns from each class.
rng default;
idxCylinder = randperm(50,2);
idxCone = randperm(50,2)+50;
It is evident that the radar returns previously shown are characterized by slowly varying changes punctuated by large transient decreases, as described earlier. A wavelet transform is ideally suited to sparsely representing such signals. Wavelets shrink to localize transient phenomena with high temporal resolution and stretch to capture slowly varying signal structure. Obtain and plot the continuous wavelet transform of one of the cylinder returns.
cwt(RCSReturns{:,idxCylinder(1)},'VoicesPerOctave',8)
The CWT simultaneously captures both the slowly varying (low frequency) fluctuations and the
transient phenomena. Contrast the CWT of the cylinder return with one from a cone target.
cwt(RCSReturns{:,idxCone(2)},'VoicesPerOctave',8);
Because of the apparent importance of the transients in determining whether the target return
originates from a cylinder or cone target, we select the CWT as the ideal TFR to use. After obtaining
the CWT for each target return, we make images from the CWT of each radar return. These images
are resized to be compatible with SqueezeNet's input layer and we leverage SqueezeNet to classify
the resulting images.
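The image-generation step can be sketched as follows. This is an illustrative outline only (the variable names and the colormap choice are assumptions); the actual steps are performed by the helper function introduced in the next section.

```matlab
% Sketch: convert one radar return's CWT into a 227-by-227-by-3 image.
% Requires Wavelet Toolbox (cwt) and Image Processing Toolbox (imresize).
x = RCSReturns{:,idxCylinder(1)};         % one radar return
cfs = abs(cwt(x,'VoicesPerOctave',8));    % magnitude scalogram
cfs = rescale(cfs);                       % scale magnitudes into [0,1]
im = ind2rgb(round(cfs*255)+1,jet(256));  % map to an RGB image
im = imresize(im,[227 227]);              % match SqueezeNet's input size
```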
Image Preparation
The helper function, helpergenWaveletTFImg, obtains the CWT for each radar return, reshapes
the CWT to be compatible with SqueezeNet, and writes the CWT as a jpeg file. To run
helpergenWaveletTFImg, choose a parentDir where you have write permission. This example
uses tempdir, but you may use any folder on your machine where you have write permission. The
helper function creates Training and Test set folders under parentDir as well as creating
Cylinder and Cone subfolders under both Training and Test. These folders are populated with
jpeg images to be used as inputs to SqueezeNet.
parentDir = tempdir;
helpergenWaveletTFImg(parentDir,RCSReturns,RCSReturnsTest)
Now use imageDatastore to manage file access from the folders in order to train SqueezeNet.
Create datastores for both the training and test data.
In order to use SqueezeNet with this binary classification problem, we need to modify a couple of layers.
First, we change the last learnable layer in SqueezeNet (layer 64) to have the same number of 1-by-1
convolutions as our new number of classes, 2.
lgraphSqueeze = layerGraph(snet);
convLayer = lgraphSqueeze.Layers(64);
numClasses = numel(categories(trainingData.Labels));
newLearnableLayer = convolution2dLayer(1,numClasses, ...
'Name','binaryconv', ...
'WeightLearnRateFactor',10, ...
'BiasLearnRateFactor',10);
lgraphSqueeze = replaceLayer(lgraphSqueeze,convLayer.Name,newLearnableLayer);
classLayer = lgraphSqueeze.Layers(end);
newClassLayer = classificationLayer('Name','binary');
lgraphSqueeze = replaceLayer(lgraphSqueeze,classLayer.Name,newClassLayer);
Finally, set the options for retraining SqueezeNet. Set the initial learning rate to 1e-4, the maximum
number of epochs to 15, and the mini-batch size to 10. Use stochastic gradient descent with
momentum.
ilr = 1e-4;
mxEpochs = 15;
mbSize = 10;
opts = trainingOptions('sgdm', 'InitialLearnRate', ilr, ...
'MaxEpochs',mxEpochs , 'MiniBatchSize',mbSize, ...
'Plots', 'training-progress','ExecutionEnvironment','cpu');
Train the network. If you have a compatible GPU, trainNetwork automatically utilizes the GPU and
training should complete in less than one minute. If you do not have a compatible GPU,
trainNetwork utilizes the CPU and training should take around five minutes. Training times vary
based on a number of factors. In this case, the training takes place on the CPU because the
ExecutionEnvironment option is set to 'cpu'.
CWTnet = trainNetwork(trainingData,lgraphSqueeze,opts);
|========================================================================================|
| Epoch | Iteration | Time Elapsed | Mini-batch | Mini-batch | Base Learning |
| | | (hh:mm:ss) | Accuracy | Loss | Rate |
|========================================================================================|
Use the trained network to predict target returns in the held-out test set.
predictedLabels = classify(CWTnet,testData,'ExecutionEnvironment','cpu');
accuracy = sum(predictedLabels == testData.Labels)/50*100
accuracy = 100
Plot the confusion chart along with the precision and recall. In this case, 100% of the test samples are
classified correctly.
LSTM
In the final section of this example, an LSTM workflow is described. First the LSTM layers are
defined:
LSTMlayers = [ ...
sequenceInputLayer(1)
bilstmLayer(100,'OutputMode','last')
fullyConnectedLayer(2)
softmaxLayer
classificationLayer
];
options = trainingOptions('adam', ...
'MaxEpochs',30, ...
'MiniBatchSize', 150, ...
'InitialLearnRate', 0.01, ...
'GradientThreshold', 1, ...
'plots','training-progress', ...
'Verbose',false,'ExecutionEnvironment','cpu');
trainLabels = repelem(categorical({'cylinder','cone'}),[50 50]);
trainLabels = trainLabels(:);
trainData = num2cell(table2array(RCSReturns)',2);
testData = num2cell(table2array(RCSReturnsTest)',2);
testLabels = repelem(categorical({'cylinder','cone'}),[25 25]);
testLabels = testLabels(:);
RNNnet = trainNetwork(trainData,trainLabels,LSTMlayers,options);
predictedLabels = classify(RNNnet,testData,'ExecutionEnvironment','cpu');
accuracy = sum(predictedLabels == testLabels)/50*100
accuracy = 100
Conclusion
This example presents a workflow for performing radar target classification using machine and deep
learning techniques. Although this example used synthesized data for training and testing, it can be
easily extended to accommodate real radar returns. Because of the signal characteristics, wavelet
techniques were used for both the machine learning and CNN approaches.
With this dataset, we were also able to achieve similar accuracy by feeding the raw data directly into
an LSTM. In more complicated datasets, the raw data may be too inherently variable for the model to
learn robust features, and you may have to resort to feature extraction prior to using an LSTM.
This example demonstrates how to process and visualize FMCW echoes acquired via the Demorad
Radar Sensor Platform with the Phased Array System Toolbox™. By default, I/Q samples and
operating parameters are read from a binary file that is provided with this example. Optionally, the
same procedure can be used to transmit, receive, and process FMCW reflections from live I/Q
samples with your own Demorad by following the instructions later in the example. Acquiring and
processing results in one environment decreases development time, and facilitates the rapid
prototyping of radar signal processing systems.
The Analog Devices® Demorad Radar Sensor Platform has an operating frequency of 24 GHz, and a
maximum bandwidth of 250 MHz. The array on the platform is comprised of 2 transmit elements, and
4 receive elements. The receive elements are spaced every half-wavelength of the operating
frequency, arranged as a linear array. The transmit elements are also arranged as a linear array, and
are spaced three half-wavelengths apart. The transmit and receive elements can be seen in the image
below.
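The array geometry described above can be modeled directly with Phased Array System Toolbox objects. The following sketch assumes the 24 GHz operating frequency stated earlier:

```matlab
fc = 24e9;                                  % operating frequency (Hz)
lambda = physconst('LightSpeed')/fc;        % operating wavelength (m)
rxArray = phased.ULA('NumElements',4,'ElementSpacing',lambda/2);
txArray = phased.ULA('NumElements',2,'ElementSpacing',3*lambda/2);
```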
Processing Radar Reflections Acquired with the Demorad Radar Sensor Platform
1 Download the Demorad drivers and MATLAB files from the USB-drive provided by Analog
Devices® with the Demorad
2 Save these to a permanent location on your PC
3 Add the folder containing the MATLAB files to the MATLAB path permanently
4 Power up the Demorad and connect to the PC via the Mini-USB port
5 Navigate to the Device Manager and look for the "BF707 Bulk Device"
6 Right-click on "BF707 Bulk Device" and select "Update driver"
7 Select the option "Browse my computer for driver software"
8 Browse to, and select the "drivers" folder from the USB-drive
9 Install the Phased Array System Toolbox™ Add-On for Demorad from the MATLAB Add-On
Manager
In this section, we set up the source of the I/Q samples as the default option of the binary file reader.
Also contained in this file are the parameters that define the transmitted FMCW chirp, which were
written at the time the file was created. If you would like to run this example using live echoes from
the Demorad, follow the steps in Installing the Drivers and Add-On and set the "usingHW" flag
below to "true". The example will then communicate with the Demorad to transmit an FMCW
waveform with the radar operating parameters defined below, and send the reflections to MATLAB.
The "setup" method below is defined by the object representing the Demorad. "setup" serves both to
power on, and send parameters to the Demorad.
if ~usingHW
% Read I/Q samples from a recorded binary file
radarSource = RadarBasebandFileReader('./DemoradExampleIQData.bb',256);
else
% Instantiate the Demorad Platform interface
radarSource = DemoradBoard;
end
Radar Capabilities
Based on the operating parameters defined above, the characteristics of the radar system can be
defined for processing and visualizing the reflections. The equations used for calculating the
capabilities of the radar system with these operating parameters can be seen below:
Range Resolution
The range resolution (in meters) for a radar with a chirp waveform is ΔR = c0/(2B), where c0 is the speed of light and B is the chirp bandwidth:
c0 = physconst('LightSpeed');
wfMetadata = radarSource.Metadata; % Struct of waveform metadata
bandwidth = wfMetadata.StopFrequency ...
- wfMetadata.StartFrequency; % Chirp bandwidth
rangeRes = c0/(2*bandwidth) % Range resolution (m)
rangeRes =
0.5996
Maximum Range
The Demorad platform transmits an FMCW pulse as a chirp, or sawtooth waveform. As such, the
theoretical maximum range of the radar (in meters) can be calculated as Rmax = c0*fs/(2*β), where
fs is the beat-signal sample rate and β is the chirp rate. The effective range in practice may vary due
to environmental factors such as SNR, interference, or size of the testing facility.
maxRange =
158.2904
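The elided computation can be sketched as shown below. The sample rate and sweep time used here are hypothetical placeholders, not the values stored in the recorded file; substitute the values from the waveform metadata to reproduce the result above.

```matlab
% Sketch: theoretical FMCW maximum range from the chirp rate.
c0 = physconst('LightSpeed');
fs = 1e6;                          % beat-signal sample rate (Hz), placeholder
tSweep = 250e-6;                   % chirp sweep time (s), placeholder
bandwidth = 250e6;                 % chirp bandwidth (Hz)
sweepSlope = bandwidth/tSweep;     % chirp rate (Hz/s)
maxRange = c0*fs/(2*sweepSlope)    % theoretical maximum range (m)
```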
Beamwidth
The effective beamwidth of the radar board can be approximated by θ ≈ λ/(N*d) radians, where N is
the number of receive elements, d is the element spacing, and λ is the wavelength at the operating
frequency. With four receive elements spaced a half-wavelength apart, this gives 0.5 radians:
beamwidth =
28.6479
With a transmit bandwidth of 250 MHz, and a 4-element receive array, the range and angular
resolution are sufficient to resolve multiple closely spaced objects. The I/Q samples recorded in the
binary file are returned from the Demorad platform without any additional digital processing. FMCW
reflections received by the Demorad are down-converted to baseband in hardware, decimated, and
transferred to MATLAB.
The algorithms used in the signal processing loop are initialized in this section. After receiving the
I/Q samples, a 3-pulse canceller removes detections from stationary objects. The output of the 3-pulse
canceller is then beamformed, and used to calculate the range response. A CFAR detector is used
following the range response algorithm to detect any moving targets.
3-Pulse Canceller
The 3-pulse canceller used following the acquisition of the I/Q samples removes any stationary clutter
in the environment. The impulse response of a 3-pulse canceller is h = [1, -2, 1], a second difference
across consecutive pulses.
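The cancellation can be verified with a short numeric sketch: a return that is identical from pulse to pulse is removed exactly, because the filter computes a second difference across slow time.

```matlab
% Sketch: a 3-pulse canceller nulls pulse-to-pulse-constant clutter.
h = [1 -2 1];                          % 3-pulse canceller impulse response
clutter = repmat(randn(64,1),1,3);     % identical return over 3 pulses
y = clutter(:,1)*h(1) + clutter(:,2)*h(2) + clutter(:,3)*h(3);
max(abs(y))                            % exactly zero: clutter cancelled
```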
Range Response
The algorithms for calculating the range response are initialized below. For beamforming, the sensor
array is modeled using the number of antenna elements and the spacing of the receive elements. The
sensor array model and the operating frequency of the Demorad are required for the beamforming
algorithm. Because the Demorad transmits an FMCW waveform, the range response is calculated
using an FFT.
antennaArray = phased.ULA('NumElements',radarSource.NumChannels, ...
'ElementSpacing',rxElementSpacing);
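The FFT-based range response step can be sketched as follows. The FFT length and chirp parameters here are placeholders, and the dechirped pulse xb stands in for one column of beamformed samples.

```matlab
% Sketch: FFT-based range response for one dechirped FMCW pulse.
NFFT = 1024;                                 % FFT length (assumed)
c0 = physconst('LightSpeed');
fs = 1e6; bandwidth = 250e6; tSweep = 250e-6;  % placeholder parameters
sweepSlope = bandwidth/tSweep;               % chirp rate (Hz/s)
xb = randn(256,1);                           % stand-in dechirped pulse
rangepow = abs(fft(xb,NFFT)).^2;             % power per beat-frequency bin
rng_grid = (0:NFFT-1).'*(fs/NFFT)*c0/(2*sweepSlope);  % beat freq -> range (m)
```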
CFAR Detector
A constant false alarm rate (CFAR) detector is then used to detect any moving targets.
cfar = phased.CFARDetector('NumGuardCells',6,'NumTrainingCells',10);
Scopes
Set up the scopes to view the processed FMCW reflections. We set the viewing window of the range-
time intensity scope to 15 seconds.
timespan = 15;
Next, the samples are received from the binary file reader, processed, and shown in the scopes. This
loop will continue until all samples are read from the binary file. If using the Demorad, the loop will
continue for 30 seconds, defined by the "AcquisitionTime" property of the object that represents the
board. Only ranges from 0 to 15 meters are shown, since we have a priori knowledge that the target
recorded in the binary file is within this range.
while ~isDone(radarSource)
% Retrieve samples from the I/Q sample source
x = radarSource();
% Use the CFAR detector to detect any moving targets from 0 - 15 meters
maxViewRange = 15;
rng_grid = linspace(0,maxRange,NFFT).';
[~,maxViewIdx] = min(abs(rng_grid - maxViewRange));
detIdx = false(NFFT,1);
detIdx(1:maxViewIdx) = cfar(rangepow,1:maxViewIdx);
% Remove non-detections and set a noise floor at 1/10 of the peak value
rangepow = rangepow./max(rangepow(:)); % Normalize detections to 1 W
noiseFloor = 1e-1;
rangepow(~detIdx & (rangepow < noiseFloor)) = noiseFloor;
end
The scope shows a single target moving away from the Demorad Radar Sensor Platform until it is
about 10 meters away, then changing direction to move back towards the platform. The range-time
intensity scope shows the detection ranges.
Summary
This example demonstrates how to interface with the Analog Devices® Demorad Radar Sensor
Platform to acquire, process, and visualize radar reflections from live data. This capability enables the
rapid prototyping and testing of radar signal processing systems in a single environment, drastically
decreasing development time.
Radar Waveform Classification Using Deep Learning
The first part of this example simulates a radar classification system that synthesizes three pulsed
radar waveforms and classifies them. The radar waveforms are:
• Rectangular
• Linear frequency modulation (LFM)
• Barker Code
A radar classification system does not exist in isolation. Rather, it resides in an increasingly occupied
frequency spectrum, competing with other transmitted sources such as communications systems,
radio, and navigation systems. The second part of this example extends the network to include
additional communication modulation types. In addition to the first set of radar waveforms, the
extended network synthesizes and identifies these communication waveforms:
• GFSK
• CPFSK
• B-FM
• DSB-AM
• SSB-AM
This example primarily focuses on radar waveforms, with the classification being extended to include
a small set of amplitude and frequency modulation communications signals. See “Modulation
Classification with Deep Learning” (Communications Toolbox) for a full workflow of modulation
classification with a wide array of communication signals.
Generate 3000 signals with a sample rate of 100 MHz for each modulation type. Use
phased.RectangularWaveform for rectangular pulses, phased.LinearFMWaveform for LFM, and
phased.PhaseCodedWaveform for phase coded pulses with Barker code.
Each signal has unique parameters and is augmented with various impairments to make it more
realistic. For each waveform, the pulse width and repetition frequency will be randomly generated.
For LFM waveforms, the sweep bandwidth and direction are randomly generated. For Barker
waveforms, the chip width and number are generated randomly. All signals are impaired with white
Gaussian noise using the awgn function with a random signal-to-noise ratio in the range of [–6, 30]
dB. A frequency offset with a random carrier frequency in the range of [Fs/6, Fs/5] is applied to
each signal using the comm.PhaseFrequencyOffset object. Lastly, each signal is passed through a
multipath Rician fading channel, comm.RicianChannel.
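The impairment chain for one signal can be sketched with the objects named above. The parameter draws below mirror the ranges stated in the text; the channel settings are otherwise left at illustrative defaults.

```matlab
% Sketch: impair one waveform with noise, frequency offset, and fading.
fs = 100e6;
x = complex(randn(256,1),randn(256,1));    % stand-in for one waveform
snr = -6 + 36*rand;                        % random SNR in [-6, 30] dB
x = awgn(x,snr,'measured');
pfo = comm.PhaseFrequencyOffset('SampleRate',fs, ...
    'FrequencyOffset',fs/6 + (fs/5 - fs/6)*rand);  % random carrier offset
x = pfo(x);
chan = comm.RicianChannel('SampleRate',fs);        % multipath Rician fading
x = chan(x);
```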
rng default
[wav, modType] = helperGenerateRadarWaveforms();
Plot the Fourier transform of a few of the LFM waveforms to show the variation in the generated
set.
idLFM = find(modType == "LFM",3);
nfft = 2^nextpow2(numel(wav{idLFM(1)}));
f = (0:nfft/2-1)*(100e6/nfft);
figure
subplot(1,3,1)
Z = fft(wav{idLFM(1)},nfft);
plot(f/1e6,abs(Z(1:nfft/2)))
xlabel('Frequency (MHz)');ylabel('Amplitude');axis square
subplot(1,3,2)
Z = fft(wav{idLFM(2)},nfft);
plot(f/1e6,abs(Z(1:nfft/2)))
xlabel('Frequency (MHz)');ylabel('Amplitude');axis square
subplot(1,3,3)
Z = fft(wav{idLFM(3)},nfft);
plot(f/1e6,abs(Z(1:nfft/2)))
xlabel('Frequency (MHz)');ylabel('Amplitude');axis square
figure
subplot(1,3,1)
wvd(wav{find(modType == "Rect",1)},100e6,'smoothedPseudo')
axis square; colorbar off; title('Rect')
subplot(1,3,2)
wvd(wav{find(modType == "LFM",1)},100e6,'smoothedPseudo')
axis square; colorbar off; title('LFM')
subplot(1,3,3)
wvd(wav{find(modType == "Barker",1)},100e6,'smoothedPseudo')
axis square; colorbar off; title('Barker')
To store the smoothed-pseudo Wigner-Ville distribution of the signals, first create the directory
TFDDatabase inside your temporary directory tempdir. Then create subdirectories in
TFDDatabase for each modulation type. For each signal, compute the smoothed-pseudo Wigner-Ville
distribution, and downsample the result to a 227-by-227 matrix. Save the matrix as a .png image file
in the subdirectory corresponding to the modulation type of the signal. The helper function
helperGenerateTFDfiles performs all these steps. This process will take several minutes due to
the large database size and the complexity of the wvd algorithm. You can replace tempdir with
another directory where you have write permission.
parentDir = tempdir;
dataDir = 'TFDDatabase';
helperGenerateTFDfiles(parentDir,dataDir,wav,modType,100e6)
Create an image datastore object for the created folder to manage the image files used for training
the deep learning network. This step avoids having to load all images into memory. Specify the label
source to be folder names. This assigns each signal's modulation type according to the folder name.
folders = fullfile(parentDir,dataDir,{'Rect','LFM','Barker'});
imds = imageDatastore(folders,...
'FileExtensions','.png','LabelSource','foldernames','ReadFcn',@readTFDForSqueezeNet);
The network is trained with 80% of the data and tested with 10%. The remaining 10% is used for
validation. Use the splitEachLabel function to divide the imageDatastore into training,
validation, and testing sets.
[imdsTrain,imdsTest,imdsValidation] = splitEachLabel(imds,0.8,0.1);
Before the deep learning network can be trained, define the network architecture. This example uses
transfer learning with SqueezeNet, a deep CNN created for image classification. Transfer learning
is the process of retraining an existing neural network to classify new targets. This network accepts
image input of size 227-by-227-by-3. Prior to input to the network, the custom read function
readTFDForSqueezeNet will transform the two-dimensional time-frequency distribution to an RGB
image of the correct size. SqueezeNet performs classification of 1000 categories in its default
configuration.
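The read function itself is not shown in this excerpt. A plausible sketch, assuming each .png file stores the time-frequency distribution as an image, is:

```matlab
function im = readTFDForSqueezeNet(filename)
% Sketch: read a stored TFD image and format it for SqueezeNet.
im = imread(filename);
if size(im,3) == 1
    im = repmat(im,[1 1 3]);    % replicate grayscale into three channels
end
im = imresize(im,[227 227]);    % match SqueezeNet's 227-by-227-by-3 input
end
```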
Load SqueezeNet.
net = squeezenet;
Extract the layer graph from the network. Confirm that SqueezeNet is configured for images of size
227-by-227-by-3.
lgraphSqz = layerGraph(net);
lgraphSqz.Layers(1)
ans =
ImageInputLayer with properties:
Name: 'data'
InputSize: [227 227 3]
Hyperparameters
DataAugmentation: 'none'
Normalization: 'zerocenter'
NormalizationDimension: 'auto'
Mean: [1×1×3 single]
To tune SqueezeNet for our needs, three of the last six layers need to be modified to classify the three
radar modulation types of interest. Inspect the last six network layers.
lgraphSqz.Layers(end-5:end)
ans =
6x1 Layer array with layers:
Replace the 'drop9' layer, the last dropout layer in the network, with a dropout layer of probability
0.6.
tmpLayer = lgraphSqz.Layers(end-5);
newDropoutLayer = dropoutLayer(0.6,'Name','new_dropout');
lgraphSqz = replaceLayer(lgraphSqz,tmpLayer.Name,newDropoutLayer);
The last learnable layer in SqueezeNet is a 1-by-1 convolutional layer, 'conv10'. Replace the layer
with a new convolutional layer with the number of filters equal to the number of modulation types.
Also increase the learning rate factors of the new layer.
numClasses = 3;
tmpLayer = lgraphSqz.Layers(end-4);
newLearnableLayer = convolution2dLayer(1,numClasses, ...
'Name','new_conv', ...
'WeightLearnRateFactor',20, ...
'BiasLearnRateFactor',20);
lgraphSqz = replaceLayer(lgraphSqz,tmpLayer.Name,newLearnableLayer);
Replace the classification layer with a new one without class labels.
tmpLayer = lgraphSqz.Layers(end);
newClassLayer = classificationLayer('Name','new_classoutput');
lgraphSqz = replaceLayer(lgraphSqz,tmpLayer.Name,newClassLayer);
Inspect the last six layers of the network. Confirm the dropout, convolutional, and output layers have
been changed.
lgraphSqz.Layers(end-5:end)
ans =
6x1 Layer array with layers:
Choose options for the training process that ensure good network performance. Refer to the
trainingOptions documentation for a description of each option.
options = trainingOptions('sgdm', ...
'InitialLearnRate',1e-3, ...
'Shuffle','every-epoch', ...
'Verbose',false, ...
'Plots','training-progress',...
'ValidationData',imdsValidation);
Use the trainNetwork command to train the created CNN. Because of the dataset's large size, the
process may take several minutes. If your machine has a GPU and Parallel Computing Toolbox™, then
MATLAB automatically uses the GPU for training. Otherwise, it uses the CPU. The training accuracy
plots in the figure show the progress of the network's learning across all iterations. On the three
radar modulation types, the network classifies almost 100% of the training signals correctly.
trainedNet = trainNetwork(imdsTrain,lgraphSqz,options);
Use the trained network to classify the testing data using the classify command. A confusion
matrix is one method to visualize classification performance. Use the confusionchart command to
calculate and visualize the classification accuracy. For the three modulation types input to the
network, almost all of the phase coded, LFM, and rectangular waveforms are correctly identified by
the network.
predicted = classify(trainedNet,imdsTest);
figure
confusionchart(predicted,imdsTest.Labels,'Normalization','column-normalized')
A radar classification system must compete for frequency spectrum with many other transmitted
sources. Let's see how the created network extends to incorporate other simulated modulation types.
Another MathWorks example, “Modulation Classification with Deep Learning” (Communications
Toolbox), performs modulation classification of several different modulation types using
Communications Toolbox™. The helper function helperGenerateCommsWaveforms generates and
augments a subset of the modulation types used in that example. Since the WVD loses phase
information, a subset of only the amplitude and frequency modulation types are used.
See the example link for an in-depth description of the workflow necessary for digital and analog
modulation classification and the techniques used to create these waveforms. For each modulation
type, use wvd to extract time-frequency features and visualize.
[wav, modType] = helperGenerateCommsWaveforms();
figure
subplot(2,3,1)
wvd(wav{find(modType == "GFSK",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('GFSK')
subplot(2,3,2)
wvd(wav{find(modType == "CPFSK",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('CPFSK')
subplot(2,3,3)
wvd(wav{find(modType == "B-FM",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('B-FM')
subplot(2,3,4)
wvd(wav{find(modType == "SSB-AM",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('SSB-AM')
subplot(2,3,5)
wvd(wav{find(modType == "DSB-AM",1)},200e3,'smoothedPseudo')
axis square; colorbar off; title('DSB-AM')
Use the helper function helperGenerateTFDfiles again to compute the smoothed pseudo WVD for
each input signal. Create an image datastore object to manage the image files of all modulation types.
helperGenerateTFDfiles(parentDir,dataDir,wav,modType,200e3)
folders = fullfile(parentDir,dataDir,{'Rect','LFM','Barker','GFSK','CPFSK','B-FM','SSB-AM','DSB-AM'});
imds = imageDatastore(folders,...
'FileExtensions','.png','LabelSource','foldernames','ReadFcn',@readTFDForSqueezeNet);
Again, divide the data into a training set, a validation set, and a testing set using the
splitEachLabel function.
rng default
[imdsTrain,imdsTest,imdsValidation] = splitEachLabel(imds,0.8,0.1);
Previously, the network architecture was set up to classify three modulation types. This must be
updated to allow classification of all eight modulation types of both radar and communication signals.
This is a similar process as before, except that the final 1-by-1 convolutional layer now requires an
output size of eight.
numClasses = 8;
net = squeezenet;
lgraphSqz = layerGraph(net);
tmpLayer = lgraphSqz.Layers(end-5);
newDropoutLayer = dropoutLayer(0.6,'Name','new_dropout');
lgraphSqz = replaceLayer(lgraphSqz,tmpLayer.Name,newDropoutLayer);
tmpLayer = lgraphSqz.Layers(end-4);
newLearnableLayer = convolution2dLayer(1,numClasses, ...
'Name','new_conv', ...
'WeightLearnRateFactor',20, ...
'BiasLearnRateFactor',20);
lgraphSqz = replaceLayer(lgraphSqz,tmpLayer.Name,newLearnableLayer);
tmpLayer = lgraphSqz.Layers(end);
newClassLayer = classificationLayer('Name','new_classoutput');
lgraphSqz = replaceLayer(lgraphSqz,tmpLayer.Name,newClassLayer);
Use the trainNetwork command to train the created CNN. For all modulation types, the training
converges with an accuracy of about 95% correct classification.
trainedNet = trainNetwork(imdsTrain,lgraphSqz,options);
Use the classify command to classify the signals held aside for testing. Again, visualize the
performance using confusionchart.
predicted = classify(trainedNet,imdsTest);
figure;
confusionchart(predicted,imdsTest.Labels,'Normalization','column-normalized')
For the eight modulation types input to the network, over 99% of B-FM, CPFSK, GFSK, Barker, and
LFM modulation types were correctly classified. On average, over 85% of AM signals were correctly
identified. From the confusion matrix, a high percentage of SSB-AM signals were misclassified as
DSB-AM, and DSB-AM signals as SSB-AM.
Let us investigate a few of these misclassifications to gain insight into the network's learning process.
Use the readimage function on the image datastore to extract from the test dataset a single image
from each class. The displayed WVDs look visually very similar. Since DSB-AM and SSB-AM signals
have a very similar signature, this explains in part the network's difficulty in correctly classifying
these two types. Further signal processing could make the differences between these two modulation
types clearer to the network and result in improved classification.
DSB_DSB = readimage(imdsTest,find((imdsTest.Labels == 'DSB-AM') & (predicted == 'DSB-AM'),1));
DSB_SSB = readimage(imdsTest,find((imdsTest.Labels == 'DSB-AM') & (predicted == 'SSB-AM'),1));
SSB_DSB = readimage(imdsTest,find((imdsTest.Labels == 'SSB-AM') & (predicted == 'DSB-AM'),1));
SSB_SSB = readimage(imdsTest,find((imdsTest.Labels == 'SSB-AM') & (predicted == 'SSB-AM'),1));
figure
subplot(2,2,1)
imagesc(DSB_DSB(:,:,1))
axis square; title({'Actual Class: DSB-AM','Predicted Class: DSB-AM'})
subplot(2,2,2)
imagesc(DSB_SSB(:,:,1))
axis square; title({'Actual Class: DSB-AM','Predicted Class: SSB-AM'})
subplot(2,2,3)
imagesc(SSB_DSB(:,:,1))
axis square; title({'Actual Class: SSB-AM','Predicted Class: DSB-AM'})
subplot(2,2,4)
imagesc(SSB_SSB(:,:,1))
axis square; title({'Actual Class: SSB-AM','Predicted Class: SSB-AM'})
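As one example of such further processing (a sketch, not part of this example's workflow): for a complex baseband signal, comparing the energy in the positive- and negative-frequency halves of the spectrum can separate a symmetric DSB-AM spectrum from a one-sided SSB-AM spectrum.

```matlab
% Sketch: sideband-energy ratio as a DSB-AM vs. SSB-AM discriminator.
% A ratio near 1 suggests DSB-AM; a ratio far from 1 suggests SSB-AM.
function r = sidebandRatio(x)
X = fftshift(fft(x(:)));
n = numel(X);
lower = sum(abs(X(1:floor(n/2))).^2);      % negative-frequency energy
upper = sum(abs(X(floor(n/2)+1:end)).^2);  % positive-frequency energy
r = upper/lower;
end
```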
Summary
This example showed how radar and communications modulation types can be classified by using
time-frequency techniques and a deep learning network. Further efforts for additional improvement
could be investigated by utilizing time-frequency analysis available in Wavelet Toolbox™ and
additional Fourier analysis available in Signal Processing Toolbox™.
References
[1] Brynolfsson, Johan, and Maria Sandsten. "Classification of one-dimensional non-stationary signals
using the Wigner-Ville distribution in convolutional neural networks." 25th European Signal
Processing Conference (EUSIPCO). IEEE, 2017.
[2] Liu, Xiaoyu, Diyu Yang, and Aly El Gamal. "Deep neural network architectures for modulation
classification." 51st Asilomar Conference on Signals, Systems and Computers. 2017.
[3] Wang, Chao, Jian Wang, and Xudong Zhang. "Automatic radar waveform recognition based on
time-frequency analysis and convolutional neural network." IEEE International Conference on
Acoustics, Speech and Signal Processing (ICASSP). 2017.
Hybrid MIMO Beamforming with QSHB and HBPS Algorithms
Introduction
5G and other modern wireless communication systems extensively use MIMO beamforming
technology for signal-to-noise ratio (SNR) enhancement and spatial multiplexing to improve data
throughput in scatterer-rich environments. In a scatterer-rich environment, line-of-sight (LOS) paths
between the transmit and receive antennas may not exist. To achieve high throughput, MIMO
beamforming applies precoding on the transmitter side and combining on the receiver side to
increase SNR and separate the spatial channels. A fully digital beamforming structure requires each
antenna to have a dedicated RF-to-baseband chain, which makes the overall hardware expensive and
the power consumption high. As a solution, hybrid MIMO beamforming was proposed [1], in which
fewer RF-to-baseband chains are employed and part of the precoding and combining is performed in
the RF domain. With careful selection of the precoding and combining weights, hybrid beamforming
can achieve performance comparable to that of fully digital beamforming.
In this example, we introduce a Simulink model with hybrid MIMO beamforming. This model shows
two hybrid beamforming algorithms: Quantized Sparse Hybrid Beamforming (QSHB) [2] and Hybrid
Beamforming with Peak Search (HBPS).
In the figure, Ns is the number of signal streams; Nt is the number of transmit antennas; NtRF is the
number of transmit RF chains; Nr is the number of receive antennas; and NrRF is the number of
receive RF chains. In this example, there are two signal streams, 64 transmit antennas, 4 transmit RF
chains, 16 receive antennas, and 4 receive RF chains.
The scattering channel is denoted by H. The hybrid beamforming weights are represented by the
analog precoder Frf, the digital precoder Fbb, the analog combiner Wrf, and the digital combiner Wbb. For a
more detailed introduction to hybrid beamforming, please refer to the MATLAB “Introduction to
Hybrid Beamforming” on page 17-423 example.
The Simulink model consists of four main components: MIMO Transmitter, MIMO Channel, MIMO
Receiver, and Weights Calculation.
The MIMO transmitter generates the signal stream and then applies the precoding. The modulated
signal is propagated through a scattering channel defined in the MIMO channel and then decoded
and demodulated at the receiver side.
The MIMO scattering channel is represented by a channel matrix. In addition, this example uses an
enabled subsystem to periodically change this matrix to simulate the fact that a MIMO channel may
vary over time.
In a hybrid beamforming system, both the precoding and the corresponding combining process are
done partly at baseband and partly in the RF band. In general, the beamforming achieved in the RF
band only involves phase shifts. Therefore, a critical part in such a system is to determine how to
distribute the weights between the baseband and the RF band based on the channel. This is done in
the Weight Calculation block where the precoding weights, Fbb and FrfAng, and combining weights,
Wbb and WrfAng, are computed based on the channel matrix, H. In this example, we assume the
channel matrix is known and provide both QSHB and HBPS algorithms.
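As a dimensionality sketch of the precoder/combiner chain for the sizes used in this example, the following Python snippet (with random placeholder weights, not the weights the QSHB or HBPS algorithms actually compute) shows how the 16x64 scattering channel is reduced to a 2x2 effective channel for the two signal streams:

```python
import numpy as np

rng = np.random.default_rng(0)

Ns, Nt, NtRF, Nr, NrRF = 2, 64, 4, 16, 4  # sizes used in this example

# Random placeholder matrices; only the dimensions matter here
H   = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))  # scattering channel
Frf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nt, NtRF)))  # analog precoder: phase shifts only
Fbb = rng.standard_normal((NtRF, Ns)) + 0j                # digital precoder
Wrf = np.exp(1j * rng.uniform(0, 2 * np.pi, (Nr, NrRF)))  # analog combiner: phase shifts only
Wbb = rng.standard_normal((NrRF, Ns)) + 0j                # digital combiner

# Effective end-to-end channel seen by the Ns signal streams
Heff = Wbb.conj().T @ Wrf.conj().T @ H @ Frf @ Fbb
```

Note that the analog stages carry unit-modulus (phase-only) entries, matching the constraint that RF-band beamforming only involves phase shifts.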
The literature [2, 3] shows that, given the channel matrix H of a MIMO scattering channel, the hybrid
beamforming weights can be computed via an iterative algorithm [2]. Using an orthogonal matching
pursuit algorithm, the resulting analog precoding/combining weights are just steering vectors
corresponding to the dominant modes of the channel matrix. For the detailed description of the
algorithm, please refer to the “Introduction to Hybrid Beamforming” on page 17-423 example.
HBPS is a simplified version of QSHB. Instead of searching for the dominant modes of the channel
matrix iteratively, HBPS projects all the digital weights onto a grid of directions and identifies the
NtRF and NrRF strongest peaks to form the corresponding analog beamforming weights. This approach
works especially well for large arrays, such as those used in massive MIMO systems, since for large
arrays the directions are more likely to be orthogonal.
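The projection-and-peak-search idea can be sketched for a single RF chain as follows (a minimal illustration with a hypothetical 16-element half-wavelength ULA; the real algorithm keeps the NtRF and NrRF strongest peaks rather than just one):

```python
import numpy as np

N = 16            # receive ULA elements, half-wavelength spacing (hypothetical)
true_deg = 20.0   # direction of the dominant channel mode (assumed known here)

def steer(deg, n=N):
    # Steering vector of an n-element half-wavelength ULA
    return np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(deg)))

w_digital = steer(true_deg)          # stand-in for a fully digital weight vector

grid = np.arange(-90.0, 90.5, 0.5)   # grid of candidate directions
proj = np.abs([steer(g).conj() @ w_digital for g in grid])  # project onto each direction

peak_deg = grid[np.argmax(proj)]     # HBPS keeps the strongest direction(s)
```

The selected direction then defines the phase-only analog weights, while the residual fitting is left to the digital stage.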
Because the channel matrix can change over time, the weights computation also needs to be
performed periodically to accommodate the channel variation.
QSHB
The following figures show the recovered 16-QAM symbol streams at the receiver using the QSHB
algorithm. Compared to the source constellation, the recovered symbols are properly located in both
streams. This means that with the hybrid beamforming technique, we can improve the system
capacity by sending the two streams simultaneously. In addition, the constellation diagram shows
that the variance of the first recovered stream is smaller than that of the second recovered stream,
as the points are less dispersed in the constellation of the first stream. This is because the first
stream uses the most dominant mode of the MIMO channel, so it has the best SNR.
HBPS
The results of HBPS are shown in the following figures. The constellation diagram shows that HBPS
achieves performance similar to QSHB, which means that HBPS is a good choice for the simulated
64x16 MIMO system.
Summary
This example provides a Simulink model of two hybrid beamforming methods, QSHB and HBPS. A
MIMO scattering channel provides a realistic channel model for massive MIMO systems. The
Simulink model is partitioned according to the functions in the signal flow, which gives guidance for
hardware implementation. For a given H, the number of symbols can vary to simulate a variable
channel coherence length. With this Simulink model, various system parameters and new hybrid
beamforming algorithms can be studied, and the system structure facilitates hardware
implementation.
References
[1] Andreas F. Molisch, et al. "Hybrid Beamforming for Massive MIMO: A Survey," IEEE
Communications Magazine, Vol. 55, No. 9, September 2017, pp. 134-141.
[2] Omar El Ayach, et al. "Spatially Sparse Precoding in Millimeter Wave MIMO Systems," IEEE
Transactions on Wireless Communications, Vol. 13, No. 3, March 2014.
[3] Emil Bjornson, Jakob Hoydis, Luca Sanguinetti, "Massive MIMO Networks: Spectral, Energy, and
Hardware Efficiency," Foundations and Trends in Signal Processing, Vol. 11, No. 3-4, 2017.
Display Micro-Doppler Shift of Moving Bicyclist
close_system('BicyclistMicrospectrumExample');
Array Synthesis for Lidar Systems
This example shows how to:
1 Import an antenna element pattern generated with Lumerical tools into MATLAB.
2 Design a linear phased array using the Lumerical antenna pattern for each element in the array.
3 Determine the array element spacing and weighting for the array such that the azimuth pattern
and the elevation pattern (generated from the Lumerical tools) are closely matched over the
desired steering range.
A related example can be found on Lumerical's website at Lumerical Lidar Antenna Example.
Lumerical's example provides more details on how the antenna element is designed with the FDE
solver, how the antenna element design is verified and extracted with the 3D FDTD technique, and
how the array designed using Phased Array System Toolbox is integrated in Lumerical's
INTERCONNECT software.
Introduction
Lidar is used as a perception sensor in autonomous systems. Lidar sensors are capable of ranging
millions of points per second due to their high angular resolution and fast steering speeds. Beam
steering in lidar architectures can be accomplished with optical phased arrays. This example shows
how to design an integrated optical phased-array antenna that can be used for both transmit and
receive functions.
The antenna occupies a large bandwidth. The data set generated with Lumerical tools contains
antenna responses for 50 frequencies.
nfreq = numel(freqVector); % 50 frequency vectors
ant = phased.CustomAntennaElement('FrequencyVector',freqVector,...
'FrequencyResponse',zeros(1,nfreq),...
'AzimuthAngles',az,'ElevationAngles',el,...
'MagnitudePattern',pat_azel,'PhasePattern',zeros(size(pat_azel)));
The array to be designed is a 48-element linear array along the azimuth direction with a Hamming
taper. The number of elements was chosen in this example to provide a narrow beam. The element
spacing is set at 1.2 wavelengths at the center frequency due to physical constraints in building lidar
arrays. Since the spacing is larger than half a wavelength, grating lobes may occur. However, in this
application, the array only scans a limited range around boresight, so grating lobes are not a
concern.
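To see why the grating lobes can be tolerated, the location of the first grating lobe can be estimated from the standard ULA relation sin(theta_g) = sin(theta_0) - lambda/d. A quick back-of-the-envelope check (not part of the shipped example) shows that even at the edge of the scan, the grating lobe falls outside the +/-20 degree region that the physical reflectors pass:

```python
import math

d_over_lambda = 1.2   # element spacing in wavelengths (this example)
steer_deg = 20.0      # worst-case steering angle for the +/-20 degree scan

# First grating lobe: sin(theta_g) = sin(theta_0) - lambda/d
s = math.sin(math.radians(steer_deg)) - 1.0 / d_over_lambda
theta_g = math.degrees(math.asin(s))   # about -29.4 degrees
```

Since |theta_g| exceeds 20 degrees, the grating lobe lands in the angular region blocked by the reflectors.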
c = 3e8;
lambda = c/freqVector(25);
N = 48;
antarray = phased.ULA(N,1.2*lambda,'Element',ant,'Taper',hamming(N));
stv = phased.SteeringVector('SensorArray',antarray,'PropagationSpeed',c);
The following figures show the resulting array patterns at boresight for three frequency values
(frequency bands 1, 25, and 50) to illustrate the 3D beam pattern across the full range. Note how the
main beam changes with frequency. In the plots below, the beams are steered to 0 degrees azimuth.
svang = [0;0]; % boresight steering direction (assumed; [azimuth; elevation] in degrees)
n = 1;
fc = freqVector(n);
pattern(antarray,fc,'Type','powerdb','Weights',stv(fc,svang));
title(sprintf('Array Pattern for Frequency Band %d',n));
snapnow;
n = 25;
fc = freqVector(n);
pattern(antarray,fc,'Type','powerdb','Weights',stv(fc,svang));
title(sprintf('Array Pattern for Frequency Band %d',n));
snapnow;
n = 50;
fc = freqVector(n);
pattern(antarray,fc,'Type','powerdb','Weights',stv(fc,svang));
title(sprintf('Array Pattern for Frequency Band %d',n));
snapnow;
In this example, the array steering in elevation is done using different carrier frequencies. However,
the array steering in azimuth is done by weighting the elements in the linear array. Therefore, our
goal, through optimization, is to find weights and element spacing such that the shape of the beam in
the azimuth cut matches the shape in the elevation cut.
To get the best match across the frequency range, we start with the elevation cut at the middle
frequency value (band 25) as the desired shape. Because the application only requires scanning
between ±20 degrees in azimuth, we focus the pattern within the ±40 degrees region to ensure that
no grating lobes come into the ±20 degrees region during scanning. Note that reflectors will be used
on the physical array to ensure other transmissions are not sent out at angles outside ±20 degrees.
azimuth = -40:40;
n = 25; % Use center frequency as the basis for optimization
fc = freqVector(n);
Beam_d = pattern(antarray,fc,azimuth,0,'Type','efield','Weights',stv(fc,svang),'Normalize',false);
antpat = pattern(ant,fc,azimuth,0,'Type','efield','Normalize',false).';
The objective function is set to minimize the distance between the desired pattern and the one that is
generated as a result of synthesis. For this optimization, we want to generate a common spacing
value between elements and unique real weights for each element to facilitate the implementation
phase in the Lumerical Interconnect tool. In addition, to ensure the lidar array can be realized, a
starting point of 1.1 wavelength is set.
% x(1:N) are the element weight magnitudes; x(end) is the element spacing in wavelengths
objfun = @(x)norm(abs((x(1:N))'*...
    steervec((-(N-1)/2:(N-1)/2)*x(end),azimuth).*antpat)-Beam_d);
x_ini = [w_i_re;lambda_i]; % initial weights and spacing (defined earlier in the example)
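The structure of this objective can be illustrated numerically as follows (a minimal sketch with a hypothetical 8-element array and an idealized, unity element pattern; `steervec` is replaced by an explicit array-manifold computation, and the "desired" beam is synthesized from known weights so the objective's zero is known):

```python
import numpy as np

N = 8                            # hypothetical (the example uses 48)
az = np.arange(-40.0, 41.0)      # azimuth grid, degrees

def manifold(d_wavelengths, az_deg, n=N):
    # Array manifold of an n-element ULA with centered element positions,
    # spacing in wavelengths (explicit stand-in for MATLAB's steervec)
    pos = (np.arange(n) - (n - 1) / 2) * d_wavelengths
    return np.exp(1j * 2 * np.pi * np.outer(pos, np.sin(np.deg2rad(az_deg))))

antpat = np.ones_like(az)        # idealized element pattern

# "Desired" beam synthesized from known weights and spacing
w_true, d_true = np.hamming(N), 1.1
beam_d = np.abs(w_true @ manifold(d_true, az)) * antpat

def objfun(x):
    # x = [element weights..., spacing]; same structure as the MATLAB objfun
    return np.linalg.norm(np.abs(x[:N] @ manifold(x[-1], az)) * antpat - beam_d)

x_true = np.concatenate([w_true, [d_true]])
```

The objective vanishes at the generating weights and spacing, and grows as either is perturbed; an optimizer (fmincon in MATLAB, or scipy.optimize.minimize in this sketch's terms) searches from x_ini for such a minimum.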
The next plot shows the comparison between the desired pattern and the synthesized pattern.
azplot = -40:40;
Beam_d_plot = pattern(antarray,fc,azplot,0,'Type','efield','Weights',stv(fc,svang),'Normalize',false);
antpat_plot = pattern(ant,fc,azplot,0,'Type','efield','Normalize',false).';
Beam_syn_plot = abs((x_o(1:N))'*steervec((-(N-1)/2:(N-1)/2)*x_o(end),azplot).*antpat_plot);
plot(azplot,mag2db(Beam_d_plot),'-',azplot,mag2db(Beam_syn_plot),'--');
legend('Desired','Synthesized')
title(sprintf('Frequency band %d with spacing of %5.2f wavelength',n,x_o(end)));
xlabel('Angle (deg)')
ylabel('Array Pattern')
The figure shows a strong match between the desired pattern and the synthesized pattern within
±20 degrees. Again, anything outside this angle range will be blocked by a reflector in the actual
system.
To verify the resulting weights and element spacing, we steer the array to 20 degrees azimuth at both
frequency bands #1 and #50 and examine if the array performance satisfies the application needs.
n = 50;
fc = freqVector(n);
wmag = x_o(1:N);
svang = [20;0];
azplot = -40:40;
Beam_d_plot = pattern(antarray,fc,azplot,0,'Type','efield','Weights',stv(fc,svang),'Normalize',false);
antpat_plot = pattern(ant,fc,azplot,0,'Type','efield','Normalize',false).';
weights_o = wmag.*steervec((-(N-1)/2:(N-1)/2)*x_o(end),svang);
Beam_syn_plot = abs(weights_o'*steervec((-(N-1)/2:(N-1)/2)*x_o(end),azplot).*antpat_plot);
plot(azplot,mag2db(Beam_d_plot),'-',azplot,mag2db(Beam_syn_plot),'--');
legend('Desired','Synthesized')
title(sprintf('Frequency band %d with spacing of %5.2f wavelength',n,x_o(end)));
xlabel('Angle (deg)')
ylabel('Array Pattern')
snapnow;
n = 1;
fc = freqVector(n);
Beam_d_plot = pattern(antarray,fc,azplot,0,'Type','efield','Weights',stv(fc,svang),'Normalize',false);
antpat_plot = pattern(ant,fc,azplot,0,'Type','efield','Normalize',false).';
weights_o = wmag.*steervec((-(N-1)/2:(N-1)/2)*x_o(end),svang);
Beam_syn_plot = abs(weights_o'*steervec((-(N-1)/2:(N-1)/2)*x_o(end),azplot).*antpat_plot);
plot(azplot,mag2db(Beam_d_plot),'-',azplot,mag2db(Beam_syn_plot),'--');
legend('Desired','Synthesized')
title(sprintf('Frequency band %d with spacing of %5.2f wavelength',n,x_o(end)));
xlabel('Angle (deg)')
ylabel('Array Pattern')
snapnow;
Finally, the following figure shows the resulting weights from the optimization.
Summary
This example shows how array synthesis techniques can be applied to help design a phased array
lidar to achieve a desired beam pattern.
Pedestrian and Bicyclist Classification Using Deep Learning
The movements of different parts of an object placed in front of a radar produce micro-Doppler
signatures that can be used to identify the object. This example uses a convolutional neural network
(CNN) to identify pedestrians and bicyclists based on their signatures.
This example trains the deep learning network using simulated data and then examines how the
network performs at classifying two cases of overlapping signatures.
The data used to train the network is generated using backscatterPedestrian and
backscatterBicyclist from Phased Array System Toolbox™. These functions simulate the radar
backscattering of signals reflected from pedestrians and bicyclists, respectively.
The helper function helperDopplerSignatures computes the short-time Fourier transform (STFT)
of a radar return to generate the micro-Doppler signature. To obtain the micro-Doppler signatures,
use the helper functions to apply the STFT and a preprocessing method to each signal.
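The core operation of that helper — a windowed short-time FFT over slow time — can be sketched as follows (a toy micro-Doppler signal with hypothetical parameters, not the shipped `helperDopplerSignatures`):

```python
import numpy as np

fs = 1000.0                    # hypothetical slow-time sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)

# Toy radar return: a sinusoidally modulated Doppler tone ("micro-Doppler")
fd = 100.0 + 50.0 * np.sin(2 * np.pi * 2.0 * t)   # instantaneous Doppler, Hz
x = np.exp(1j * 2 * np.pi * np.cumsum(fd) / fs)   # phase = integral of frequency

nwin, hop = 128, 32
win = np.hanning(nwin)

# Slide a window over the signal and FFT each frame
frames = [x[i:i + nwin] * win for i in range(0, len(x) - nwin + 1, hop)]
S = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)).T  # freq x time
```

Each column of S is the magnitude spectrum of one time slice; plotted as an image over time and frequency, the oscillating ridge is the micro-Doppler signature.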
[SPed,T,F] = helperDopplerSignatures(xPedRec,Tsamp);
[SBic,~,~] = helperDopplerSignatures(xBicRec,Tsamp);
[SCar,~,~] = helperDopplerSignatures(xCarRec,Tsamp);
Plot the time-frequency maps for the pedestrian, bicyclist, and car realizations.
% Plot the first realization of objects
figure
subplot(1,3,1)
imagesc(T,F,SPed(:,:,1))
ylabel('Frequency (Hz)')
title('Pedestrian')
axis square xy
subplot(1,3,2)
imagesc(T,F,SBic(:,:,1))
xlabel('Time (s)')
title('Bicyclist')
axis square xy
subplot(1,3,3)
imagesc(T,F,SCar(:,:,1))
title('Car')
axis square xy
The normalized spectrograms (STFT absolute values) show that the three objects have quite distinct
signatures. Specifically, the spectrograms of the pedestrian and the bicyclist have rich micro-Doppler
signatures caused by the swing of arms and legs and the rotation of wheels, respectively. By contrast,
in this example, the car is modeled as a point target with rigid body, so the spectrogram of the car
shows that the short-term Doppler frequency shift varies little, indicating little micro-Doppler effect.
Combining Objects
Classifying a single realization as a pedestrian or bicyclist is relatively simple because the pedestrian
and bicyclist micro-Doppler signatures are dissimilar. However, classifying multiple overlapping
pedestrians or bicyclists, with the addition of Gaussian noise or car noise, is much more difficult.
If multiple objects exist in the detection region of the radar at the same time, the received radar
signal is a summation of the detection signals from all the objects. As an example, generate the
received radar signal for a pedestrian and bicyclist with Gaussian background noise.
% Configure Gaussian noise level at the receiver
rx = phased.ReceiverPreamp('Gain',25,'NoiseFigure',10);
xRadarRec = complex(zeros(size(xPedRec)));
for ii = 1:size(xPedRec,3)
xRadarRec(:,:,ii) = rx(xPedRec(:,:,ii) + xBicRec(:,:,ii));
end
Then obtain micro-Doppler signatures of the received signal by using the STFT.
[S,~,~] = helperDopplerSignatures(xRadarRec,Tsamp);
figure
imagesc(T,F,S(:,:,1)) % Plot the first realization
axis xy
xlabel('Time (s)')
ylabel('Frequency (Hz)')
title('Spectrogram of a Pedestrian and a Bicyclist')
Because the pedestrian and bicyclist signatures overlap in time and frequency, differentiating
between the two objects is difficult.
In this example, you train a CNN by using data consisting of simulated realizations of objects with
varying properties—for example, bicyclists pedaling at different speeds and pedestrians with different
heights walking at different speeds. Assuming the radar is fixed at the origin, in one realization, one
object or multiple objects are uniformly distributed in a rectangular area of [5, 45] and [–10, 10]
meters along the X and Y axes, respectively.
The other properties of the three objects that are randomly tuned are as follows:
1) Pedestrians
2) Bicyclists
3) Cars
• Velocity — Uniformly distributed in the interval of [0, 10] meters/second along the X and Y
directions
Radar returns originate from different objects and different parts of objects. Depending on the
configuration, some returns are much stronger than others. Stronger returns tend to obscure weaker
ones. Logarithmic scaling augments the features by making return strengths comparable. Amplitude
normalization helps the CNN converge faster.
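A minimal sketch of that preprocessing — logarithmic scaling followed by amplitude normalization to [0, 1] — on placeholder data (not the shipped helpers):

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for an STFT magnitude map (64 x 64 placeholder)
S = np.abs(rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64)))

S_db = 20 * np.log10(S + np.finfo(float).eps)             # logarithmic scaling
S_norm = (S_db - S_db.min()) / (S_db.max() - S_db.min())  # normalize to [0, 1]
```

The log scale compresses the dynamic range so weak returns remain visible next to strong ones, and the [0, 1] normalization keeps inputs on a consistent scale for training.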
Download Data
The data for this example consists of 20,000 pedestrian, 20,000 bicyclist, and 12,500 car signals
generated by using the helper functions helperBackScatterSignals and
helperDopplerSignatures. The signals are divided into two data sets: one without car noise
samples and one with car noise samples.
For the first data set (without car noise), the pedestrian and bicyclist signals were combined,
Gaussian noise was added, and micro-Doppler signatures were computed to generate 5000 signatures
for each of the five scenes to be classified.
In each category, 80% of the signatures (that is, 4000 signatures) are reserved for the training data
set while 20% of the signatures (that is, 1000 signatures) are reserved for the test data set.
To generate the second data set (with car noise), the procedure for the first data set was followed,
except that car noise was added to 50% of the signatures. The proportion of signatures with and
without car noise is the same in the training and test data sets.
Download and unzip the data into your temporary directory, whose location is specified by the
MATLAB® tempdir command. Due to the large size of the data set, this process may take several
minutes. If you have the data in a folder different from tempdir, change the directory name in the
subsequent instructions.
• trainDataNoCar.mat contains the training data set trainDataNoCar and its label set
trainLabelNoCar.
• testDataNoCar.mat contains the test data set testDataNoCar and its label set
testLabelNoCar.
• trainDataCarNoise.mat contains the training data set trainDataCarNoise and its label set
trainLabelCarNoise.
• testDataCarNoise.mat contains the test data set testDataCarNoise and its label set
testLabelCarNoise.
• TF.mat contains the time and frequency information for the micro-Doppler signatures.
Network Architecture
Create a CNN with five convolution layers and one fully connected layer. The first four convolution
layers are followed by a batch normalization layer, a rectified linear unit (ReLU) activation layer, and
a max pooling layer. In the last convolution layer, the max pooling layer is replaced by an average
pooling layer. The output layer is a classification layer after softmax activation. For network design
guidance, see “Deep Learning Tips and Tricks” (Deep Learning Toolbox).
layers = [
imageInputLayer([size(S,1),size(S,2),1],'Normalization','none')
convolution2dLayer(10,16,'Padding','same')
batchNormalizationLayer
reluLayer
maxPooling2dLayer(10,'Stride',2)
convolution2dLayer(5,32,'Padding','same')
batchNormalizationLayer
reluLayer
maxPooling2dLayer(10,'Stride',2)
convolution2dLayer(5,32,'Padding','same')
batchNormalizationLayer
reluLayer
maxPooling2dLayer(10,'Stride',2)
convolution2dLayer(5,32,'Padding','same')
batchNormalizationLayer
reluLayer
maxPooling2dLayer(5,'Stride',2)
convolution2dLayer(5,32,'Padding','same')
batchNormalizationLayer
reluLayer
averagePooling2dLayer(2,'Stride',2)
fullyConnectedLayer(5)
softmaxLayer
classificationLayer]
layers =
24x1 Layer array with layers:
Specify the optimization solver and the hyperparameters to train the CNN using trainingOptions.
This example uses the ADAM optimizer and a mini-batch size of 128. Train the network using either a
CPU or GPU. Using a GPU requires Parallel Computing Toolbox™ and a CUDA® enabled NVIDIA®
GPU with compute capability 3.0 or higher. For information on other parameters, see
trainingOptions (Deep Learning Toolbox). This example uses a GPU for training.
'Verbose',false, ...
'Plots','training-progress');
Load the data set without car noise and use the helper function helperPlotTrainData to plot one
example of each of the five categories in the training data set.
helperPlotTrainData(trainDataNoCar,trainLabelNoCar,T,F)
Train the CNN that you created. You can view the accuracy and loss during the training process. In
30 epochs, the training process achieves almost 95% accuracy.
trainedNetNoCar = trainNetwork(trainDataNoCar,trainLabelNoCar,layers,options);
Use the trained network and the classify function to obtain the predicted labels for the test data
set testDataNoCar. The variable predTestLabel contains the network predictions. The network
achieves about 95% accuracy for the test data set without the car noise.
predTestLabel = classify(trainedNetNoCar,testDataNoCar);
testAccuracy = mean(predTestLabel == testLabelNoCar)
testAccuracy = 0.9530
Use a confusion matrix to view detailed information about prediction performance for each category.
The confusion matrix for the trained network shows that, in each category, the network predicts the
labels of the signals in the test data set with a high degree of accuracy.
figure
confusionchart(testLabelNoCar,predTestLabel);
To analyze the effects of car noise, classify data containing car noise with the trainedNetNoCar
network, which was trained without car noise.
load(fullfile(tempdir,'PedBicCarData','testDataCarNoise.mat'))
Input the car-noise-corrupted test data set to the network. The prediction accuracy for the test data
set with the car noise drops significantly, to around 70%, because the network never saw training
samples containing car noise.
predTestLabel = classify(trainedNetNoCar,testDataCarNoise);
testAccuracy = mean(predTestLabel == testLabelCarNoise)
testAccuracy = 0.7176
The confusion matrix shows that most prediction errors occur when the network takes in scenes from
the "pedestrian," "pedestrian+pedestrian," or "pedestrian+bicyclist" classes and classifies them as
"bicyclist."
confusionchart(testLabelCarNoise,predTestLabel);
Car noise significantly impedes the performance of the classifier. To solve this problem, train the CNN
using data that contains car noise.
load(fullfile(tempdir,'PedBicCarData','trainDataCarNoise.mat'))
Retrain the network by using the car-noise-corrupted training data set. In 30 epochs, the training
process achieves almost 90% accuracy.
trainedNetCarNoise = trainNetwork(trainDataCarNoise,trainLabelCarNoise,layers,options);
Input the car-noise-corrupted test data set to the network trainedNetCarNoise. The prediction
accuracy is about 87%, which is approximately 15% higher than the performance of the network
trained without car noise samples.
predTestLabel = classify(trainedNetCarNoise,testDataCarNoise);
testAccuracy = mean(predTestLabel == testLabelCarNoise)
testAccuracy = 0.8728
The confusion matrix shows that the network trainedNetCarNoise performs much better at
predicting scenes with one pedestrian and scenes with two pedestrians.
confusionchart(testLabelCarNoise,predTestLabel);
Case Study
To better understand the performance of the network, examine its performance in classifying
overlapping signatures. This section is just for illustration. Due to the non-deterministic behavior of
GPU training, you may not get the same classification results in this section when you rerun this
example.
For example, signature #4 of the car-noise-corrupted test data, which does not have car noise, has
two bicyclists with overlapping micro-Doppler signatures. The network correctly predicts that the
scene has two bicyclists.
k = 4;
imagesc(T,F,testDataCarNoise(:,:,:,k))
axis xy
xlabel('Time (s)')
ylabel('Frequency (Hz)')
title('Ground Truth: '+string(testLabelCarNoise(k))+', Prediction: '+string(predTestLabel(k)))
From the plot, the signature appears to be from only one bicyclist. Load the data in
CaseStudyData.mat for the two objects in the scene. The data contains return signals summed along
the fast time. Apply the STFT to each signal.
load CaseStudyData.mat
M = 200; % FFT window length
beta = 6; % Kaiser window parameter
w = kaiser(M,beta); % Kaiser window
R = floor(1.7*(M-1)/(beta+1)); % rough estimate of the hop size
noverlap = M-R; % overlap length
[Sc,F,T] = stft(x,1/Tsamp,'Window',w,'FFTLength',M*2,'OverlapLength',noverlap);
for ii = 1:2
subplot(1,2,ii)
imagesc(T,F,10*log10(abs(Sc(:,:,ii))))
xlabel('Time (s)')
ylabel('Frequency (Hz)')
title('Bicyclist')
axis square xy
title(['Bicyclist ' num2str(ii)])
c = colorbar;
c.Label.String = 'dB';
end
The amplitudes of the Bicyclist 2 signature are much weaker than those of Bicyclist 1, and the
signatures of the two bicyclists overlap. When they overlap, the two signatures cannot be visually
distinguished. However, the neural network classifies the scene correctly.
Another case of interest is when the network confuses car noise with a bicyclist, as in signature #267
of the car-noise-corrupted test data:
figure
k = 267;
imagesc(T,F,testDataCarNoise(:,:,:,k))
axis xy
xlabel('Time (s)')
ylabel('Frequency (Hz)')
title('Ground Truth: '+string(testLabelCarNoise(k))+', Prediction: '+string(predTestLabel(k)))
The signature of the bicyclist is weak compared to that of the car, and the signature has spikes from
the car noise. Because the signature of the car closely resembles that of a bicyclist pedaling or a
pedestrian walking at a low speed, and has little micro-Doppler effect, there is a high possibility that
the network will classify the scene incorrectly.
References
[1] Chen, V. C. The Micro-Doppler Effect in Radar. London: Artech House, 2011.
[2] Gurbuz, S. Z., and Amin, M. G. "Radar-Based Human-Motion Recognition with Deep Learning:
Promising Applications for Indoor Monitoring." IEEE Signal Processing Magazine. Vol. 36, Issue 4,
2019, pp. 16–28.
[3] Belgiovane, D., and C. C. Chen. "Micro-Doppler Characteristics of Pedestrians and Bicycles for
Automotive Radar Sensors at 77 GHz." In 11th European Conference on Antennas and Propagation
(EuCAP), 2912–2916. Paris: European Association on Antennas and Propagation, 2017.
[4] Angelov, A., A. Robertson, R. Murray-Smith, and F. Fioranelli. "Practical Classification of Different
Moving Targets Using Automotive Radar and Deep Neural Networks." IET Radar, Sonar &
Navigation. Vol. 12, Number 10, 2017, pp. 1082–1089.
[5] Parashar, K. N., M. C. Oveneke, M. Rykunov, H. Sahli, and A. Bourdoux. "Micro-Doppler Feature
Extraction Using Convolutional Auto-Encoders for Low Latency Target Classification." In 2017 IEEE
Radar Conference (RadarConf), 1739–1744. Seattle: IEEE, 2017.
FPGA Based Beamforming in Simulink: Part 2 - Code Generation
This tutorial uses HDL Coder™ to generate HDL code from the Simulink® model developed in part
one and verifies the HDL code using HDL Verifier™. HDL Verifier™ generates a cosimulation test
bench model to verify the behavior of the automatically generated HDL code. The test bench uses
ModelSim® for cosimulation.
The Phased Array System Toolbox™ Simulink blocks model operations on framed, floating-point data
and provide the behavioral reference model. We use this behavioral model to verify the results of the
implementation model and, ultimately, the automatically generated HDL code.
HDL Coder™ generates portable, synthesizable Verilog® and VHDL® code for over 300 Simulink
blocks that support HDL code generation. Those Simulink blocks operate on serial data using fixed-
point arithmetic with proper delays to enable pipelining by the synthesis tool.
HDL Verifier™ lets you test and verify Verilog® and VHDL® designs for FPGAs, ASICs, and SoCs.
We'll verify RTL generated from our Simulink model against a test bench running in Simulink® using
cosimulation with an HDL simulator.
Implementation Model
This tutorial assumes that you have a properly setup Simulink model that contains a subsystem with a
beamforming algorithm designed using Simulink blocks that use fixed-point arithmetic and support
HDL code generation. “FPGA Based Beamforming in Simulink: Part 1 - Algorithm Design” on page 17-
748 shows how to create such a model.
Alternatively, if you start with a new model, you can run hdlsetup (HDL Coder) to configure the
Simulink model for HDL code generation. And, to configure the Simulink model for test bench
creation needed for verification, you must open Simulink's Model Settings, select Test Bench under
HDL Code Generation in the left panel, and check HDL test bench and Cosimulation model in the Test
Bench Generation Output properties group.
Run the model created in the “FPGA Based Beamforming in Simulink: Part 1 - Algorithm Design” on
page 17-748 to display the results. You can run the Simulink model by clicking the Play button or
calling the sim command on the MATLAB command line as shown below. Use the Time Scope blocks
to compare the output frames visually.
modelname = 'SimulinkBeamformingHDLWorkflowExample';
open_system(modelname);
sim(modelname);
Model Settings
Once you verify that your fixed-point implementation model produces the same results as your
floating-point behavioral model, you can generate the HDL code and test bench. To do that, you must
first set the appropriate HDL Code Generation parameters in Simulink via the Configuration
Parameters dialog. For this example, we set the following parameters in Model Settings under HDL
Code Generation:
• Target: Xilinx Vivado synthesis tool; Virtex7 family; Device xc7vx485t; package ffg1761, speed -1;
and target frequency of 300 MHz.
• Optimization: Uncheck all optimizations except Balance delays
• Global Settings: Set the Reset type to Asynchronous
• Test Bench: Select HDL test bench and Cosimulation model
We turn off the optimizations because some blocks used in our implementation are already HDL-
optimized, and the global optimizations could conflict with them.
Once you've set Simulink's Model Settings, you can use HDL Coder™ to generate HDL code for the
HDL Algorithm subsystem. (For an example, see “Generate HDL Code from Simulink Model” (HDL
Coder).) Use HDL Verifier™ to generate a “SystemVerilog DPI Test Bench” (HDL Coder) test bench
model.
% Uncomment these two lines to generate HDL code and test bench.
% makehdl([modelname '/HDL Algorithm']); % Generate HDL code
% makehdltb([modelname '/HDL Algorithm']); % Generate Cosimulation test bench
Notice that when you execute the makehdl command, information is displayed in the MATLAB
command window, including the amount of delay added during the automatic code generation
process. In this case, 24 delays are added, which results in an extra delay of 24*1 ms = 24 ms. This
delay appears in the final results, which have a total delay of 79 ms. Also, because of this extra delay
added during automatic code generation, the output of the floating-point behavioral model needs to
be delay-balanced by adding 24 delays to the original 55. This aligns the output of the behavioral
model with the implementation model as well as the cosimulation output.
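The latency bookkeeping above can be summarized as:

```python
pipeline_delays = 55   # delays added in the HDL Algorithm subsystem (from Part 1)
codegen_delays = 24    # delays reported by makehdl for delay balancing
sample_time_ms = 1     # 1 ms sample time in this model

total_delay = pipeline_delays + codegen_delays   # delays to add to the behavioral path
total_latency_ms = total_delay * sample_time_ms  # end-to-end latency in milliseconds
```

The 79-delay total is the value used later for the HDL Latency block.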
After generating the HDL code and test bench a new Simulink model named gm_<modelname>_mq
containing a ModelSim® block is created in your working directory, which looks like this:
% Uncomment the following two lines to open the test bench model.
% modelname = ['gm_',modelname,'_mq'];
% open_system(modelname);
At this point, you might want to change the delay setting in the HDL Latency block to 79 to account
for the 24 delays added by the code generation process. Using a delay of 79 ensures that the
behavioral model output is time-aligned with the implementation and cosimulation outputs.
The following steps launch ModelSim; therefore, make sure that the command to start ModelSim,
vsim, is on the system path of your machine.
To run the cosimulation model, first double-click the blue rectangular box in the upper-left corner of
the Simulink test model to launch ModelSim.
Run the Simulink test bench model to display the simulation results. You can run the Simulink model
by clicking the Play button or calling the sim command on the MATLAB command line as shown
below. The test bench model includes Time Scope blocks to compare the output of the cosimulation
performed with ModelSim with the output of the HDL subsystem in Simulink.
% Uncomment the following line, if ModelSim is installed, to run the test bench.
% sim(modelname);
After you start ModelSim, running the Simulink test bench model populates Questa Sim with the HDL model's waveforms and populates the Time Scopes in Simulink. Below are examples of the results in the Questa Sim and Simulink scopes.
17 Featured Examples
NOTE: You must restart Questa Sim each time you want to run the Simulink simulation. You can do
that by executing "restart" at the Questa Sim command line. Alternatively, you can quit Questa Sim
and re-launch it by double-clicking the blue box in the upper-left corner of the Simulink test bench
model.
The Simulink scope below shows both the cosimulation and the HDL model (DUT) producing a 79 ms delayed version of the original signal from the behavioral model, as expected, with no difference between the two waveforms. The 79 ms delay is due to the original 55 ms delay added in the HDL Algorithm subsystem to enable pipelining by the synthesis tool, plus an additional 24 ms delay from the delay balancing performed during automatic HDL code generation. The addition of these 24 delays is reported during the code generation step above.
The Simulink scopes comparing the results of the cosimulation can be found in the test bench model inside the Compare subsystem, which is at the output of the HDL Algorithm_mq subsystem.
% Uncomment the following line to open the subsystem with the scopes.
% open_system([modelname,'/Compare/Assert_beamformingOutHDL'])
Summary
This example is the second of a two-part tutorial series on how to automatically generate HDL code
for a fixed-point, sample-based beamforming algorithm and verify the generated code in Simulink.
The first part of the tutorial “FPGA Based Beamforming in Simulink: Part 1 - Algorithm Design” on
page 17-748 shows how to develop an algorithm in Simulink suitable for implementation on an
FPGA. This example showed how to set up a model to generate the HDL code and a cosimulation test bench for a Simulink subsystem created with blocks that support HDL code generation. It also showed how to set up and launch ModelSim to cosimulate the HDL code and compare its output to the output generated by the HDL implementation model.
The Phased Array System Toolbox™ is used to design and verify the floating-point functional
algorithm, which provides the behavioral reference model. The behavioral model is then used to
verify the results of the fixed-point, implementation model used to generate HDL code.
Fixed-Point Designer™ provides data types and tools for developing fixed-point and single-precision
algorithms to optimize performance on embedded hardware. You can perform bit-true simulations to
observe the impact of limited range and precision without implementing the design on hardware.
There are three key modeling concepts to keep in mind when preparing a Simulink® model to target FPGAs:
• Sample-based processing, which models the sample-by-sample streaming of data through hardware
• Fixed-point data types, which model the limited range and precision of hardware arithmetic
• Delays, which enable pipelining and must be balanced across all parallel paths
Beamforming Algorithm
In this example, we use a Phase-Shift Beamformer as the behavioral algorithm, which is re-implemented in the HDL Algorithm subsystem using Simulink blocks that support HDL code generation. The beamformer's job is to calculate the phase shift required on each of the ten channels to maximize the received signal power in the direction of the incident angle. Below is the Simulink model with the behavioral algorithm and its corresponding implementation algorithm for an FPGA.
modelname = 'SimulinkBeamformingHDLWorkflowExample';
open_system(modelname);
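As a language-neutral sketch of the phase-shift beamforming idea (in Python with NumPy rather than the toolbox's MATLAB blocks; the tone, snapshot count, and incident angle below are illustrative assumptions):

```python
import numpy as np

# Hypothetical setup: 10-element half-wavelength ULA, plane wave from 'az'.
num_elems = 10
d = 0.5                                   # element spacing in wavelengths
az = 45.0                                 # illustrative incident angle, degrees

# Element positions (in wavelengths), centered on the array.
pos = (np.arange(num_elems) - (num_elems - 1) / 2) * d

# Steering vector: the per-element phase of a plane wave arriving from 'az'.
sv = np.exp(1j * 2 * np.pi * pos * np.sin(np.deg2rad(az)))

# Simulate that plane wave hitting all elements (256 snapshots of a tone).
t = np.arange(256) / 256.0
baseband = np.exp(1j * 2 * np.pi * 5 * t)
x = np.outer(baseband, sv)                # 256 x 10 received snapshots

# Phase-shift beamforming: undo each element's phase and sum the channels.
y = x @ np.conj(sv) / num_elems

# Coherent combining recovers the unit-amplitude tone.
print(round(float(np.max(np.abs(y))), 3))
```

Weighting each channel by the conjugate steering vector is exactly the "phase shift per channel" described above; signals from other directions do not add coherently and are attenuated.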
FPGA Based Beamforming in Simulink: Part 1 - Algorithm Design
The Simulink model has two branches. The top branch is the behavioral, floating-point model of our
algorithm and the bottom branch is the functionally equivalent fixed-point version using blocks that
support HDL code generation. Besides plotting the output of both branches to compare the two, we
also calculate and plot the difference, or error, between both outputs.
Notice that there is a delay block at the output of the behavioral model. This is necessary because the implementation algorithm uses 55 delays to enable pipelining, which creates latency that must be accounted for. Accounting for this latency is called delay balancing, and it time-aligns the output of the behavioral model with that of the implementation model so the results are easier to compare.
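The delay-balancing idea can be sketched in Python; only the 55-sample latency figure comes from this example, and the signal is a stand-in for the beamformer output:

```python
import numpy as np

# The implementation path emits the same samples as the behavioral path,
# but 55 steps late (zeros come out while the pipeline fills).
latency = 55
behavioral = np.sin(2 * np.pi * 0.01 * np.arange(300))
implementation = np.concatenate([np.zeros(latency), behavioral])[:300]

# Delay balancing: delay the behavioral output by the same amount before
# comparing the two paths sample by sample.
balanced = np.concatenate([np.zeros(latency), behavioral])[:300]
print(bool(np.allclose(balanced, implementation)))
```

Without the balancing delay, the sample-by-sample error plot would show a large spurious difference even though the two algorithms are functionally equivalent.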
To synthesize a received signal at the phased array antenna, the model includes a subsystem that
generates a multi-channel signal. The Baseband Multi-channel Signal subsystem models a
transmitted waveform and the received target echo at the incident angle captured via a 10-element
antenna array. The subsystem also includes a receiver pre-amp model to account for receiver noise.
This subsystem generates the input stimulus for our behavioral and implementation models.
The model includes a Serialization & Quantization subsystem, which converts the floating-point, frame-based signals to the fixed-point, sample-based signals necessary for modeling streaming data in hardware. Sample-based processing was chosen because our system runs slower than 400 MHz; therefore, we are optimizing for resources instead of throughput.
The input signal to the serialization subsystem has 10 channels with 300 samples per channel, i.e., a 300x10 signal. The subsystem serializes, or unbuffers, the signal, producing a sample-based 1x10 signal, i.e., one sample per channel, which is then quantized to meet the requirements of our system.
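As a rough Python (NumPy) sketch of what the serialization (unbuffer) step does — the model itself uses Simulink blocks; the random data here is only a placeholder:

```python
import numpy as np

# A 300x10 frame-based signal (300 samples, 10 channels), as in the model.
rng = np.random.default_rng(0)
frame = rng.standard_normal((300, 10))

# Unbuffering: emit one 1x10 vector (one sample per channel) per time step.
samples = [frame[k, :] for k in range(frame.shape[0])]

print(len(samples), samples[0].shape)
```

Each 1x10 vector corresponds to one clock cycle of the streaming hardware interface, which is why sample-based processing maps naturally onto an FPGA.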
The quantized signal is a signed, 12-bit word length, 19-bit fraction length data type. This precision was chosen because we are targeting a Xilinx® Virtex®-7 FPGA connected to a 12-bit ADC. The fraction length was chosen to accommodate the maximum range of the input signal.
The HDL Algorithm subsystem, which is targeted for HDL code generation, implements the beamformer using Simulink blocks that support HDL code generation. The Angle2SteeringVec subsystem calculates the signal delay at each antenna element of a uniform linear array (ULA). The delay is then fed to a multiply-and-accumulate (MAC) subsystem to perform the beamforming.
The algorithm in the HDL Algorithm subsystem is functionally equivalent to the phase-shift beamforming behavioral algorithm but can generate HDL code. There are three main differences that enable this subsystem to generate efficient HDL code:
To ensure proper clock timing, any delay added to one branch of the implementation model must be
matched to all other parallel branches as seen above. The Angle2SteeringVec subsystem, for example,
added 36 delays; therefore, the top branch of the HDL Algorithm subsystem includes a delay of 36
samples right before the MAC subsystem. Likewise, the MAC subsystem used 19 delays, which must
be balanced by adding 19 delays to the output of the Angle2SteeringVec subsystem. Let's look inside
the MAC subsystem to account for the 19 delays.
% Open the MAC subsystem.
open_system([modelname '/HDL Algorithm/MAC']);
set_param(modelname,'SimulationCommand','update')
Looking at the very bottom branch of the MAC subsystem, we see a delay block, followed by the complex multiply block, which contains its own internal delay, then another delay block, followed by 4 more delay blocks, for a total of 19 delays. The delay values are defined in the PreLoadFcn callback in Model Properties.
The Angle2SteeringVec subsystem breaks the task into a few steps to calculate the steering vector
from the signal's angle of arrival. It first calculates the signal's arrival delay at each sensor by matrix
multiplying the antenna element position in the array by the signal's incident direction. The delays
are then fed to the SinCos subsystem which calculates the trigonometric functions sine and cosine
using the simple and efficient CORDIC algorithm.
% Open the Angle2SteeringVec subsystem.
open_system([modelname '/HDL Algorithm/Angle2SteeringVec']);
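The CORDIC approach mentioned above can be sketched in Python. This is a generic rotation-mode CORDIC using only shifts and adds, not the toolbox's SinCos implementation, and the iteration count is illustrative:

```python
import math

# Rotation-mode CORDIC: computes (cos(theta), sin(theta)) with shift-and-add
# micro-rotations. Valid for |theta| up to about 1.74 rad without argument
# reduction; 'iterations' sets the precision (~1 bit per iteration).
def cordic_sincos(theta, iterations=16):
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    gain = 1.0
    for a in angles:
        gain *= math.cos(a)               # compensates the CORDIC magnitude growth

    x, y, z = 1.0, 0.0, theta             # start on the unit x-axis
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0       # rotate toward the residual angle
        x, y, z = (x - d * y * 2.0 ** -i,
                   y + d * x * 2.0 ** -i,
                   z - d * angles[i])
    return x * gain, y * gain             # (cos(theta), sin(theta))

c, s = cordic_sincos(math.pi / 6)
print(round(c, 3), round(s, 3))           # approximately cos(30 deg), sin(30 deg)
```

The appeal for FPGAs is that each micro-rotation needs only shifts, adds, and a small lookup table — no multipliers — which is why CORDIC is called "simple and efficient" here.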
Because our design consists of a 10-element ULA spaced at half-wavelength, the antenna element position is based on the spacing between each antenna element measured outward from the center of the antenna array. We can define the element positions as a vector of 10 numbers ranging from -6.7453 to 6.7453, i.e., with a spacing of 1/2 wavelength, which is 2.99/2. Given that we are using fixed-point arithmetic, the data type used for the element spacing vector is fixdt(1,8,4), i.e., a signed, 8-bit word length and 4-bit fraction length numeric data type.
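Here is a Python sketch of that element-position vector and a hand-rolled fixdt(1,8,4)-style quantizer. The 100 MHz carrier is an assumption inferred from the stated half-wavelength of about 2.99/2 m, and the quantizer is a simplified round-and-saturate model, not Fixed-Point Designer itself:

```python
import numpy as np

c = 299792458.0
fc = 1e8                        # assumed 100 MHz carrier, so lambda/2 ~ 2.99/2 m
half_wavelength = c / fc / 2
pos = (np.arange(10) - 4.5) * half_wavelength   # -6.7453 ... 6.7453 meters

# fixdt(1,8,4)-style quantizer: signed, 8-bit word, 4-bit fraction.
# Step size is 2^-4 = 0.0625; representable range is [-8, 7.9375].
def quantize(x, word=8, frac=4):
    step = 2.0 ** -frac
    q = np.round(np.asarray(x) / step)
    q = np.clip(q, -2 ** (word - 1), 2 ** (word - 1) - 1)   # saturate
    return q * step

print(round(float(pos[-1]), 4), float(quantize(pos[-1])))
```

With a 4-bit fraction, the outermost element position 6.7453 is stored as 6.75 — a quantization error well inside the representable range, which is why fixdt(1,8,4) suffices here.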
Deserialization
To compare your sample-based, fixed-point implementation design with the floating-point, frame-based behavioral design, you need to deserialize the output of the implementation subsystem and convert it to a floating-point data type. Alternatively, you can compare the results directly with sample-based signals, but then you must unbuffer the output of the behavioral model to match the sample-based signal output from the implementation algorithm. In this case, you only need to convert the output of the HDL Algorithm subsystem to floating point by setting the Data Type Conversion block's output data type to double.
Run the model to display the results. You can run the Simulink model by clicking the Play button or
calling the sim command in the MATLAB command line. Use the scopes to compare the outputs
visually.
sim(modelname);
As seen in the Time Scope showing the Beamformed Signal and Beamformed Signal (HDL), the two signals are nearly identical, with an error on the order of 10^-3 visible in the Error scope. This shows that the HDL Algorithm subsystem produces the same results as the behavioral model to within quantization error, an important first step before generating HDL code.
Because the HDL model uses 55 delays, the scope titled HDL Beamformed Signal is delayed by 55 ms compared to the original transmitted, or beamformed, signal shown on the Behavioral Beamformed Signal scope.
Summary
This example is the first of a two-part tutorial series on how to design an FPGA implementation-ready
algorithm, automatically generate HDL code, and verify the HDL code in Simulink. This example
showed how to use blocks from the Phased Array System Toolbox to create a behavioral model, to
serve as a golden reference, and how to create a subsystem for implementation using Simulink blocks
that support HDL code generation. It also compared the output of the implementation model to the
output of the corresponding behavioral model to verify that the two algorithms are functionally
equivalent.
Once you verify that your implementation algorithm is functionally equivalent to your golden
reference, you can use HDL Coder™ for “HDL Code Generation from Simulink” (HDL Coder) and
HDL Verifier™ to “Generate a Cosimulation Model” (HDL Coder) test bench.
The second part of this two-part tutorial series “FPGA Based Beamforming in Simulink: Part 2 - Code
Generation” on page 17-741 shows how to generate HDL code from the implementation model and
verify that the generated HDL code produces the same results as the floating-point behavioral model
as well as the fixed-point implementation model.
Squinted Spotlight Synthetic Aperture Radar (SAR) Image Formation
Radar Configuration
Consider an airborne SAR operating in C-band with a 4 GHz carrier frequency and a signal bandwidth of 50 MHz, which yields a range resolution of 3 meters. The radar system collects data at a squint angle of 33 degrees from broadside, as shown in the figure above. The delay corresponds in general to the slant range between the target and the platform. For a SAR system, the slant range varies over time as the platform traverses a path orthogonal to the direction of the antenna beam. The section below focuses on defining the parameters of the transmitted waveform. The LFM sweep bandwidth can be decided based on the desired range resolution.
c = physconst('LightSpeed');
The signal bandwidth is a parameter derived from the desired range resolution.
rangeResolution = 3; % meters
bw = c/(2*rangeResolution);
fc = 4e9; % Hz, carrier frequency
prf = 1000; % Hz
aperture = 4; % sq. meters
tpd = 3*10^-6; % sec
fs = 120*10^6; % Hz
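As a quick sanity check of the bandwidth-resolution relationship used above, ΔR = c/(2·B), here is a short Python computation (values match the 3 m resolution stated in the text):

```python
# Solve ΔR = c / (2*B) for the bandwidth that gives 3 m range resolution.
c = 299792458.0
rangeResolution = 3.0
bw = c / (2 * rangeResolution)
print(round(bw / 1e6, 2))    # bandwidth in MHz, close to the stated 50 MHz
```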
Assume the speed of the aircraft is 100 m/s with a flight duration of 4 seconds.
speed = 100; % m/s
flightDuration = 4; % sec
maxRange = 2500;
truncrangesamples = ceil((2*maxRange/c)*fs);
fastTime = (0:1/fs:(truncrangesamples-1)/fs);
% Set the reference range for the cross-range processing.
Rc = 1e3; % meters
Configure the SAR transmitter and receiver. The antenna looks in the broadside direction orthogonal
to the flight direction.
Scene Configuration
In this example, two static point targets are configured at the locations specified below. The entire scene, as shown later in the simulation, lies ahead of the platform. The data collection ends before the airborne platform is abreast of the target location. All targets have a mean RCS value of 1 square meter.
targetpos= [900,0,0;1000,-30,0]';
targetvel = [0,0,0;0,0,0]';
The squint angle calculation depends on the flight path and the center of the target scene, which is located at nearly 950 meters in this case.
squintangle = atand(600/950);
target = phased.RadarTarget('OperatingFrequency', fc, 'MeanRCS', [1,1]);
pointTargets = phased.Platform('InitialPosition', targetpos,'Velocity',targetvel);
% The figure below describes the ground truth based on the target
% locations.
figure(1);
h = axes;plot(targetpos(2,1),targetpos(1,1),'*b');hold all;plot(targetpos(2,2),targetpos(1,2),'*r');
set(h,'Ydir','reverse');xlim([-50 10]);ylim([800 1200]);
title('Ground Truth');ylabel('Range');xlabel('Cross-Range');
The following section describes how the system operates based on the above configuration. Specifically, it shows how data collection is performed for a SAR platform. As the platform moves in the cross-range direction, pulses are transmitted and received in directions defined by the squint angle with respect to the flight path. A collection of pulses gives the phase history of the targets lying in the illumination region as the platform moves. The longer a target stays in the illumination region, the better the cross-range resolution for the entire image, because the process of range and cross-range focusing is generalized for the entire scene.
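The pulse-to-pulse phase history described above can be sketched numerically in Python. The platform path, pulse count, and target position below are illustrative stand-ins, not the exact simulation values; only the 4 GHz carrier, 1 kHz PRF, and 100 m/s speed come from this example:

```python
import numpy as np

c = 299792458.0
fc = 4e9                            # 4 GHz carrier, as in this example
prf = 1000.0                        # pulse repetition frequency, Hz
speed = 100.0                       # platform speed, m/s (as stated above)
target = np.array([900.0, 0.0])     # illustrative target (range, cross-range)

npulses = 200                       # illustrative pulse count
eta = np.arange(npulses) / prf      # slow-time instants
platform = np.column_stack([np.zeros(npulses), speed * eta])

# The slant range changes pulse to pulse, so the round-trip phase does too;
# this sequence of phases is the target's phase history.
slant = np.linalg.norm(target - platform, axis=1)
phase_history = np.exp(-1j * 4 * np.pi * fc * slant / c)

print(phase_history.shape)
```

It is exactly this slow-time phase variation that the range and azimuth compression steps later exploit to focus the image.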
rxsig = zeros(truncrangesamples,numpulses);
for ii = 1:numpulses
    % Update radar platform and target position
    [radarpos, radarvel] = radarPlatform(slowTime);
    [targetpos,targetvel] = pointTargets(slowTime);
    % Transmit the pulse, propagate it, and collect the target echo into
    % rxsig(:,ii); that code is not shown in this excerpt.
end
kc = (2*pi*fc)/c;
% Compensate for the Doppler shift due to the squint angle
rxsig=rxsig.*exp(-1i.*2*(kc)*sin(deg2rad(squintangle))*repmat(speed*eta1,1,truncrangesamples)).';
The received signal can now be visualized as a collection of multiple pulses transmitted in the cross-
range direction. The plots show the real part of the signal for the two targets. The chirps appear
tilted due to the squint angle of the antenna.
Range compression achieves the desired range resolution for the 50 MHz bandwidth. The figure below shows the response after range compression has been performed on the received signal. The phase histories of the two targets are clearly visible along the cross-range direction, and range focusing has been achieved.
Azimuth Compression
There are multiple techniques to process the cross-range data and form the final image from the raw SAR data once range compression has been achieved. In essence, range compression provides resolution in the fast-time, or range, direction, and resolution in the cross-range direction is achieved by azimuth, or cross-range, compression. The Range Migration algorithm for the squinted case is demonstrated in this example. The azimuth focusing needs to account for the squint induced by the antenna tilt.
rma_processed = helperSquintRangeMigration(cdata,fastTime,fc,fs,prf,speed,numpulses,c,Rc,squintangle);
Plot the focused SAR image using the range migration algorithm. Only a section of the image formed via the range migration algorithm is shown, to accurately locate the targets. The range migration algorithm, as shown in [1], [2], and [3], provides theoretical resolution in the cross-track as well as the along-track direction.
figure(2);
imagesc(abs(rma_processed(2300:3600,1100:1400).'));
title('SAR Data focused using Range Migration algorithm ')
xlabel('Cross-Range Samples')
ylabel('Range Samples')
Summary
This example shows how to simulate and develop squinted spotlight SAR processing using an LFM signal in an airborne data collection scenario. The example also demonstrates image formation from the received signal via a modified range migration algorithm that handles the effect of the squint.
References
1 Cafforio, C., C. Prati, and F. Rocca. "SAR Data Focusing Using Seismic Migration Techniques." IEEE Transactions on Aerospace and Electronic Systems 27, no. 2 (1991): 194-207.
2 Soumekh, M. Synthetic Aperture Radar Signal Processing with MATLAB Algorithms. John Wiley & Sons, Inc., 1999.
3 Stolt, R. H. "Migration by Fourier Transform Techniques." Geophysics 43 (1978): 23-48.
Appendix
This function demonstrates the range migration algorithm for imaging a side-looking synthetic aperture radar. The pulse-compressed synthetic aperture data is the input to this algorithm.
kaz = 2*pi*linspace(-prf/2,prf/2,numPulses)./speed;
Generate a matrix of the cross-range wavenumbers to match the size of the received two-dimensional SAR signal.
kc = 2*pi*fc/3e8;
kazimuth = kaz.';
kus=2*(kc)*sin(deg2rad(squintangle));
kx = krange.^2-(kazimuth+kus).^2;
The wavenumber has been modified to accommodate the shift due to the squint and to achieve azimuth focusing.
thetaRc = deg2rad(squintangle);
kx = sqrt(kx.*(kx > 0));
kFinal = exp(1i*(kx.*cos(thetaRc)+(kazimuth).*sin(thetaRc)).*Rc);
kfin = kx.*cos(thetaRc)+(kazimuth+kus).*sin(thetaRc);
sdata = fftshift(fft(fftshift(fft(sigData,[],1),1),[],2),2);
Perform bulk compression to get the azimuth compression at the reference range. Filter the 2-D FFT signal with the new cross-range wavenumber to achieve complete focusing at the reference range and, as a by-product, partial focusing of targets not lying at the reference range.
fsmPol = (sdata.').*kFinal;
Perform Stolt interpolation to achieve focusing for targets that are not lying at the reference range.
stoltPol = fsmPol;
for i = 1:size(fsmPol,1)
    stoltPol(i,:) = interp1(kfin(i,:),fsmPol(i,:),krange(1,:));
end
stoltPol(isnan(stoltPol)) = 1e-30;
azcompresseddata = ifftshift(ifft2(stoltPol),2);
end
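The per-row Stolt interpolation step above can be sketched in Python, with np.interp standing in for MATLAB's interp1; the wavenumber grids and the spectrum here are synthetic placeholders, not the example's data:

```python
import numpy as np

# Synthetic stand-ins: a uniform output grid and a warped per-row input grid.
krange = np.linspace(100.0, 200.0, 64)                  # uniform wavenumber grid
kfin = krange[None, :] - np.linspace(0, 5, 8)[:, None]  # warped grid, one row each
fsmPol = np.exp(1j * kfin)                              # placeholder spectrum rows

# Resample each row from its warped grid onto the uniform grid (Stolt mapping).
stoltPol = np.empty_like(fsmPol)
for i in range(fsmPol.shape[0]):
    re = np.interp(krange, kfin[i], fsmPol[i].real, left=np.nan, right=np.nan)
    im = np.interp(krange, kfin[i], fsmPol[i].imag, left=np.nan, right=np.nan)
    stoltPol[i] = re + 1j * im

# As in the MATLAB code, points outside the support become a negligible value.
stoltPol[np.isnan(stoltPol)] = 1e-30

print(stoltPol.shape)
```

The one-dimensional interpolation per row is what maps the squint-shifted wavenumber kfin back onto the uniform krange grid so a plain inverse 2-D FFT can form the image.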
FPGA Based Monopulse Technique Workflow: Design and Code Generation
The Phased Array System Toolbox™ provides the floating-point behavioral model of the monopulse technique via the phased.MonopulseFeed System object. This behavioral model is used to verify the results of the implementation model as well as the automatically generated HDL code. DSP System Toolbox™ provides the FIR filters essential for the down-conversion filtering.
Fixed-Point Designer™ provides data types and tools for developing fixed-point and single-precision
algorithms to optimize performance on embedded hardware. Bit-true simulations can be performed to
observe the impact of limited range and precision without implementing the design on hardware.
This example uses HDL Coder™ to generate HDL code from the Simulink® model and verifies the HDL code using HDL Verifier™. HDL Verifier is used to generate a cosimulation test bench model that verifies the behavior of the automatically generated HDL code, using ModelSim® for cosimulation.
Monopulse is a technique in which the received echoes from different elements of an antenna are used to estimate the direction of arrival (DOA) of a signal, which in turn helps estimate the location of an object. The example uses DSP System Toolbox and Fixed-Point Designer to design the module. The technique uses four beams to measure the angular position of the target. All four beams are generated simultaneously, and the azimuth and elevation differences are obtained in a single pulse, hence the name monopulse.
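To make the four-beam idea concrete, here is a small Python sketch of amplitude-comparison monopulse. The beam amplitudes and the quadrant-to-beam pairing are illustrative assumptions, not this model's wiring:

```python
# Illustrative beam amplitudes for a target slightly up and to the right;
# A, B, C, D are the upper-left, upper-right, lower-left, lower-right beams.
A, B, C, D = 0.95, 1.1, 0.9, 1.05

sum_ch = A + B + C + D              # sum channel
delta_az = (A + C) - (B + D)        # azimuth difference: left minus right
delta_el = (A + B) - (C + D)        # elevation difference: upper minus lower

# Normalized error signals that an estimator maps to angular offsets.
print(round(delta_az / sum_ch, 3), round(delta_el / sum_ch, 3))
```

The signs of the normalized ratios indicate which side of boresight the target is on, and their magnitudes indicate how far off — all from the four simultaneous beams of a single pulse.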
The algorithm is implemented using Simulink® blocks that are HDL compatible. The model shown below assumes that the signal is received from a 4-element uniform rectangular array (URA), so the starting point of the model shows 4 sinusoids as inputs. The model comprises 4 receive channels, one from each element of the URA. Once the signals are converted to the digital domain, DDC blocks lower the frequency of the received signal, thereby reducing the sample rate for processing. The block diagram below shows the subsystem, which consists of the following modules.
modelname = 'SimulinkDDCMonopulseHDLWorkflowExample';
open_system(modelname);
The Simulink model has two branches. The top branch is the behavioral, floating-point model of the monopulse technique and digital down-conversion chain, and the bottom branch is the functionally equivalent fixed-point version using blocks that support HDL code generation. Apart from plotting the output of both branches to compare the two, the difference, or error, between the sum channels of the two outputs is also calculated and plotted.
Notice that there is a delay block at the output of the behavioral model. This is necessary because the implementation algorithm uses 220 delays to enable pipelining, which creates latency that must be accounted for. This delay balancing time-aligns the output of the behavioral model with that of the implementation model.
The subsystem below shows how the received signal, sampled at 80 MHz with a carrier frequency of nearly 15 MHz, is down-converted to baseband via the DDC and then passed on to the monopulse sum and difference subsystem. A DDC module is a combination of a numerically controlled oscillator (NCO) and a set of low-pass filters. The NCO provides the signal used to mix and demodulate the incoming signal. Open the subsystem that performs the down-conversion.
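The DDC signal flow just described (NCO mixing, low-pass filtering, rate reduction) can be sketched in Python. The single moving-average filter below is a stand-in for the model's cascaded FIR chain, and the decimation factor is illustrative; only the 80 MHz sample rate and 15 MHz carrier come from this example:

```python
import numpy as np

fs, fc = 80e6, 15e6                         # sample rate and carrier frequency
n = np.arange(4000)
rx = np.cos(2 * np.pi * fc / fs * n)        # received real carrier

nco = np.exp(-2j * np.pi * fc / fs * n)     # NCO output driving the mixer
mixed = rx * nco                            # 0.5 at DC plus an image at -2*fc

# A 16-tap moving average stands in for the cascaded low-pass FIR chain.
taps = np.ones(16) / 16
baseband = np.convolve(mixed, taps, mode='same')

decimated = baseband[::8]                   # illustrative sample-rate reduction
print(decimated.shape[0], round(float(np.abs(baseband[2000])), 2))
```

After mixing, the wanted component sits at DC with amplitude 0.5 while the image at twice the carrier is removed by the low-pass filter, which is what makes the subsequent rate reduction safe.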
Notice that a delay of 215 samples has been added to the output of the digital comparator in the implementation subsystem to compensate for the latency of the down-conversion chain.
A DDC also contains a set of low-pass filters, as shown in the figure. Once mixed, the signal must be low-pass filtered to eliminate the high-frequency components. In this example, we use a cascaded filter chain to achieve the low-pass filtering. The NCO generates the high-accuracy sinusoid for the mixer, with a latency of 6 provided to the HDL-optimized NCO block. This signal is mixed with the incoming signal and converted from a higher frequency to a relatively lower frequency as it progresses through the various stages.
In this example, the incoming signal has a carrier frequency of 15 MHz and is sampled at 80 MHz. The down-conversion process brings the sampled signal down to a few kHz. The coefficients of the relevant low-pass FIR filters are designed using filterBuilder; one such filter is described below. The values must be chosen to satisfy the required pass-band criteria.
Once generated, the coefficients can be exported to the HDL optimized FIR Filter block.
Apart from generating the down-converted signal, another consideration for monopulse is the steering vector for the different elements. The steering vectors have been generated for an incident angle of 30 degrees azimuth and 20 degrees elevation and are passed to the digital comparator to produce the desired sum and difference channel outputs. The down-converted signal is then multiplied by the conjugate of these vectors, as shown in the figure below. By processing the sum and difference channels, the DOA of the received signal can be found. The digital comparator compares the steering vectors for the different elements of the antenna array.
In the figure above, the digital comparator takes the steering vectors and computes the sums and differences of the steering vectors sVA, sVB, sVC, and sVD. You can also calculate the steering vectors by using the phased.SteeringVector System object, or you can generate them using a method similar to the one shown in “FPGA Based Beamforming in Simulink: Part 1 - Algorithm Design” on page 17-748. Once the sums and differences of the steering vectors corresponding to each element of the array have been computed, the sum and difference channels for the corresponding azimuth and elevation angles are calculated. The Sum and Difference Monopulse subsystem produces 3 signals: the sum, the azimuth difference, and the elevation difference. The entire arithmetic is performed in fixed point. The monopulse sum and difference channel subsystem can be opened in the model.
To compare the results of the implementation model to the behavioral model, run the model to display the results. You can run the Simulink model by clicking the Play button or calling the sim command at the MATLAB command line as shown below. Use the Scope blocks to compare the output frames.
sim(modelname);
The plots show the output from the sum and difference channels. These channels can be fed to an estimator to indicate the angle, or direction, of the object.
This section covers the procedure to generate HDL code for the DDC and monopulse technique and to verify that the generated code is functionally correct. The behavioral model provides the reference values used to ensure that the HDL output is within tolerance limits. In the Simulink model set up as described above, the monopulse technique is designed using fixed-point arithmetic and supports HDL code generation. Alternatively, if you start with a new model, you can run hdlsetup (HDL Coder) to configure the Simulink model for HDL code generation. To configure the Simulink model for test bench creation, open the Simulink Model Settings, select Test Bench under HDL Code Generation in the left panel, and check HDL test bench and Cosimulation model in the Test Bench Generation Output properties group.
After the fixed-point implementation is verified and the implementation model produces the same
results as your floating-point, behavioral model, you can generate HDL code and test bench. For code
generation and test bench, set the HDL Code Generation parameters in the Configuration Parameters
dialog. The following parameters in Model Settings are set under HDL Code Generation:
• Target: Xilinx Vivado synthesis tool; Virtex7 family; Device xc7vx485t; package ffg1761, speed -1;
and target frequency of 300 MHz.
• Optimization: Uncheck all optimizations
• Global Settings: Set the Reset type to Asynchronous
• Test Bench: Select HDL test bench, Cosimulation model and SystemVerilog DPI test bench
After the Simulink Model Settings have been updated, you can use HDL Coder to “Generate HDL Code from Simulink®” for the DDC and Monopulse HDL subsystem. Use HDL Verifier to generate the test bench model.
% Uncomment the following two lines to generate HDL code and test bench.
% makehdl([modelname '/DDC and Monopulse HDL']); % Generate HDL code
% makehdltb([modelname '/DDC and Monopulse HDL']); % Generate cosimulation test bench
Since the model has accounted for pipelining in the multiplications, and we have unchecked all optimizations, no extra delays are added to the model during code generation. We still need to compensate for the design's own delays at the floating-point, behavioral model output, which aligns the output of the behavioral model with the implementation model as well as the cosimulation. Of the 220 unit delays, 215 compensate for the latency of the DDC chain and 5 for the monopulse sum and difference subsystem.
After generating the HDL code and test bench, a new Simulink model named gm_<modelname>_mq containing a ModelSim Simulator block is created in your working directory, which looks like this:
% To open the test bench model, uncomment the following lines of code
% modelname = ['gm_',modelname,'_mq'];
% open_system(modelname);
Launch ModelSim and run the cosimulation model to display the simulation results. You can click the Play button at the top of the Simulink canvas to run the test bench, or run it from the MATLAB command window. The Simulink® test bench model populates Questa Sim with the HDL model's signals and populates the Time Scopes in Simulink. Below are examples of the results in the Questa Sim and Simulink scopes.
The Simulink scope below shows the real and imaginary parts for both the cosimulation and the Design Under Test (DUT), as well as the error between them. The scopes comparing the results of the cosimulation can be found in the test bench model inside the Compare subsystem, which is at the output of the DDC and Monopulse HDL_mq subsystem.
% Uncomment the following line to open the subsystem with the scopes.
% open_system([modelname,'/Compare/Assert_Sum Channel HDL'])
Summary
This example demonstrated how to design a Simulink model for a DDC and monopulse feed system and verify the results against an equivalent behavioral setup from the Phased Array System Toolbox. It showed how to automatically generate HDL code for a fixed-point monopulse technique with a down-conversion chain and verify the generated code in Simulink®. The HDL code and a cosimulation test bench were generated for a Simulink subsystem created with blocks that support HDL code generation. Finally, the example showed how to set up and launch ModelSim to cosimulate the HDL code and compare its output to the output generated by the HDL implementation model.