SAR
Final Year Project 2007-08
Group Members: Misbah Ahmad Mussawar Farrukh Rashid Furqan Ahmed Usman Tahir Mir
Supervisor: Mr. Kashif Siddiq
Submission
This report is being submitted to the Department of Telecom Engineering of the National University of Computer and Emerging Sciences in partial fulfillment of the requirements for the degree of BE in Telecom Engineering
Declaration
We hereby declare that the work presented in this report is our own and has not been presented previously to any other institution or organization
____________________ Supervisor
Abstract
A Synthetic Aperture Radar (SAR) is used for all-weather, all-time, high-resolution aerial and space-based imaging of terrain. Being independent of light and weather conditions, SAR imaging has an advantage over optical imaging. SAR applications include surveillance, targeting, 3D imaging, navigation and guidance, moving target indication, and environmental monitoring. This project aimed at the system-level design, modeling, and simulation of a Synthetic Aperture Radar system, and at the implementation of the SAR signal processor on a TI C6416 DSP. The system parameters have been specified in view of all the constraints and practical limitations, and the performance metrics of the system, such as range resolution and cross-range resolution, have been worked out so that the system-level specification meets the desired performance. Using MATLAB as the major tool, the specified system parameters have been tested for accuracy and correctness. A simulation of a pulse Doppler radar has been completed, including waveform design, target modeling, LFM pulse compression, sidelobe control, and threshold detection. A SAR image formation algorithm (Doppler Beam Sharpening) has been implemented in MATLAB.
Table of Contents
Introduction
1.1 Synthetic Aperture Radar
1.1.1 Why to use SAR
1.1.2 Applications of SAR
Background
2.1 Introduction
2.1.1 What is Radar
Fast Fourier Transform
Decibel Arithmetic
Speed Unit Conversions
List of Figures
Figure 1.1 Comparison of SAR and Optical Image (Day)
Figure 1.2 Comparison of SAR and Optical Image (Night)
Figure 2.1 A Basic Pulsed Radar System
Figure 2.2 PRF & PRI
Figure 2.3 Spectra of transmitted and received waveforms, and Doppler bank
Figure 2.4 Spherical coordinate system for radar measurements
Figure 2.5 A Single Range Sample
Figure 2.6 A Row Containing Fast Time Samples
Figure 2.7 A Slow / Fast Time Matrix
Figure 2.8 A Datacube
Figure 2.9 The Radar Datacube
Figure 2.10 Processing of the Datacube
Figure 2.11 A 2D Matrix as a part of Datacube
Figure 2.12 An LFM Waveform as a function of time
Figure 2.13 Up-chirp and Down-chirp
Figure 2.14 Cross-Range Resolution using Real Beam Radar
Figure 2.15 Collecting Higher Cross-Range Resolution Samples
Figure 2.16 Usable Frequency vs. Aperture Time (Cross-Range Resolutions 1m and 5m)
Figure 2.17 Maximum Synthetic Aperture Size
Figure 2.18 Stripmap SAR Data Collection
Figure 2.19 Swath Length
Figure 2.20 Doppler Bandwidth
Figure 2.21 From Point Spread Response to the final Image
Figure 2.22 Block diagram of DBS Algorithm
Figure 3.1 Three LFM Pulses with different BT Products
Figure 3.2 Matched filter outputs of simple pulse and LFM pulse
Figure 3.3 Frequency Domain Windowing Result
Figure 3.4 Time Domain Windowing Result
Figure 3.5 GUI for Pulse Compression Simulation
Figure 3.6 Transmitted LFM Pulse (Time and Frequency Domain)
Figure 3.7 Matched Filter Output for 3 Scatterers (Time-Mapped)
Figure 3.8 Matched Filter Output for 3 Scatterers (Range-Mapped)
Figure 3.9 Matched Filter Output for 3 Scatterers After Frequency Domain Windowing
Figure 3.10 Matched Filter Output for 3 Scatterers After Time Domain Windowing
Figure 3.11 Function to be executed before simulation
Figure 3.12 The overall system
Figure 3.13 Pulse Transmitter of the radar system
Figure 3.14 The transmitted signal
Figure 3.15 Target model
Figure 3.16 Setting the delay by the target
Figure 3.17 The returned waveform from the target
Figure 3.18 Setting the variables in range equation
Figure 3.19 Computing the received power from range equation
Figure 3.20 The signal processor model
Figure 3.21 Input and Output of the receiver matched filter
Figure 3.22 Uncompressed return (bottom) and compressed return (top)
Figure 3.23 The function to be executed at the end of simulation
Figure 3.24 Graph displaying the range and Doppler of the target
Figure 3.25 GUI for SAR Parameter Calculator
Figure 3.26 GUI for DBS Simulation
Figure 3.27 Generated Dataset for 4 Scatterers
Figure 3.28 After Applying Pulse Compression to the Dataset
Figure 3.29 Before Dechirping, Azimuth compressed
Figure 3.30 The DBS Image after Dechirping, Azimuth Compressed
Figure 3.31 The Image after Axis mapped
Chapter
1
Introduction
1.1 Synthetic Aperture Radar
Synthetic Aperture Radar (SAR) is a type of radar used for all-weather, all-time, high-resolution aerial and space-based imaging of terrain. The term all-weather means that an image can be acquired in any weather conditions, such as cloud, fog, or precipitation, and the term all-time means that an image can be acquired during the day as well as at night.
Figure 1.1 Comparison of SAR and Optical Image (Day) (Source: http://www.sandia.gov/RADAR/sar_sub/images/)
During night, the SAR image with a resolution of 3m (left) and the optical image (right) look like:
Figure 1.2 Comparison of SAR and Optical Image (Night) (Source: http://www.sandia.gov/RADAR/sar_sub/images/)
Hence, we can see that the optical image provides no information when taken at night, while the SAR image remains essentially the same as the one taken during the day.
Treaty Verification and Nonproliferation

The ability to monitor other nations for treaty observance and for the nonproliferation of nuclear, chemical, and biological weapons is increasingly critical. Often, monitoring is possible only at specific times, when overflights are allowed, or it is necessary to maintain a monitoring capability in inclement weather or at night, to ensure an adversary is not using these conditions to hide an activity. SAR provides the all-weather capability and complements information available from other airborne sensors, such as optical or thermal-infrared sensors. More is available at http://www.sandia.gov/RADAR/sarapps.html.
Interferometry (3-D SAR)

Interferometric synthetic aperture radar (IFSAR) data can be acquired using two antennas on one aircraft, or by flying two slightly offset passes of an aircraft with a single antenna. Interferometric SAR can be used to generate very accurate surface profile maps of the terrain. IFSAR is among the more recent options for determining digital elevation. It is a radar technology capable of producing products with vertical accuracies of 30 centimeters RMSE. In addition, IFSAR provides cloud penetration and day/night operation (both inherent properties of radar), wide-area coverage, and full digital processing. The technology is quickly proving its worth. More about IFSAR is available at http://www.geospatial-solutions.com/geospatialsolutions.

On the Land

The ability of SAR to penetrate cloud cover makes it particularly valuable in frequently cloudy areas such as the tropics. Image data serve to map and monitor the use of the land, and are of growing importance for forestry and agriculture.
- Geological and geomorphological features are enhanced in radar images thanks to the oblique viewing of the sensor and to its ability to penetrate, to a certain extent, the vegetation cover.
- SAR data can be used to georeference other satellite imagery to high precision, and to update thematic maps more frequently and cost-effectively, since SAR is available independent of weather conditions.
- In the aftermath of a flood, the ability of SAR to penetrate clouds is extremely useful. Here SAR data can help to optimize response initiatives and to assess damage.
More is available at http://earth.esa.int/applications/data_util/SARDOCS/

Navigation, Guidance, and Moving Target Indication

Synthetic aperture radar provides the capability for all-weather, autonomous navigation and guidance.
By forming SAR reflectivity images of the terrain and then correlating the SAR image with a stored reference (obtained from an optical device or from a previous SAR image), a navigation update can be obtained, with position accuracies of less than a SAR resolution cell. SAR may also be used for guidance applications by pointing or "squinting" the antenna beam in the direction of motion of the airborne platform. In this manner, the SAR may image a target and guide a munition with high precision. The motion of a ground-based moving target, such as a car, truck, or military vehicle, causes the radar signature of the moving target to shift outside of the normal ground return of a radar image. New techniques have been developed to automatically detect ground-based moving targets and to extract other target information, such as location, speed, size, and Radar Cross Section (RCS), from these target signatures. More is available at http://www.sandia.gov/RADAR/sarapps.html.
The track followed consisted of building a basic understanding of radar concepts, the various types of radar systems, radar principles, waveform design and analysis, signal processing techniques, SAR system-level design considerations, and SAR processing and image formation.
Chapter
2
Background
In this chapter, the theoretical background required to understand this project is provided. It includes the basics of radar operation, the terminologies used in radar, performance metrics of radar systems, waveform design for radars, receiver design for radar, radar signal processing, fundamentals of SAR and SAR signal processing.
2.1 Introduction
2.1.1 What is Radar

Radar is an acronym for RAdio Detection And Ranging. Radar systems use modulated waveforms and directive antennas to transmit electromagnetic energy into a specific volume in space to search for targets. Targets within the search volume reflect portions of this energy (returns or echoes) back to the radar. These echoes are then processed by the radar receiver and signal processor to extract target information such as range (distance), velocity, angular position, and other target characteristics.

Radars can be classified according to various criteria, including deployment (e.g. ground-based, airborne, spaceborne, or ship-based systems), operational characteristics (e.g. frequency band, antenna type, and waveforms utilized), and the nature of the mission or purpose (e.g. weather, acquisition and search, tracking, track-while-scan, fire control, early warning, over-the-horizon, terrain-following, and terrain-avoidance radars). The most commonly used classification is based on the type of waveform and the operating frequency. Considering waveforms first, radars can be Continuous Wave (CW) or Pulsed Radars (PR). CW radars continuously emit electromagnetic energy and use separate transmit and receive antennas. Pulsed radars use a train of pulsed waveforms (mainly with modulation). Within this category, radar systems can be further classified on the basis of the Pulse Repetition Frequency (PRF) as low-PRF, medium-PRF, and high-PRF radars.
Figure 2.1 A Basic Pulsed Radar System (Source: Radar Systems Analysis and Design using MATLAB by Bassem R. Mahafaza)
2.2.1 Range
Range is defined as the radial distance of the target from the aperture of the radar antenna. In the above figure, range is denoted by R. The target's range R is computed by measuring the time delay Δt it takes a pulse to travel the two-way path between the radar and the target. Since electromagnetic waves travel at the speed of light, c = 3x10^8 m/s, the range is R = cΔt/2, where R is in meters and Δt is in seconds. The factor of 1/2 accounts for the two-way delay.
Figure 2.2 PRF & PRI (Source: Radar Systems Analysis and Design using MATLAB by Bassem R. Mahafaza)
In the above figure, each pulse has a width τ and the time between two consecutive pulses is T. This time separation between consecutive pulses is known as the Inter Pulse Period (IPP) or Pulse Repetition Interval (PRI, denoted by T), and its inverse is known as the Pulse Repetition Frequency (PRF, denoted by fr).

PRI and the Range Ambiguity
The range corresponding to the two-way time delay T is known as the radar unambiguous range, Ru. To avoid ambiguity in range, once a pulse is transmitted the radar must wait long enough for the returns from targets at maximum range to arrive back before the next pulse is emitted. It follows that the maximum unambiguous range corresponds to half of the PRI, i.e. Ru = cT/2 = c/(2 fr).
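The two relations above can be sketched numerically. This is a Python sketch with illustrative values (the project's own implementation uses MATLAB/Simulink):

```python
# Basic range relations: R = c*dt/2 and Ru = c/(2*fr).
C = 3e8  # speed of light, m/s

def range_from_delay(delta_t):
    """Target range from the two-way time delay; the factor 1/2
    accounts for the round trip."""
    return C * delta_t / 2.0

def unambiguous_range(prf):
    """Maximum unambiguous range Ru = c*T/2 = c/(2*fr)."""
    return C / (2.0 * prf)

# Illustrative values: a 100 us round-trip delay corresponds to a
# 15 km target, and a 1 kHz PRF limits the unambiguous range to 150 km.
print(range_from_delay(100e-6))
print(unambiguous_range(1e3))
```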
Figure 2.3 Spectra of transmitted and received waveforms, and Doppler bank. (a) Doppler is resolved. (b) Ambiguous Doppler measurement
(Source: Radar Systems Analysis and Design using MATLAB by Bassem R. Mahafaza)
2.3.1 Detection
Detection means deciding whether or not a specific object is present in the coverage area of the radar. This is done by comparing the amplitude of the received pulses with a threshold: if the amplitude crosses the threshold, a target is declared present; otherwise it is declared absent.
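As a minimal illustration of this thresholding idea (a Python sketch with made-up amplitudes, not the project's detector):

```python
import numpy as np

def detect(amplitudes, threshold):
    """Declare a target present in every bin whose amplitude
    exceeds the threshold."""
    return np.abs(np.asarray(amplitudes)) > threshold

# Hypothetical received amplitudes for four range bins; only the
# third one crosses the threshold and is declared a target.
echo = np.array([0.1, 0.05, 0.9, 0.2])
print(detect(echo, threshold=0.5))
```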
2.3.2 Tracking
Once an object has been detected, it may be desirable to track its location or velocity. A radar naturally measures position in a spherical coordinate system with its origin at the radar antenna's phase center.
Figure 2.4 Spherical coordinate system for radar measurements (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
2.3.3 Imaging
Airborne radars are also used for imaging the earth's terrain. Radars provide their own electromagnetic illumination and hence are independent of light and weather conditions for imaging. Radar imaging therefore enjoys this benefit over optical imaging devices.
Figure 2.5 A Single Range Sample (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Each of these samples corresponds to a different range interval. These samples are also called range bins, range gates, or fast-time samples. The following diagram illustrates the fast-time samples collected from one pulse.
Figure 2.6 A Row Containing Fast Time Samples (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Figure 2.7 A Slow / Fast Time Matrix (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
2.4.4 Multiple Pulses, Multiple Range Samples and Multiple Receiving Channels
If the radar uses more than one receiver (e.g. in the case of a multiple-phase-center antenna or a monopulse antenna) and collects the 2-D matrix explained above through each channel, a three-dimensional matrix, or cube, is formed. The cube hence formed is called the radar datacube and is shown below.
Figure 2.8 A Datacube (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Figure 2.9 The Radar Datacube (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
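In array terms, the datacube described above is simply a 3-D array indexed by receiver channel, pulse (slow time), and range bin (fast time). A NumPy sketch with illustrative dimensions:

```python
import numpy as np

# Illustrative sizes: 4 receive channels, 32 pulses, 256 range bins.
n_channels, n_pulses, n_range_bins = 4, 32, 256
datacube = np.zeros((n_channels, n_pulses, n_range_bins), dtype=complex)

# Each channel holds one slow-time/fast-time matrix (Figure 2.7):
# rows are pulses, columns are fast-time samples.
slow_fast_matrix = datacube[0]
print(datacube.shape, slow_fast_matrix.shape)
```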
Figure 2.10 Processing of the Datacube (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Figure 2.11 A 2D Matrix as a part of Datacube (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Figure 2.12 An LFM Waveform as a function of time (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Figure 2.13 Up-chirp and Down-chirp (Source: Radar Systems Analysis and Design by Bassem R Mahafaza)
Figure 2.14 Cross-Range Resolution using Real Beam Radar (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
In the above figure, at a fixed range, two scatterers are considered to be just at the point of being resolved in cross-range when they are separated by the width of the (3 dB) main beam, and the beamwidth of the antenna is θaz = λ / Daz, where Daz is the width of the antenna in the azimuth direction. Therefore, the cross-range resolution of a real beam radar is R θaz = R λ / Daz. Hence, in a real beam radar, cross-range resolution depends on the operating range.

The synthetic aperture viewpoint enables us to synthesize a virtually large antenna array. The physical antenna is one element of the synthetic array. Data is collected at each position sequentially and then processed together. The effective aperture size is determined by the distance traveled while collecting a data set. The following figure shows the concept of a synthetic aperture array.
Figure 2.15 Collecting Higher Cross-Range Resolution Samples (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Cross-Range Resolution over an Aperture Time Ta

Suppose we integrate data over an aperture time Ta, and let the speed of the radar platform be v. The resolution obtained can be found as follows:

DSAR = v Ta
θSAR = λ / (2 v Ta)
ΔCR = R θSAR = R λ / (2 v Ta)
Ta = R λ / (2 v ΔCR)
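These relations can be checked numerically. Using the design values that appear later in this report (λ = 0.11 m, slant range ≈ 2258 m, v = 65 m/s, ΔCR ≈ 0.64 m) as illustrative inputs, the required aperture time comes out close to the 3 s assumed in Chapter 3 (Python sketch):

```python
def aperture_time(wavelength, R, v, cr_res):
    """Ta = R*lambda / (2*v*deltaCR)."""
    return R * wavelength / (2.0 * v * cr_res)

def cross_range_resolution(wavelength, R, v, Ta):
    """deltaCR = R*lambda / (2*v*Ta), the inverse relation."""
    return R * wavelength / (2.0 * v * Ta)

Ta = aperture_time(wavelength=0.11, R=2258.0, v=65.0, cr_res=0.64)
print(Ta)  # close to 3 seconds
# Consistency check: substituting Ta back recovers the resolution.
print(cross_range_resolution(0.11, 2258.0, 65.0, Ta))
```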
For some given cross-range resolutions, the usable frequencies and aperture times are given in the following graphs.
Figure 2.16 Usable Frequency vs. Aperture Time (Cross-Range Resolutions 1m and 5m) (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
The maximum practical value of the aperture time is limited by the physical antenna on the platform. Any one scatterer contributes to the SAR data only over a maximum synthetic aperture size equal to the distance traveled between the two points shown, which equals the width of the physical antenna beam at the range of interest, namely R θaz. The maximum synthetic aperture size is thus the maximum distance traveled while the target is illuminated, and the corresponding maximum effective aperture time is R θaz / v.
Figure 2.17 Maximum Synthetic Aperture Size (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Figure 2.18 Stripmap SAR Data Collection (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Best Case SAR Stripmap Resolution

The cross-range resolution corresponding to the maximum synthetic aperture size is given by

θSAR = λ / (2 DSAR) = λ / (2 R θaz) = Daz / (2R)
ΔCRmin = R θSAR = Daz / 2

Hence, a smaller physical antenna allows a larger potential aperture, which gives a finer resolution. The increase in effective aperture size exactly cancels the increase in beamwidth with range.

Conclusion
- By using a synthetic aperture, cross-range resolution becomes independent of range.
- By using a synthetic aperture, cross-range resolution becomes much smaller and comparable to the range resolution.
- By using a synthetic aperture, we achieve huge integration gains.
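The range cancellation can be verified numerically. With the antenna size used later in this report (Daz = 0.696 m) as an illustrative input, the best-case resolution is Daz/2 = 0.348 m at any range (Python sketch):

```python
def best_stripmap_cr(wavelength, D_az, R):
    """Best-case stripmap cross-range resolution; the R dependence cancels."""
    theta_az = wavelength / D_az                 # real antenna beamwidth
    D_sar_max = R * theta_az                     # maximum synthetic aperture
    theta_sar = wavelength / (2.0 * D_sar_max)   # synthetic beamwidth
    return R * theta_sar                         # equals D_az / 2

# The same answer is obtained at 10 km and at 100 km.
print(best_stripmap_cr(0.11, 0.696, 10e3))
print(best_stripmap_cr(0.11, 0.696, 100e3))
```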
Figure 2.19 Swath Length (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
Doppler Bandwidth
Figure 2.20 Doppler Bandwidth (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
The radial velocity relative to stationary scatterers differs across the physical beamwidth. The difference between the maximum and the minimum Doppler shift across the beamwidth is known as the Doppler bandwidth.

Upper and Lower Bounds on PRF

The upper and lower bounds on the PRF of a sidelooking stripmap SAR are given by the relation

2v / Daz <= PRF <= c / (2 Ls cos ψ)

where Ls is the swath length and ψ is the grazing angle.

Cross-Range Sampling Interval

The Nyquist sampling interval for sidelooking stripmap SAR is Ts = Daz / (2v) seconds.
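A sketch of these bounds in Python, using the reconstructed upper bound c/(2·Ls·cos ψ) (an assumption here, since the original expression was garbled) and the Chapter 3 design values as illustrative inputs:

```python
import math

C = 3e8  # speed of light, m/s

def prf_lower(v, D_az):
    """Doppler (cross-range sampling) bound: PRF >= 2v/Daz."""
    return 2.0 * v / D_az

def prf_upper(Ls, grazing_rad):
    """Swath echo-time bound (reconstructed form): PRF <= c/(2*Ls*cos(psi))."""
    return C / (2.0 * Ls * math.cos(grazing_rad))

def nyquist_cross_range_interval(v, D_az):
    """Ts = Daz/(2v) seconds."""
    return D_az / (2.0 * v)

lo = prf_lower(65.0, 0.696)        # about 187 Hz for the Chapter 3 values
hi = prf_upper(1503.0, 0.019)      # well above the lower bound
print(lo, hi, nyquist_cross_range_interval(65.0, 0.696))
```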
Figure 2.21 From Point Spread Response to the final Image (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
DBS Operation Summary
1. Data is collected over an aperture time Ta selected to achieve the desired cross-range resolution.
2. The Doppler spectrum is computed in each range bin using the DFT. This produces a range-Doppler image.
3. The Doppler axis is mapped to cross-range using the relation CR = λ R fd / (2v).
Figure 2.22 Block diagram of DBS Algorithm (Source: Fundamentals of Radar Signal Processing by Mark A. Richards)
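The three steps above can be sketched on synthetic data (a single hypothetical scatterer; the Doppler-to-cross-range mapping CR = λRfd/(2v) is the standard DBS relation, and all parameter values are illustrative, not the project's dataset):

```python
import numpy as np

wavelength, v, R0, prf = 0.11, 65.0, 2258.0, 341.0
M, N = 64, 128                        # slow-time pulses x range bins

# Step 1: synthetic slow-time data, one unit scatterer in range bin 40
# with a hypothetical Doppler shift of 50 Hz.
fd_true = 50.0
m = np.arange(M)
data = np.zeros((M, N), dtype=complex)
data[:, 40] = np.exp(2j * np.pi * fd_true * m / prf)

# Step 2: Doppler spectrum in each range bin (DFT along slow time).
rd_image = np.fft.fftshift(np.fft.fft(data, axis=0), axes=0)

# Step 3: map the Doppler axis to cross-range.
fd_axis = np.fft.fftshift(np.fft.fftfreq(M, d=1.0 / prf))
cross_range_axis = wavelength * R0 * fd_axis / (2.0 * v)

# The image peak lands in range bin 40 at a Doppler near 50 Hz.
peak = np.unravel_index(np.abs(rd_image).argmax(), rd_image.shape)
print(peak, fd_axis[peak[0]], cross_range_axis[peak[0]])
```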
2.12 References
i. Fundamentals of Radar Signal Processing by Mark A. Richards
ii. Radar Systems Analysis and Design using MATLAB by Bassem R. Mahafaza
iii. http://www.wikipedia.com/
Chapter
3
Experiments, Simulations and Results
3.1 MATLAB Experiment to study LFM Characteristics
This section contains the results of an experiment performed using MATLAB to demonstrate various characteristics of an LFM waveform. The output graph and the list of observations for each graph are given below.
Figure 3.1 Three LFM Pulses with different BT Products (continuous-time spectrum vs. normalized frequency)
Observations: Three chirp pulses with the same pulse duration of 100e-6 seconds but with swept bandwidths of 100 kHz, 1 MHz, and 10 MHz were generated at an oversampling factor of k = 1.2. This demonstrates that as the bandwidth increases, the required sampling rate also increases. All three spectra have the same width in the plot because the frequency axis is normalized.
The amplitudes of the spectra differed significantly: the spectrum amplitude is proportional to bandwidth, so as the bandwidth increased, the spectrum amplitude also increased.

Conclusion: By increasing the bandwidth-time (BT) product, the magnitude of the Fourier transform of a chirp approaches a rectangle function.
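The experiment can be reproduced with a short sketch (Python rather than the MATLAB code used in the project; parameters as in the observations above):

```python
import numpy as np

def lfm_chirp(tau, bandwidth, k=1.2):
    """Baseband LFM chirp of duration tau sampled at fs = k*bandwidth."""
    fs = k * bandwidth
    n = int(round(tau * fs))
    t = np.arange(n) / fs - tau / 2.0
    phase = np.pi * (bandwidth / tau) * t ** 2   # linear frequency sweep
    return np.exp(1j * phase), fs

# Same 100 us duration, swept bandwidths of 100 kHz, 1 MHz, 10 MHz:
# the required sampling rate, and hence the sample count, grows with B.
for b in (100e3, 1e6, 10e6):
    x, fs = lfm_chirp(100e-6, b)
    print(b, fs, len(x))
```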
3.1.2 Comparison of LFM pulse and Simple pulse Matched Filtering & Resolution in Time Domain
Figure 3.2 Matched filter outputs of simple pulse and LFM pulse
Observations: The approximate bandwidth of the simple pulse is 5 kHz. A simple pulse and an LFM pulse of k·τ·B = 10 x 100e-6 x 1e6 = 1000 samples were generated. The same peak magnitudes were observed at the matched filter output for both the simple pulse and the LFM pulse. The peak-to-first-null width of the mainlobe response of the simple pulse is 1e-4 seconds, while that of the LFM pulse is 0.12e-4 seconds.

Conclusions: Since the same peaks were observed for both the simple and the LFM pulse, the two waveforms have the same energy.
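The comparison can be sketched as follows (Python, illustrative sampling parameters): both pulses have unit amplitude and equal duration, so their matched-filter peaks coincide, while the LFM mainlobe is far narrower.

```python
import numpy as np

fs, tau, B = 10e6, 100e-6, 1e6          # illustrative values
n = int(tau * fs)
t = np.arange(n) / fs
simple = np.ones(n, dtype=complex)                          # unmodulated pulse
lfm = np.exp(1j * np.pi * (B / tau) * (t - tau / 2) ** 2)   # chirp

def matched_filter(x):
    """Correlate the pulse with its conjugated, time-reversed copy."""
    return np.convolve(x, np.conj(x[::-1]))

out_simple = np.abs(matched_filter(simple))
out_lfm = np.abs(matched_filter(lfm))

# Equal energy -> equal peaks; the LFM mainlobe is much narrower,
# so far fewer samples exceed the half-peak level.
print(out_simple.max(), out_lfm.max())
print((out_simple > 500).sum(), (out_lfm > 500).sum())
```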
Figure 3.3 Frequency Domain Windowing Result (comparison of unwindowed and frequency-windowed matched filter output)
Observations: The loss in processing gain (LPG) is observed to be 7.74 dB. The PSLR without windowing is 13 dB, and with frequency windowing it is 36.30 dB. The mainlobe width before windowing is 0.02e-4 seconds, and after frequency windowing it is 0.1321e-4 seconds.
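The frequency-domain windowing step can be sketched as follows (Python, Hamming window, illustrative parameters; the measured PSLR and LPG figures above come from the project's MATLAB experiment, not from this sketch):

```python
import numpy as np

tau, B = 100e-6, 1e6
fs = 1.2 * B                          # k = 1.2 oversampling
n = int(round(tau * fs))
t = np.arange(n) / fs
lfm = np.exp(1j * np.pi * (B / tau) * (t - tau / 2) ** 2)

nfft = 8 * n
S = np.fft.fft(lfm, nfft)
H = np.conj(S)                        # matched filter in the frequency domain

unwindowed = np.abs(np.fft.ifft(S * H))
# Hamming window centered on the chirp spectrum (baseband, so at DC).
window = np.fft.fftshift(np.hamming(nfft))
windowed = np.abs(np.fft.ifft(S * H * window))

# The peak drops (the loss in processing gain), in exchange for
# lower sidelobes and a wider mainlobe.
print(unwindowed.max(), windowed.max())
```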
3.1.4 Time Domain Windowing

Figure 3.4 Time Domain Windowing Result
Observations: The loss in processing gain (LPG) is observed to be 6.5 dB. The PSLR without windowing is 14 dB, and with time windowing it is 32 dB. The mainlobe width before windowing is 0.02e-4 seconds, and after time windowing it is 0.05919e-4 seconds.
Figure 3.7 Matched Filter Output for 3 Scatterers (Time-Mapped)
Figure 3.8 Matched Filter Output for 3 Scatterers (Range-Mapped)
Figure 3.9 Matched Filter Output for 3 Scatterers After Frequency Domain Windowing
Figure 3.10 Matched Filter Output for 3 Scatterers After Time Domain Windowing
The function Pulse_Compression.m applies a window to the Chirp_Template and calculates the receiver filter response, which is matched to the transmitted waveform. The code listing for Pulse_Compression.m is given in Appendix A.
Figure 3.12 The overall system
The variable Chirp_Template saves a copy of the transmitted signal in the workspace to be later used in the receiver filter.
Figure 3.13 Pulse Transmitter of the radar system
The figure below shows the time-domain representation of the transmitted LFM pulse train.
Figure 3.14 The transmitted signal

Figure 3.15 Target model
The Cosine Wave block provides a complex cosine wave that models the Doppler shift caused by a moving target. The Integer Delay block models the delay of a return received from a specific range.
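In signal terms, these two blocks implement a sample delay and a complex-exponential mixer. A NumPy sketch of the same target model (all values illustrative, not the Simulink model's):

```python
import numpy as np

fs = 1e6
n = 2000
tx = np.zeros(n, dtype=complex)
tx[:100] = 1.0                      # a simple transmitted pulse

delay_samples = 133                 # models the two-way range delay
fd = 5e3                            # hypothetical Doppler shift, Hz
m = np.arange(n)

doppler = np.exp(2j * np.pi * fd * m / fs)   # the "Cosine Wave" role
rx = np.roll(tx, delay_samples) * doppler    # the "Integer Delay" role

# The echo starts delay_samples later and is shifted in frequency by fd.
print(int(np.argmax(np.abs(rx) > 0.5)))
```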
The Radar Equation block is used to calculate the amplitude of the received signal. The figure below shows the return from the target, changed in both amplitude and frequency.
Figure 3.17 The returned waveform from the target
Figure 3.20 The signal processor model
The figure below shows the input to the matched filter (lower half) and the compressed output of the matched filter (upper half).
This is the magnified input and output of the matched filter for one pulse.
At the end of the simulation, the following function is executed to find the target parameters.
The plot below gives the range and Doppler shift of the target.
Figure 3.24 Graph displaying the range and Doppler of the target
3.4.1 Assumptions
Operating Frequency = 2.7 GHz
Wavelength = 0.11 m
Height of Aircraft = 2 km = 2000 m Above Ground Level
Velocity of Aircraft = 65 m/s = 127 knots (for a helicopter)
k = Oversampling rate = 2
Side-looking synthetic aperture radar
Squint Angle = 90 degrees
Pencil beam for simplicity, i.e. Beamwidth Azimuth = Beamwidth Elevation
Range Resolution without LFM = 1.5 km = 1500 m
=> G = 160.007 = 22 dB, which is not acceptable, since the average gain of an airborne radar is typically 25 dB <= G <= 30 dB. Thus, changing the aperture size:
Aperture Size in Azimuth = Aperture Size in Elevation = 0.696 m (for a horn antenna)
Beamwidth in Azimuth = Beamwidth in Elevation = 0.15964 rad

=> G = 309.9871 = 24.9 dB ≈ 25 dB, which is now acceptable
Minimum Cross-Range Resolution = 0.348 m
3.4.3 Bandwidth, Range Resolution, Swath Length, Grazing Angle & Slant Range Calculations
k = oversampling rate = 2
Height = 2 km
Range Resolution without LFM = 1.5 km = 1500 m, which corresponds to a pulse width of 1e-5 sec = 10 usec
Bandwidth-Time Product BT = 500

=> The LFM bandwidth is 50 MHz, which corresponds to a Range Resolution with LFM of 3 m
Ts = 1/(kB) = 1/(2 x 50e6) = 0.01 usec
No. of fast-time samples = Pulse Width / Ts = 10 usec / 0.01 usec = 1000 samples (more than 1000 samples results in a very high processing load)
Maximum Swath Length covered = 1503 m = 1.503 km
Grazing Angle = 1.0882 deg = 0.0190 rad
Slant Range = 2257.8638 m = 2.258 km
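These calculations follow directly from the relations B = c/(2·ΔR), τ = 2·ΔR_noLFM/c, and Ts = 1/(kB), and can be reproduced with a few lines (Python sketch):

```python
C = 3e8
k = 2.0                                  # oversampling rate

range_res_no_lfm = 1500.0                # m, without LFM
tau = 2.0 * range_res_no_lfm / C         # pulse width -> 10 us
range_res_lfm = 3.0                      # m, the desired resolution
B = C / (2.0 * range_res_lfm)            # LFM bandwidth -> 50 MHz
bt_product = B * tau                     # -> 500
Ts = 1.0 / (k * B)                       # -> 0.01 us
fast_time_samples = tau / Ts             # -> 1000

print(tau, B, bt_product, Ts, fast_time_samples)
```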
2 v BWaz / lambda <= PRF <= c / (2 Ls cos(Grazing Angle)) for a sidelooking radar
186.7788 Hz <= PRF <= 215049.7 Hz, i.e. 0.187 kHz <= PRF <= 0.215 MHz
Thus, we have to change either the aperture time or the number of slow-time samples; since we cannot change the slow-time samples, we change the aperture time. Assuming the aperture time to be 3 sec:
=> Dsar = 195 m
PRI = Ta / M = 0.0029297 sec ≈ 3 msec
No. of samples in PRI = PRI / Ts = 3 msec / 0.01 usec = 300000
PRF = 1/PRI = 341.3333 Hz, which is acceptable since it lies between 186.7788 Hz and 215049.7 Hz
Unambiguous Range = 439453.167 m = 439.453 km
3.4.6 Resolutions
Cross-Range Resolution Real (Azimuth) = Cross-Range Resolution Real (Elevation) = 360.4454 m
Cross-Range Resolution Synthetic = 0.64327 m
Range Resolution without LFM = 1.5 km = 1500 m
Range Resolution with LFM = 3 m
Doppler Resolution = 0.33333 Hz
Angular Resolution = 0.0002849 rad
3.4.7 Bandwidths
Bandwidth LFM = 50 MHz
Bandwidth Simple Pulse = 0.1 MHz
Bandwidth Spatial = 1.5546 Hz
3.4.8 Beamwidths
Beamwidth Azimuth = 0.15964 rad = 9.1467 deg
Beamwidth Elevation = 0.15964 rad = 9.1467 deg
Beamwidth SAR = 2.82e-4 rad = 0.0162 deg
3.4.9 Aperture Area & Area Coverage Rate & Range Curvature Calculations
Aperture Area = 0.48443 m^2
Area Coverage Rate = 97695 m^2/sec
Range Curvature = 2.1051 m
If the Effective Temperature Te = 290 K and the Thermal Temperature To = 300 K, then the noise figure F = 1.9667 (dimensionless). If the Atmospheric Attenuation = 0.0005 dB/km, then the Atmospheric Loss = 1.00052 (dimensionless). If Gs = 50, then Pnoise = kToBFGs = 2.0355e-11 W = -106.9 dB. If the RCS sigma nought = 0.1 (a target with small RCS), the Transmitted Power Pt = 10 W, and the System Loss = 25, then the Received Power Pr = 1.3488e-10 W = -98.7 dB.
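The noise-power arithmetic above can be checked with a few lines (Python; B and Gs taken from the values given earlier in this chapter):

```python
import math

k_boltz = 1.38e-23                       # Boltzmann constant, J/K
Te, To = 290.0, 300.0
F = 1.0 + Te / To                        # noise figure, dimensionless -> 1.9667
B = 50e6                                 # LFM bandwidth from section 3.4.3
Gs = 50.0                                # system gain used in the report

P_noise = k_boltz * To * B * F * Gs      # about 2.04e-11 W
P_noise_db = 10.0 * math.log10(P_noise)  # about -106.9 dB
print(F, P_noise, P_noise_db)
```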
Chapter
4
Conclusions and Recommendations
This chapter contains future recommendations for this project and highlights any discrepancies remaining in it. Possibilities for further design and improvement are also presented.
A DSP is constantly reconfigured for many different tasks: some digital signal processing, others more control- or protocol-oriented. Resources such as processor core registers, internal and external memory, DMA engines, and I/O peripherals are shared by all tasks, often referred to as threads (Fig 1). This creates ample opportunities for the design or modification of one task to interact with another, often in unexpected or non-obvious ways. In addition, most DSP algorithms must run in real time, so even unanticipated delays or latencies can cause system failures. Common DSP software bugs are due to:
- Failure of interrupts to completely restore processor state upon completion;
- Blocking of a critical interrupt by another interrupt or by an uninterruptible process;
- Undetected corruption or non-initialization of pointers;
- Failure to properly initialize or disable circular buffering addressing modes;
- Memory leaks, or gradual consumption of available volatile memory, due to failure of a thread to release all memory when finished;
- Unexpected memory rearrangement by optimizing memory linkers/compilers;
- Use of special mode instruction options in the core;
- Conflict or excessive latency between peripheral accesses, such as DMA, serial ports, L1, L2, and external SDRAM memories;
- Corrupted stacks or semaphores; and
- Mixing of C or other high-level language subroutines with assembly language subroutines.
Microprocessor, DSP, and operating system (OS) vendors have attempted to address these problems with different levels of protection or isolation of one task or thread from another. Typically, the operating system, or kernel, is used to manage access to processor resources, such as allowable execution time, memory, or common peripheral resources. However, there is an inherent conflict between processing efficiency and the level of protection offered by the OS.
In DSPs, where processing efficiency and deterministic latency are often critical, the result is usually minimal or no OS isolation between tasks. Each task often requires unrestricted access to many processor resources in order to run efficiently. Compounding these development difficulties is incomplete verification coverage, both during initial development and during regression testing for subsequent code releases. It is nearly impossible to test all the possible permutations (often referred to as corner cases) and interactions between different tasks or threads which may occur during field operation. This makes software testing arguably the most challenging part of the software development process. Even with automated test scripts, it is not possible to test all possible scenarios, and this process must be repeated after every software update or modification to correct known bugs or add new features. Occasionally, a new software release inadvertently introduces new bugs, which forces yet another release to correct them. As products grow in complexity, the number of lines of code will increase, as will the number of processor cores, and an even greater percentage of the development effort will need to be devoted to software testing.

An FPGA is a more native implementation vehicle for most digital signal processing algorithms. Each task is allocated its own resources and runs independently. It intuitively makes more sense to process an often continuously streaming signal in an assembly-line fashion, with dedicated resources for each step, and the result is a dramatic increase in throughput. As the FPGA is inherently a parallel implementation, it offers much higher digital signal processing rates in nearly all applications. The FPGA resources assigned can be tailored to the requirements of each task, and the tasks can be broken up along logical partitions.
This usually makes for a well-defined interface between tasks and largely eliminates unexpected interactions between them. Because each task can run continuously, the memory required is often much less than in a DSP, which must buffer the data and process it in a batch fashion. As FPGAs distribute memory throughout the device, each task can usually be permanently allocated its own dedicated memory, achieving a high degree of isolation between tasks. As a result, modifying one task is unlikely to cause unexpected behavior in another, which in turn allows developers to isolate and fix bugs in a logical, predictable fashion.
The link between system reliability and design methodology is often underappreciated. Discussions of development tools generally emphasize increases in engineering productivity, but as product complexity grows, an ever greater portion of the overall engineering process is devoted to testing and verification. This is where the FPGA design methodology offers large advantages over software-based design verification. Fundamentally, FPGA design and verification tools are closely related to ASIC development tools; in practice, most ASIC designs are prototyped on FPGAs. This is a critical point, because bugs are most definitely not tolerated in ASICs. Unlike software, an ASIC offers no possibility of a field upgrade to remedy a design bug. As development time and costs are very high, ASIC developers go to extreme lengths to verify designs against nearly all scenarios. This has led to test methodologies that provide nearly complete coverage of every gate under all possible inputs, accurate modeling of routing delays within the devices, and comprehensive timing analysis. Since FPGA verification tools are close cousins of their ASIC counterparts, they have benefited enormously from the many years of investment in ASIC verification. The use of FPGA partitioning, test benches, and simulation models makes both integration and ongoing regression testing very effective for quickly isolating problems, speeding the development process, and simplifying product maintenance and feature additions.
These are crucial advantages of the FPGA over the DSP development process, and they will become increasingly important as design complexity and development-team size grow. FPGA vendors provide comprehensive sets of in-house and third-party tools that form a unified flow for architecture, partitioning, floor planning, design-intent capture, simulation, timing closure, optimization, and maintenance. In particular, architectural partitioning is integral to the design-entry process. This partitioning, which normally defines the chip resources required within each partition, is maintained through the timing-closure and ongoing-maintenance phases of development, guaranteeing a high degree of isolation. Each logical partition, as well as the overall design, can have independent test benches and simulation models, and the test benches developed during the FPGA design cycle can be reused to verify proper functionality after later changes, which makes system maintenance much simpler. The EDA industry continually drives the development of FPGA and ASIC test and verification tools; there is no comparable counterpart in the software verification space. This may change as the industry realizes the enormous costs and challenges of software verification, but for now the practical solution in the software world is to keep downloading the latest patch. Many engineering managers intuitively understand this: the rate of software updates issued to remedy bugs far exceeds the rate of comparable FPGA updates, and it is expected and normal to roll out bug fixes for embedded software on a regular basis. With the availability of both low-cost and high-end DSP-optimized FPGA devices, extensive IP cores, high-level design-entry methods, and the inherent robustness of the design and verification processes, FPGAs will increasingly be the preferred choice for implementing digital signal processing.
(Source: Michael Parker of Altera Corporation)
wide-area coverage, and full digital processing. The technology is quickly proving its worth. More about IFSAR is available at http://www.geospatial-solutions.com/geospatialsolutions.
4.4 References
i. White paper by Michael Parker, Altera Corporation
ii. http://www.geospatial-solutions.com/geospatialsolutions
iii. http://www.ccrs.nrcan.gc.ca/glossary/index_e.php?id=581
Appendix
A
MATLAB Code Listings
Codes used in Simulation of Pulse Compression
To use the GUI for pulse compression, enter the parameters in the input text boxes, select an option to view from the drop-down menu, and click Start. Use matrix-style input, e.g. [1 2 3], to specify multiple scatterers.
%LFM.m
function [xc,Xc,f,t,N] = LFM(taup,b,k)
clc;
N = k*b*taup;
N = round(N);
ts = 1/(k*b);
fs = 1/ts;
t = linspace(-taup/2,taup/2,N);
f = linspace(-fs,fs,N);
xc = exp(i*pi*(b/taup).*t.^2);
Xc = fftshift(abs(fft(xc)));
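For readers without MATLAB, the same chirp generation can be sketched in NumPy. The parameter values below (a 10 µs pulse, 20 MHz bandwidth, oversampling factor k = 5) are illustrative only, not values taken from the report:

```python
import numpy as np

def lfm(taup, b, k):
    """Baseband LFM chirp, mirroring LFM.m: k*b*taup samples over [-taup/2, taup/2]."""
    n = round(k * b * taup)                 # k times the time-bandwidth product
    t = np.linspace(-taup / 2, taup / 2, n)
    xc = np.exp(1j * np.pi * (b / taup) * t ** 2)   # chirp slope mu = b/taup
    Xc = np.fft.fftshift(np.abs(np.fft.fft(xc)))    # magnitude spectrum
    return xc, Xc, t

# illustrative values: 10 us pulse, 20 MHz bandwidth, 5x oversampling
xc, Xc, t = lfm(10e-6, 20e6, 5)
print(len(xc))                          # 1000 samples
print(np.allclose(np.abs(xc), 1.0))     # True: constant-envelope waveform
```

Note that, like the MATLAB listing, the chirp has unit amplitude throughout; all of the pulse energy goes into phase modulation.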
%matched_filter.m
function [out,time,N,dpoints,dist] = matchedfilter(taup,b,k,scat_range,scat_rcs,scatno,rrec)
clc;
[xc,Xc,f,t,N] = LFM(taup,b,k);
%x(scatno,1:N)=0;
xcr(1:N) = 0;                     % accumulator for the received LFM pulse
G = 1; Pt = 10; Lam = 0.1; c = 3.e8;
for j = 1:length(scat_range)
    A = ((Pt*(G*Lam)^2*scat_rcs(j))/((4*pi)^3*scat_range(j)^4))^0.5;
    x(j,:) = A*exp(i*pi*(b/taup).*(t+(2*scat_range(j)/c)).^2);
    xcr = x(j,:) + xcr;
end
% Matched filtering of the continuous-time LFM chirp
xct = xc;
out = xcorr(xct,xcr);
out = out./N;
time = linspace(0,-taup/length(out)+2*taup,length(out));
rres = (c*taup)/2;
dpoints = ceil(rrec*N/rres);
dist = linspace(0,rrec,dpoints);
figure(1);
plot(dist,abs(out(N:N+dpoints-1)));
xlabel('Range in metres');
ylabel('Matched Filter Output');
figure(2);
plot(time,abs(out));
xlabel('Time in sec');
ylabel('Matched Filter Output');
save matchedfilter.mat;
clc;
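The principle behind matchedfilter.m can be checked with a minimal NumPy sketch (the pulse parameters and the 1500 m scatterer range are made-up illustrative values, and amplitude scaling and noise are omitted): a delayed echo of the chirp is cross-correlated with the transmitted waveform, and the correlation peak lands at the correct range:

```python
import numpy as np

c = 3e8
taup, b = 10e-6, 20e6
fs = 2 * b                                # sample at twice the bandwidth
n = int(round(fs * taup))
t = np.arange(n) / fs
tx = np.exp(1j * np.pi * (b / taup) * t ** 2)   # transmitted chirp

R = 1500.0                                 # hypothetical scatterer range (m)
delay = int(round((2 * R / c) * fs))       # two-way delay in samples
rx = np.zeros(3 * n, dtype=complex)
rx[delay:delay + n] = tx                   # unit-amplitude, noise-free echo

# matched filtering = cross-correlation with the transmitted pulse
out = np.abs(np.correlate(rx, tx, mode='valid'))
peak = int(np.argmax(out))
R_est = peak / fs * c / 2                  # convert peak lag back to range
print(R_est)                               # 1500.0
```

The peak lag converts back to exactly the simulated range, which is the essence of pulse compression: range resolution is set by the bandwidth b, not by the pulse length taup.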
%windowedoutput.m
function [out outf outt time timet] = windowedoutput(taup,b,k,scat_range,scat_rcs,scatno,rrec)
clc;
% Windowing in the frequency domain
[out,time,N,dpoints,dist] = matchedfilter(taup,b,k,scat_range,scat_rcs,scatno,rrec);
winf = hamming(length(out))';
IN = fftshift(fft(out));
OUTF = abs(IN).*winf;
outf = fftshift(ifft(OUTF,length(OUTF)));
figure(3)
plot(time,20*log10(abs(out)),'r');
hold on
plot(time,20*log10(abs(outf)));
xlabel('Time in seconds');
ylabel('Frequency Windowed Matched Filter Output in dBs');
hold off
% Windowing in the time domain
wint = hamming(length(out));
WINt = fftshift(ifft(wint));
outt = conv(out,WINt);
timet = linspace(0,-taup/length(outt)+2*taup,length(outt));
figure(4)
plot(time,20*log10(abs(out)),'r');
hold on
plot(timet,20*log10(abs(outt)));
xlabel('Time in seconds');
ylabel('Time Windowed Matched Filter Output in dBs');
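The reason for the Hamming window here is sidelobe control. Its effect can be measured independently of the radar code; the NumPy sketch below (illustrative, with arbitrary window length and padding) compares the peak sidelobe level of a rectangular window against a Hamming window:

```python
import numpy as np

n, pad = 64, 8192
rect = np.ones(n)
ham = np.hamming(n)

def peak_sidelobe_db(w):
    """Peak sidelobe of a window's magnitude spectrum, relative to the mainlobe."""
    spec = np.abs(np.fft.fft(w, pad))
    spec /= spec.max()
    i = 1
    while spec[i + 1] < spec[i]:     # walk down the mainlobe to its first null
        i += 1
    return 20 * np.log10(spec[i:pad // 2].max())

r_db = peak_sidelobe_db(rect)
h_db = peak_sidelobe_db(ham)
print(r_db)    # about -13 dB for the rectangular window
print(h_db)    # roughly -40 dB or lower for the Hamming window
```

The roughly 30 dB of extra sidelobe suppression is bought with a broader mainlobe, which is exactly the resolution-versus-sidelobe trade-off discussed in the pulse compression chapter.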
%SAR_DataSet.m
function [DataSet x zerosVector] = SAR_DataSet(xp, yg, Ta, F, h, v, K, B, Taw, PRF)
tic
clc
c = 3e8;
lambda = c/F;
fs = K*B;
Dsar = v*Ta;
du = v/PRF;
t = 0:1/(fs):Taw;
x = cos(pi*(B/Taw)*t.^2);
u = -Dsar/2:du:Dsar/2;
for SP = 1:length(yg)
    Rp(SP) = (h^2+yg(SP)^2)^0.5;
    R(SP,:) = Rp(SP)*(ones(1,length(u))+((u-xp(SP)).^2)/Rp(SP)^2).^0.5;
    Phase(SP,:) = exp(-i*4*pi*R(SP,:)*F/c);
    m = 1;
    for n = 1:length(R(SP,:))
        tr = 2*R(SP,n)/c;
        t1 = 0:1/fs:tr;
        zerosVector(m) = length(t1);
        m = m+1;
    end
    if (SP==1)
        scale = min(zerosVector);
    end
    zerosVector = zerosVector - scale*ones(1,length(zerosVector));
    for k = 1:length(zerosVector)
        Ret = Phase(SP,k)*[zeros(1,zerosVector(k)) x zeros(1,1+max(zerosVector)-zerosVector(k))];
        if (SP==1)
            DataSet1(k,:) = Ret;
        end
        if (SP==2)
            DataSet2(k,:) = Ret;
        end
        if (SP==3)
            DataSet3(k,:) = Ret;
        end
        if (SP==4)
            DataSet4(k,:) = Ret;
        end
        % (analogous branches for DataSet5 through DataSet10 are commented out)
    end
end
[r(1),col(1)] = size(DataSet1);
[r(2),col(2)] = size(DataSet2);
[r(3),col(3)] = size(DataSet3);
[r(4),col(4)] = size(DataSet4);
% (sizes of DataSet5 through DataSet10, commented out)
ZC = max(col);
DataSet1 = [DataSet1 zeros(r(1),ZC-col(1))];
DataSet2 = [DataSet2 zeros(r(2),ZC-col(2))];
DataSet3 = [DataSet3 zeros(r(3),ZC-col(3))];
DataSet4 = [DataSet4 zeros(r(4),ZC-col(4))];
% (zero-padding of DataSet5 through DataSet10, commented out)
DataSet = DataSet1 + DataSet2 + DataSet3 + DataSet4;
% DataSet = DataSet1 + ... + DataSet10; % commented-out ten-scatterer version
figure(1); image(real(DataSet));
xlabel('Fast Time Samples');
ylabel('Slow Time Samples');
title('RAW DATA');
save SAR_DataSet.mat;
clc
toc
%FastTime.m
function [matchFilter] = FastTime(DataSet, x, zerosVector)
tic
clc;
load SAR_DataSet;
maxPoint = 2*(max(zerosVector)+150)-1;
matchFilter = zeros(length(zerosVector),(2*length(DataSet(1,:))-1));
for n = 1:length(zerosVector)
    %chirp = DataSet(n,1:zerosVector(n)+150);
    chirp = DataSet(n,:);
    FTPC = xcorr(chirp,x);
    matchFilter(n,:) = [FTPC zeros(1,1+maxPoint-length(FTPC))];
    %matchFilter(n,:) = [FTPC];
end
figure(2);
image(real(matchFilter));
xlabel('Pulse Compressed Fast Time Samples');
ylabel('Slow Time Samples');
title('Pulse Compressed');
save FastTime;
toc
%SlowTime.m
function [slowTime] = SlowTime()
tic
clc
load FastTime;
[o,p] = size(matchFilter);
WR = hamming(o);
for I = 1:length(matchFilter(1,:))
    slowTime(:,I) = fftshift(abs(fft(WR.*matchFilter(:,I))));
end
figure(3); image(10*log10(abs(slowTime)));
xlabel('Pulse Compressed Fast Time Samples');
ylabel('Azimuth Compressed Slow Time Samples');
title('Azimuth Compressed');
save SlowTime.mat;
toc
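SlowTime.m embodies the Doppler-beam-sharpening idea: a windowed FFT across slow time sorts the echoes in each range bin by Doppler frequency. A toy NumPy sketch (with made-up PRF, pulse count, and Doppler values) shows a single target's slow-time phase history peaking at the expected Doppler bin:

```python
import numpy as np

prf, n_pulses = 1000.0, 128            # illustrative PRF (Hz) and pulses in the CPI
fd = 125.0                             # hypothetical target Doppler shift (Hz)
m = np.arange(n_pulses)
history = np.exp(2j * np.pi * fd * m / prf)      # slow-time phase history

win = np.hamming(n_pulses)                       # same windowing role as WR in SlowTime.m
spec = np.fft.fftshift(np.abs(np.fft.fft(win * history)))
bins = np.fft.fftshift(np.fft.fftfreq(n_pulses, 1.0 / prf))
fd_est = bins[int(np.argmax(spec))]
print(fd_est)                          # 125.0
```

The Doppler bin spacing is PRF/n_pulses (here 7.8125 Hz), which is what limits the cross-range resolution of the Doppler-beam-sharpened image.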
%AxisMapping.m
function [MappedSet] = AxisMapping()
% Y[l,Fd] <==> Y[R,x]
% l  -> R = Ro + c*Ts*l/2
% Fd -> x = -lambda*R*Fd/(2*v)
tic
clc
load SlowTime;
lambda = c/F;
Ro = mean(Rp);
for i = 1:length(slowTime(1,:))
    Rm(:,i) = Ro + (0.5*c.*slowTime(:,i))./fs;
end
for i = 1:length(slowTime(:,1))
    MappedSet(i,:) = 0.5.*Rm(i,:).*slowTime(i,:)./v;
end
figure(4); image(real(MappedSet));
xlabel('Range');
ylabel('Cross Range');
title('Axis Mapped Image');
save AxisMapping;
toc
%Dechirped.m
function [DeChirped] = Dechirped()
tic
clc
load AxisMapping;
u = u';
for j = 1:length(MappedSet(1,:))
    DeChirped(:,j) = matchFilter(:,j).*exp(i*((2*pi*u.^2)./(Rm(:,j).*lambda)));
    DeChirped(:,j) = fftshift(abs(fft(WR.*DeChirped(:,j))));
end
figure(5); image(real(DeChirped));
xlabel('Pulse Compressed Fast Time Samples');
ylabel('Range Compressed Slow Time Samples');
title('Azimuth Compressed With Dechirp');
save DeChirped.mat;
toc
Appendix
B
Fast Fourier Transform
What is the FFT
The fast Fourier transform (FFT) is an algorithm for computing the discrete Fourier transform that reduces the number of computations needed for N points from 2N^2 to 2N log2 N, where log2 is the base-2 logarithm. If the function to be transformed is not harmonically related to the sampling frequency, the response of an FFT looks like a sinc function (although the integrated power is still correct). The resulting leakage can be reduced by apodization using a tapering window, but this reduction comes at the expense of a broadened spectral response. The basic DFT (Discrete Fourier Transform) is

X[k] = sum_{n=0}^{N-1} x[n] e^(-j*2*pi*n*k/N),   k = 0, 1, ..., N-1
Direct computation requires about 4N real multiplications and 4N real additions for each k (a complex multiplication takes 4 real multiplications and 2 real additions). Over all N coefficients, this gives about 8N^2 operations. Generally, the term FFT refers to algorithms that work by breaking the DFT of a long sequence into smaller and smaller pieces. Algorithms that compute the DFT more efficiently than the direct method (i.e., in better than time proportional to N^2) are called Fast Fourier Transforms.
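As a cross-check of the definition above, a short NumPy sketch (illustrative only) computes the O(N^2) DFT directly from the summation formula and confirms that it matches the library FFT:

```python
import numpy as np

def dft_direct(x):
    """Direct O(N^2) DFT from the definition X[k] = sum_n x[n] e^{-j*2*pi*n*k/N}."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)   # full N x N twiddle matrix
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(64) + 1j * rng.standard_normal(64)
print(np.allclose(dft_direct(x), np.fft.fft(x)))   # True
```

Both routes produce the same N coefficients; the FFT simply reaches them with far fewer arithmetic operations by recursively splitting the sum.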
Define three different values for N, then take the transform of x[n] for each of them. The abs function returns the magnitude of the transform, since we are not concerned with distinguishing between real and imaginary components.

N1 = 64; N2 = 128; N3 = 256;
X1 = abs(fft(x,N1));
X2 = abs(fft(x,N2));
X3 = abs(fft(x,N3));

The frequency scale begins at 0 and extends to N - 1 for an N-point FFT. We then normalize the scale so that it extends from 0 to 1 - 1/N.

F1 = [0 : N1 - 1]/N1;
F2 = [0 : N2 - 1]/N2;
F3 = [0 : N3 - 1]/N3;

Plot each of the transforms one above the other:

subplot(3,1,1)
plot(F1,X1,'-x'), title('N = 64'), axis([0 1 0 20])
subplot(3,1,2)
plot(F2,X2,'-x'), title('N = 128'), axis([0 1 0 20])
subplot(3,1,3)
plot(F3,X3,'-x'), title('N = 256'), axis([0 1 0 20])

Upon examining the plot (shown in figure B.1), one can see that each of the transforms adheres to the same shape, differing only in the number of samples used to approximate that shape. What happens if N is the same as the number of samples in x[n]? To find out, set N1 = 30.
Appendix
C
Decibel Arithmetic
The decibel, often written dB, is widely used in radar system analysis and design. It expresses radar parameters and related quantities in terms of logarithms.

Gain in dB (in terms of power):   G_dB = 10 log10(Po/Pi)
Gain in dB (in terms of voltage): G_dB = 10 log10(Vo/Vi)^2 = 20 log10(Vo/Vi)
Gain in dB (in terms of current): G_dB = 10 log10(Io/Ii)^2 = 20 log10(Io/Ii)
Calculating the inverse:          Po/Pi = 10^(G_dB/10)

The power, voltage, and current forms agree when the input and output impedances are equal, since P = V^2/R = I^2*R.
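The forward and inverse conversions can be sketched in a few lines of Python (the function names are our own, for illustration):

```python
import math

def db(power_ratio):
    """Power ratio to decibels: G_dB = 10*log10(Po/Pi)."""
    return 10 * math.log10(power_ratio)

def inv_db(x_db):
    """Decibels back to a power ratio: Po/Pi = 10^(G_dB/10)."""
    return 10 ** (x_db / 10)

print(db(100))      # 20.0  (a power gain of 100 is 20 dB)
print(db(2))        # about 3.01 (doubling the power adds about 3 dB)
print(inv_db(30))   # 1000.0
```

Because logarithms turn products into sums, chains of gains and losses in the radar range equation reduce to simple dB addition and subtraction.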
Appendix
D
Speed Unit Conversions
Speed Conversion Formulae
To convert between miles per hour (mph) and knots (kts):
Speedkts = 0.868976 x Speedmph
Speedmph = 1.150779 x Speedkts
To convert between miles per hour (mph) and meters per second (m/s):
Speedm/s = 0.44704 x Speedmph
Speedmph = 2.23694 x Speedm/s
To convert between miles per hour (mph) and feet per second (ft/s):
Speedft/s = 1.46667 x Speedmph
Speedmph = 0.681818 x Speedft/s
To convert between knots (kts) and meters per second (m/s):
Speedm/s = 0.514444 x Speedkts
Speedkts = 1.943844 x Speedm/s
To convert between knots (kts) and kilometers per hour (km/h):
Speedkm/h = 1.852 x Speedkts (exact, by definition of the international knot)
Speedkts = 0.539957 x Speedkm/h
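Rather than memorizing pairwise factors, all of these follow from three exact definitions: 1 mph = 0.44704 m/s, 1 international knot = 1852/3600 m/s, and 1 km/h = 1000/3600 m/s. A small Python sketch (our own helper names) derives the table's factors from them:

```python
# exact unit definitions, all expressed in meters per second
MPH_TO_MS = 0.44704            # 1 mile = 1609.344 m exactly
KT_TO_MS = 1852.0 / 3600.0     # 1 international knot = 1852 m per hour
KMH_TO_MS = 1000.0 / 3600.0

def mph_to_kts(v):
    return v * MPH_TO_MS / KT_TO_MS

def kts_to_kmh(v):
    return v * KT_TO_MS / KMH_TO_MS

print(round(mph_to_kts(1), 6))   # 0.868976
print(kts_to_kmh(1))             # 1.852
```

Deriving each factor as a ratio of m/s definitions avoids the rounding drift that creeps in when conversion constants are copied between tables.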