Promax Processing Seismic Marine Textbook
ABSTRACT
Detailed digital analysis of sidescan sonar images significantly enhances the
ability to assess seafloor features and submerged artifacts. These images usually
have poor resolution compared with optical images. Commercial solutions exist
that could address this problem, such as high-resolution multibeam sidescan
sonar or bathymetric sonar. The present work shows an economical alternative
based on digital image processing techniques in the MATLAB environment. The
application presented here is easy to use, has been developed under a user-friendly
philosophy, and can be operated by users at any level. Two types of sonar surveys,
seafloor mapping and submerged-target searches (buried or not), each require
different processing methods for data analysis. This work is a first step and a
general-purpose tool for future lines of research in submerged-object recognition.
The results are comparable in quality with commercial hardware solutions.
1 Professor, Electronics Technology, Systems and Automation Engineering Department (TEISA), University of Cantabria,
Higher Technical Navigation School, C/Gamazo nº 1, 39004 Santander (Cantabria), Spain. Email: isabel.zamanilloB@unican.es,
Tel. +34 942 201331. 2 Professor, Communications Engineering Department (DICOM), University of Cantabria, I+D+i Telecom-
munication Building, Plaza de la Ciencia, Av. de los Castros s/n, 39005 Santander (Cantabria), Spain. Email: jose.zamanillo@unican.es,
Tel. +34943202219, Fax +34942201488. 3 Professor, Electronics Technology, Systems and Automation Engineering
Department (TEISA), University of Cantabria, Higher Technical Navigation School, C/Gamazo nº 1, 39004 Santander
(Cantabria), Spain. Email: [email protected], Tel. +34942201368, Fax +34942201303.
INTRODUCTION
Digital images captured from the echoes of a sidescan sonar onboard an unmanned
undersea vehicle are usually characterized by their low resolution. This is because
underwater sound transmission is limited, and this is most notable in usable
ranges. The usable range of high-frequency sound energy is greatly reduced by
seawater, typically to around 50 to 150 m (Blondel, P. 2009). Low-frequency sound
energy is reduced at a much lesser rate, with usable ranges in excess of 250 m
achievable. Therefore, a tradeoff exists between the higher-resolution images produced
by a high-frequency side scan sonar and the longer range provided by a low-frequency
side scan sonar. In analyzing digital side scan sonar data, numerous techniques
have been demonstrated to correct and enhance the imagery as well as to aid in
interpretation (Lurton, X. 2002) (Medwin, H. and Clay, C.S. 1998).
The collaboration of two research groups from different departments of the University
of Cantabria has been necessary to develop low-cost, flexible software for the
digital processing of sidescan sonar images in the MATLAB environment, using
digital filtering and advanced signal processing techniques. Some of these
techniques are already used in DVB digital video systems and broadcasting,
with excellent results (Hardie, R.C. and Barner, K.E. 1996).
With these systems, the researcher can move relatively quickly to within a few
meters of the sea floor, following its topography and recording video images
to find a place or object of interest. The problem presented by these vehicles is that
image quality degrades with depth, and the optical systems that capture
images need powerful lighting systems, which, in the case of autonomous vehicles
(AUVs) without a power connection to the surface through an umbilical wire, is not
feasible; sidescan digital sonar images are therefore used instead of optical ones. Figures 1 (a)
and 1 (b) show the ROV and AUV recently acquired by the University of Cantabria
and used in this work. Figure 1 (a) shows the ROV, model Seaeye Falcon from the
Swedish company SAAB; this vehicle is an auxiliary rescue vehicle equipped with an
articulated arm. It can record optical images underwater since it has
powerful illuminators fed from the surface. This vehicle lacks a sonar device
(installation will be considered in the future), which is why a direct comparison
of optical images and acoustic images is not available at this point.
Figure 1: ROV and AUV owned by the University of Cantabria. (a) ROV, model Seaeye Falcon
from the SAAB company. (b) AUV, model C’Inspector from the Kongsberg company.
Figure 1 (b) shows the AUV, model C’Inspector from the Norwegian company
Kongsberg. This autonomous vehicle is equipped with a high-speed optical fiber data
connection 1 km in length. The vehicle can be used for seabed inspection tasks and
for the detection of submerged objects, and it has been equipped with a
Tritech SeaKing sidescan sonar with a 675 kHz operating frequency and chirp modulation.
This device has a narrow beam and a shorter range (100 m) for more detailed
images of closer targets. Technical characteristics of this sonar are shown in Table I.
Chirp side scan sonar utilizes pulse compression techniques to produce long transmission
pulses and achieve long range without a resultant decrease in across-track resolution.
The commercial implementation of chirp side scan sonar is in a single-beam configuration.
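Pulse compression can be sketched numerically: correlating the received signal with a replica of the transmitted chirp compresses the long pulse into a sharp peak at the echo delay. The sketch below is illustrative only; all parameter values (sample rate, sweep band, pulse length, noise level) are assumptions, not the SeaKing sonar's specifications.

```python
import numpy as np

fs = 100_000.0                     # sample rate, Hz (illustrative)
T = 0.002                          # 2 ms transmit pulse
t = np.arange(0, T, 1 / fs)
f0, f1 = 5_000.0, 20_000.0         # chirp start/stop frequencies (illustrative)
k = (f1 - f0) / T                  # linear sweep rate
chirp = np.sin(2 * np.pi * (f0 * t + 0.5 * k * t**2))  # LFM transmit pulse

# Simulated reception: the pulse delayed by 300 samples, buried in noise
rng = np.random.default_rng(0)
rx = 0.3 * rng.standard_normal(2000)
delay = 300
rx[delay:delay + chirp.size] += chirp

# Matched filter = correlation with the transmitted replica; the long
# pulse is compressed into a narrow peak located at the echo delay.
mf = np.correlate(rx, chirp, mode="valid")
peak = int(np.argmax(np.abs(mf)))
print(peak)  # peak index near the 300-sample delay
```

The compressed pulse width depends on the swept bandwidth rather than the pulse length, which is why a long chirp can keep range resolution high.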
Table I: Technical characteristics of the Tritech SeaKing sidescan sonar.

Characteristics | Value
Operating frequency | 675 kHz (chirp modulation)
Horizontal beam width (-3 dB) | 1° / 0.5°
Vertical beam width (-3 dB) | 50°
Weight in air/water | 5.3 kg / 2.7 kg
Maximum operating depth | 4000 m
Power requirements | 18-36 V @ 12 VA
Control connector | Tritech sonar connector
Transmitter source level | 200 dB re 1 μPa @ 1 m
Transmitter pulse length | 50-200 μs
Receiver sensitivity | > 2 μV rms
Gain control range | 80 dB
Display dynamic range | 40 dB (software configurable)
Data sampling rates | 5-200 μs
Data resolution | 4-8 bits (software configurable)
Software | Tritech Seanet display software or low-level language commands
Data file format | Proprietary Tritech “V4Log”
Communication protocols | ARCNET, RS-232
Communication data rates | RS-232 up to 115.2 kbaud, ARCNET 156 or 78 kbaud
The distance from the sonar to a point on the seabed or target is called the slant range.
It should not be confused with the ground range, the distance between this point and
the point immediately below the sonar. The angle of incidence of the incoming
acoustic wave is a key factor in understanding how it will scatter. Most of the energy
is reflected in the specular direction. Some will be reflected along other angles
(scattering angles, distributed around the main reflection angle). Depending on the
seafloor or the submerged target, some energy will be lost in the seabed. A very small
portion, several orders of magnitude lower, might be reflected back toward the
imaging sonar; this is known as backscatter (Blondel and Murton, 1997). The seabed
reverberation area is mainly constituted by background noise (Zhang Xiao-wei,
Zheng Xiong-bo and Shen Yang, 2010).
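The relation between the two ranges follows from right-triangle geometry: the sonar, the point directly below it, and the target form a right triangle. A minimal sketch of this standard slant-range correction (an illustration under the usual flat-seafloor assumption, not code from the original application):

```python
import math

def ground_range(slant_range_m, sonar_altitude_m):
    """Convert slant range to ground range assuming a flat seafloor.

    slant^2 = altitude^2 + ground^2, so ground = sqrt(slant^2 - altitude^2).
    """
    if slant_range_m < sonar_altitude_m:
        raise ValueError("slant range cannot be shorter than the sonar altitude")
    return math.sqrt(slant_range_m**2 - sonar_altitude_m**2)

# A 3-4-5 right triangle scaled by 10: 50 m slant, 30 m altitude -> 40 m ground
print(ground_range(50.0, 30.0))  # 40.0
```

This correction is what turns raw slant-range pixels into positions on the seafloor before any interpretation of the image geometry.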
Figure 2: Definitions of basic geometric parameters used in sidescan sonar imagery.

The brightness of a sonar image is related to the ratio between the echoes and the
noise. If a comparison with ordinary optical images is made, sonar images are
low-frequency images with less detail, and the background noise of sonar images
consists of high-frequency impulse noise with larger amplitudes relative to the
multiple echoes from the target area. Because of the complexity of the underwater
environment, the gray level or monochrome color of the sonar image in the target
area is usually lower than that of the background noise. To improve the visual
effect and reduce the influence of the noise, it is very important to remove noise
from sonar images, as shown in Fig. 3 (a) for a grayscale sidescan image. Fig. 3 (b)
shows a photograph of the same tire on a sandy seafloor.

Figure 3: Comparison between a sidescan sonar image and a photograph of a tire in the
Bay of Santander. (a) Sidescan sonar image, with noise, of a tire in the Bay of Santander.
(b) Photograph of the same tire taken with the ROV.
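Impulse noise of the kind described above is commonly suppressed with a median filter. A minimal Python/SciPy sketch on a synthetic image (the image, spike density, and window size are illustrative assumptions, not data or code from the original work):

```python
import numpy as np
from scipy.ndimage import median_filter

# Synthetic "sonar" image: flat background plus a bright target patch
rng = np.random.default_rng(1)
img = np.full((64, 64), 100.0)
img[20:40, 20:40] = 160.0                 # target area

# Add 5% high-amplitude impulse noise
spikes = rng.random(img.shape) < 0.05
img[spikes] = 255.0

# A 3x3 median filter removes isolated impulses while preserving
# the edges of the target patch.
clean = median_filter(img, size=3)
print(np.count_nonzero(img == 255.0), "->", np.count_nonzero(clean == 255.0))
```

An impulse survives the 3x3 median only if a majority of its neighborhood is also spiked, which is rare at this density, so almost all spikes are removed while the target edges stay sharp.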
In addition to these techniques, 14 additional filters from the MATLAB image
processing toolbox have been implemented in the program. The user of ImageEasySonar
can select among 17 different image processing techniques, which are widely detailed
in the literature (Canny, J. 1986), (Corinthios, M., 1999), (Longbotham, H. and
Eberly, D., 1993), (Mitra, S.K., et al., 1991), (Jensen, John R. 1996), (Zhuo, S.,
Guo, D. and Sim, T., 2010).
The Comparison and Selection (CS) filter is one of the simpler enhancement filters
(Lee, Y. and Fam, A., 1987). As an example, we give a brief explanation of how this
filter works. The first step is to choose the color space in which to apply the
transformation: if grayscale is chosen, the mathematical transformation is applied
to a single layer; if RGB space is chosen, the technique must be applied separately,
layer by layer (red, green and blue layers), as shown in Fig. 5 (a) and Fig. 5 (b).
Values are integers in the range from 0 up to 255. This feature roughly triples the
computation time.
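The layer-by-layer processing of an RGB image can be sketched as follows (a hedged Python/NumPy illustration rather than the original MATLAB code; `filter_rgb` is a hypothetical helper name, and the 3x3 median filter merely stands in for any 2-D filter of the application):

```python
import numpy as np
from scipy.ndimage import median_filter

def filter_rgb(image, filt=lambda layer: median_filter(layer, size=3)):
    """Apply a 2-D filter separately to the R, G and B layers.

    `image` is an (H, W, 3) uint8 array with values 0-255. A grayscale
    image needs a single filter call, which is why processing an RGB
    image roughly triples the computation time.
    """
    return np.stack([filt(image[..., c]) for c in range(3)], axis=-1)

rgb = np.random.default_rng(2).integers(0, 256, (32, 32, 3), dtype=np.uint8)
out = filter_rgb(rgb)
print(out.shape)  # (32, 32, 3)
```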
The second step consists in properly choosing the geometry (linear, square, circular,
etc.) and the size (3x3 pixels, 5x5 pixels, 9x9 pixels or 11x11 pixels) of the movable
exploring window, or kernel, whose size makes the edges more visible, as shown in
Fig. 6 (c), or softens them, as shown in Fig. 6 (b). The third step consists in properly
fixing the J parameter of the algorithm; this parameter fixes the distance (in rank
positions) of the selected output from the median of all numerical values of the
kernel, reordered from the minimum to the maximum, as follows.

Figure 5: The choice of color space is an important decision when applying spatial
digital filtering. (a) B&W photograph of the lighthouse of Mouro, located in the Bay
of Santander (only one layer is needed to process the image). (b) Color photograph of
the same lighthouse (three layers, R, G and B, are needed to process the image).

The output Yk of the CS filter with parameter J at time k is defined through the
input values Xk-N, …, Xk+N, in a window of length 2N + 1, for a positive integer N, by:

    Yk = Xk(N+1-J)  if Xk ≤ μk
    Yk = Xk(N+1+J)  otherwise          (1)

where Xk(i) is the ith smallest sample inside the window, μk and Mk are the sample
mean and median, respectively, and J is an integer satisfying 0 ≤ J ≤ N.
When J = N, the CS filter selects either the minimum or the maximum value in
the window, depending on the data. For the case of J = 0, this filter reduces to the
well-investigated median filter. Since CS filters are nonlinear, the superposition
property does not hold in its general form (Lee, Y. and Fam, A., 1987). The CS filter
ranks the values in the filter window, or kernel, in numerical order and calculates
the mean value. Parameter J identifies a pair of rank numbers (measured outward
from the median of the rank list) whose corresponding raster values provide the two
possible filter output values. If the center cell value is less than the window mean,
the lower output value is assigned, and if it is greater than the mean, the higher
output value is used. The CS filter sharpens blurred edges while smoothing non-edge
areas. The sharpening effect increases with higher values of parameter J (which
move the filter output values farther from the mean).
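The selection rule of Eq. (1) can be sketched in one dimension as follows. This is an illustrative Python reimplementation under the interpretation above (center sample compared with the window mean), not the authors' MATLAB code; the 2-D case slides the same rule over a kernel neighborhood.

```python
import numpy as np

def cs_filter(x, N=1, J=1):
    """Comparison and Selection filter over windows of length 2N+1.

    If the window's center sample is at or below the window mean, the
    (N+1-J)-th order statistic is output; otherwise the (N+1+J)-th.
    J = 0 reduces to the median filter; J = N selects min or max.
    """
    x = np.asarray(x, dtype=float)
    y = x.copy()                       # borders are left unfiltered
    for k in range(N, x.size - N):
        neighborhood = x[k - N:k + N + 1]
        ranked = np.sort(neighborhood)
        if x[k] <= neighborhood.mean():
            y[k] = ranked[N - J]       # rank N+1-J (1-based)
        else:
            y[k] = ranked[N + J]       # rank N+1+J (1-based)
    return y

# A blurred step edge gets pushed toward the extremes, i.e. sharpened:
print(cs_filter([0, 0, 2, 5, 8, 10, 10], N=1, J=1))
```

Note how the gradual ramp 0, 2, 5, 8, 10 collapses toward the two plateau values, which is the edge-sharpening behavior described in the text.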
Some of the results obtained when applying the digital filtering techniques just
discussed are depicted in Figure 6. For brevity, the mathematical expressions used in
each filtering technique are not shown here, and many of the results obtained for
other images are omitted. Another interesting feature of the application is that the
user can apply several different digital filters to the same image and graphically
visualize the original image and up to two different transformations before saving
them to the hard disk. An important parameter is the execution time of each digital
filtering process, which depends on the parameters selected in the application, as
shown in Figure 6 (b) and Figure 6 (c), obtained by digital processing of the image
shown in Figure 6 (a).
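Measuring the execution time of a filtering pass is straightforward; a small Python sketch (illustrative, using `time.perf_counter` rather than MATLAB's tic/toc; the image size and filter are assumptions):

```python
import time
import numpy as np
from scipy.ndimage import median_filter

img = np.random.default_rng(3).random((512, 512))

t0 = time.perf_counter()
out = median_filter(img, size=5)      # the filtering pass being timed
elapsed = time.perf_counter() - t0

print(f"5x5 median filter on a 512x512 image: {elapsed * 1000:.1f} ms")
```

Timing the pass alone, excluding image loading and display, isolates the cost that actually varies with the selected filter parameters.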
Figure 6 (d) shows the result of applying the edge algorithm to the image (notice
that the calibration grid is visible), while the combined CS-plus-edge filtering not
only removes the grid but also shows in detail the edges of the processed image. The
edge detection class is designed to detect and highlight boundaries between image
areas that have distinctly different brightness. The output raster is a grayscale image
of the edges, with the cell brightness proportional to the difference in neighboring
cell brightness in the original image. The resulting image can be used as the basis
for additional image interpretation and analysis, such as image segmentation.
Filtering techniques based on the Prewitt and/or Sobel filters, shown in Figure 6 (f)
and Figure 6 (g), require a lower computational cost than the above-mentioned CS,
EDGE and WMMR filters. In our case, the result not only gives a different color to
the original image but also gives it an appearance of high relief, or false 3D. The
results are similar to those obtained with more expensive hardware. As future work,
the authors wish to apply these techniques in real time, or at least in “quasi-real”
time; in this part of the research the main goal has been balancing computational
speed against processing quality.
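The Prewitt operator reduces to two small convolution kernels, which is why it is so cheap. A minimal sketch of the gradient computation (illustrative Python/SciPy, not the authors' MATLAB code; Sobel differs only in the kernel weights):

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt kernels for horizontal and vertical intensity gradients
PX = np.array([[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]], dtype=float)
PY = PX.T

def prewitt_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image.

    Using a single directional response (gx alone) instead of the
    magnitude gives the shaded, high-relief / false-3D appearance
    mentioned in the text.
    """
    gx = convolve(img, PX)
    gy = convolve(img, PY)
    return np.hypot(gx, gy)

# A vertical step edge: the response concentrates along the edge
step = np.zeros((8, 8))
step[:, 4:] = 1.0
mag = prewitt_magnitude(step)
print(mag[4, 4], mag[4, 0])  # strong response at the edge, zero far away
```

Each output pixel costs only a few additions per kernel, with no sorting, which explains the lower computational cost compared with rank-order filters such as CS.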
Figure 6: Aspect of the interface and results of digitally processing sidescan sonar
images using the ImageEasySonar software. (a) Original image (with grid). (b) Image
processed with a CS filter (3x3, J=4 window); no grid is visible. (c) Image processed
with a CS filter (7x7, J=2 window); the grid appears again. (d) Image processed with
an EDGE filter; grids are also detected. (e) Image processed with a CS filter (3x3,
J=0.4) followed by an EDGE filter. (f) Result of applying a Prewitt filter to the
original image.
The hardware and software characteristics used in this study are the following:
— PC compatible with an AMD Athlon64 X2 dual-core μP @ 3.01 GHz and 2 GB
of DDR RAM.
— Operating system: Windows XP Professional v2002 with Service Pack 2.
— MATLAB R2009b 7.9.0.
CPU load is less than 5% with about 40 processes running at the same time, and
increases to between 50 and 55% while the selected filter is being applied. The
memory usage during the computing process, including loading the program itself,
in no case exceeds 90 MB. Figure 6 (h) shows the zoom tool included in the
application, which allows the details to be seen more clearly, even compared to the
initial image. The novelty of the work presented here is the use of these techniques,
designed for static and DVB optically captured video images, on “acoustic” ones
from a low-resolution single sidescan sonar.
CONCLUSIONS
Application software has been developed that allows processing images from a
sidescan sonar by using digital signal post-processing techniques, some of them
already used in DVB digital video systems. The intuitive interface has been
programmed under user-friendly and WYSIWYG (what you see is what you get)
philosophies and can be operated by users at any level for research or educational
purposes. Furthermore, the application allows overlapping different filtering
techniques and improving the image quality in the same box or frame. By applying
the Prewitt and Sobel filters, it is possible to obtain an approximate high-relief
profile of the seafloor without the need to acquire expensive hardware such as a
multibeam side-scan sonar or a slow scan by bathymetric sonar.
ACKNOWLEDGMENTS
The authors would like to thank the Scientific and Technical Research Services
(SCTI) of the University of Cantabria, under the supervision of the Vicerrectorado
de Investigación y Transferencia Tecnológica, and the Regional Government of
Cantabria for the logistical and financial support without which this work would
not have been possible.
REFERENCES
Antich, J. et al. (2005). "A PFM-based control architecture for a visually guided underwater
cable tracker to achieve navigation in troublesome scenarios". Journal of Maritime
Research, Vol. II, No. 1, pp. 33-50.
Bellingham, J.G. et al. (1994): A second generation survey AUV, in Proc. Autonomous Under-
water Vehicle Technology, July pp. 148–155.
Blondel, P. (2009): The Handbook of Sidescan Sonar. Springer/Praxis, Heidelberg, Ger-
many/Chichester, U.K., 316 pp.
Blondel, P. and B.J. Murton (1997): Handbook of Seafloor Sonar Imagery. Wiley/Praxis,
Chichester, U.K.
Bowens, A. (2009): Underwater Archaeology: The NAS Guide to Principles and Practice Sec-
ond Edition, Blackwell Publishing, UK, 220pp.
Canny, J. (1986): A Computational Approach to Edge Detection, IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, Vol. PAMI-8, No. 6, pp. 679-698.
Cheung, W.W.L.; Lam, V.W.Y.; Sarmiento, J.L.; Kearney, K.; Watson, R. and Pauly, D. (2009):
Projecting global marine biodiversity impacts under climate change scenarios. Fish and
Fisheries, Willey. Vol. 10, pp. 235–251.
Corinthios, M. (1999): Signals, Systems, Transforms, and Digital Signal Processing with
MATLAB. CRC Press, pp. 1316.
Dahms, Hans-Uwe and Hwang, Jiang-Shiou, (2010) :Perspectives of underwater optics in bio-
logical oceanography and plankton ecology studies. Journal of Marine Science and Tech-
nology, Vol. 18, No. 1, pp. 112-121
Drury, Stephen A. (2001): Image Interpretation in Geology (3rd ed.). Chapter 5, Digital Image
Processing. New York: Routledge. pp. 133-138.
Hardie, R.C.; Barner, K.E. (1996): Extended permutation filters and their application to edge
enhancement. IEEE Transactions on Image Processing, Vol. 5, No. 6, Jun, pp.855-867
Hardie, R.C. and Boncelet, C., (1993): LUM filters: a class of rank-order-based filters for
smoothing and sharpening, IEEE Transactions on Signal Processing, Vol. 41, pp. 1061-
1076.
Hardie, R.C. and Boncelet, C.G., (1995): Gradient-based edge detection using nonlinear edge
enhancing prefilters, IEEE Transactions on Image Processing, Vol.4 (11), 1572-1577.
Ishoy, A. (2000): How to make survey instruments “AUV-friendly”,” in Proc. MTS/IEEE Int.
Conf. OCEANS, pp. 1647–1652.
Jensen, J. R., (2005): An Introductory Digital Image processing: A remote sensing perspective
Image enhancement, Prentice-Hall, UpperSaddle River N.J. 526 pp.
Jensen, John R. (1996): Introductory Digital Image Processing: a Remote Sensing Perspective
(2nd ed.). Chapter 7, Image Enhancement. Upper Saddle River, NJ: Prentice-Hall. pp.
153-172.
Lee, Y. and Fam, A. (1987): An edge gradient enhancing adaptive order statistic filter. IEEE
Transactions on Acoustics, Speech, and Signal Processing, Vol. 35, pp. 680-695.
Lillesand, Thomas M. and Kiefer, Ralph W. (1994): Remote Sensing and Image Interpretation
(3rd ed.). Chapter 7, Digital Image Processing. New York: John Wiley and Sons. pp. 585-
618.
Lim, Jae S.,(1990): Two-Dimensional Signal and Image Processing, Englewood Cliffs, NJ,
Prentice Hall, pp. 478-488.
Longbotham, H. and Eberly, D. (1993): The WMMR filters: a class of robust edge enhancers,
IEEE Transactions on Signal Processing, Vol. 41, pp. 1680-1685.
Lurton, X. (2002): An Introduction to Underwater Acoustics. Springer/Praxis, Heidelberg,
Germany/Chichester, U.K., 380 pp.
Marani, G. (2009): Advances in autonomous underwater intervention for AUVs, in Work
Shops and Tutorials, 2009 IEEE International Conference on Robotics and Automation.
Kobe, Japan.
Medwin, H. and Clay, C.S. (1998): Fundamentals of Acoustical Oceanography. Academic
Press, London, 712 pp.
Mitra, S.K.; Li, H.; Lin, I.-S. and Yu, T.-H. (1991): A new class of nonlinear filters for image
enhancement, Acoustics, Speech, and Signal Processing, 1991. ICASSP-91., 1991 Interna-
tional Conference on 4, 2525-2528.
Padmavathi, G.et al. (2010): Comparison of Filters used for Underwater Image Pre-Process-
ing. IJCSNS International Journal of Computer Science and Network Security, VOL.10
No.1, January, pp.58-65.
Palomeras, N. et al.,(2010): Distributed Architecture for Enabling Autonomous Underwater
Intervention Missions, in Proceedings Of the 4th Annual IEEE International System
Conference. San Diego, USA.
Parker, James R., (1996): Algorithms for Image Processing and Computer Vision, New York,
John Wiley & Sons, Inc., pp. 23-29.
Poularikas, A.D. and Ramadan, Z.M. (2006): Adaptive Filtering Primer with MATLAB. Tay-
lor & Francis. CRC Press.
Russ, J.C. (2007): The Image Processing Handbook. 5th Edition. CRC Press.pp.817.
Taiho Koh and Powers, E., (1985): Second-order Volterra filtering and its application to non-
linear system identification, IEEE Transactions on Acoustics, Speech, and Signal Process-
ing, Vol. 33, pp. 1445-1455.
Thurnhofer, S. and Mitra, S.K., (1996): A general framework for quadratic Volterra filters for
edge enhancement, IEEE Transactions on Image Processing, Vol. 5 pp. 950-96.
Urick, R.J. (1996): Principles of Underwater Sound, Third Edition. Peninsula Publishing, Los
Altos, CA, 444 pp.
Uvais Qidwai and C.H. Chen: Digital image processing, An algorithmic approach with MAT-
LAB. Chapman and Hall Textbooks in computing. CRC Press ISBN 978-1-4200-7950-0.
Von Alt, C., et al. (2001): Hunting for mines with REMUS: A high performance, affordable,
free swimming underwater robot, in Proc. MTS/IEEE Int. Conf. OCEANS, pp. 117-122.
Wang, Q. and Wang, X. (2010): Interferometric Fiber Optic Signal Processing Based on
Wavelet Transform for Subsea Gas Pipeline Leakage Inspection. International Confer-
ence on Measuring Technology and Mechatronics Automation (ICMTMA 2010), pp.
501-504.
Zhang Xiao-wei; Zheng Xiong-bo and Shen Yang, (2010): Sidescan sonar image de-noising
algorithm in multi-wavelet domain. Computer Application and System International
Conference on Modeling (ICCASM 2010), Vol: 2, pp. 367-371.
Zhuo, S.; Guo, D. and Sim, T. (2010): Robust Flash Deblurring, IEEE Conference on Com-
puter Vision and Pattern Recognition (CVPR), pp. 2440-2447.