Concealed Weapon Detection Using Digital Image Processing

Abstract:
We have recently witnessed a series of bomb blasts in Mumbai. Bombs went off in buses and underground stations, killing many and injuring many more. On July 13th, seven explosions took place within one hour, leaving the world shell-shocked and Indians in terror. This situation is not limited to Mumbai; it can happen anywhere in the world, at any time. People think bomb blasts cannot be predicted beforehand. Here we present a technology that can help detect suicide bombers and concealed weapons before an attack: IMAGE PROCESSING FOR CONCEALED WEAPON DETECTION.
The detection of weapons concealed underneath a person's clothing is very important to improving the security of the general public as well as the safety of public assets such as airports, buildings, and railway stations. Manual screening procedures for detecting concealed weapons such as handguns, knives, and explosives are common in controlled-access settings like airports, entrances to sensitive buildings, and public events. It is sometimes desirable to be able to detect concealed weapons from a standoff distance, especially when it is impossible to arrange the flow of people through a controlled procedure. In this paper we describe the concepts of the technology of CONCEALED WEAPON DETECTION (CWD): the sensor improvements, how the imaging takes place, and the challenges. We also describe techniques for simultaneous noise suppression and object enhancement of video data and show some mathematical results.
Introduction:
Until now, the detection of concealed weapons has been done by manual screening procedures, used to control explosives in places such as airports, sensitive buildings, and famous landmarks. These manual screening procedures do not give satisfactory results: they can screen a person only when the person is near the screening machine, and they sometimes give false alarm indications. We therefore need a technology that can detect a weapon by scanning from a distance. This can be achieved by imaging for concealed weapons. The goal is the eventual deployment of automatic detection and recognition of concealed weapons. This is a technological challenge that requires innovative solutions in sensor technologies and image processing. The problem also presents challenges in the legal arena. A number of sensors based on different phenomenologies, as well as image processing support, are being developed to observe objects underneath people's clothing.
Imaging Sensors:
Imaging sensors developed for CWD applications can be categorized according to their portability, their proximity to the subject, and whether they use active or passive illumination.
1. Infrared Imager:
Infrared imagers utilize the temperature distribution information of the target to form an image.
Normally they are used for a variety of night-vision applications, such as viewing vehicles and people.
The underlying theory is that the infrared radiation emitted by the human body is absorbed by clothing
and then re-emitted by it. As a result, infrared radiation can be used to show the image of a concealed
weapon only when the clothing is tight, thin, and stationary. For normally loose clothing, the emitted
infrared radiation will be spread over a larger clothing area, thus decreasing the ability to image a weapon.
2. Passive MMW Imaging Sensors:
First Generation:
Passive millimeter wave (MMW) sensors measure the apparent temperature through the energy that is emitted or reflected by sources. The output of the sensors is a function of the emissivity of the objects in the MMW spectrum as measured by the receiver. Clothing penetration for concealed weapon detection is made possible by MMW sensors due to the low emissivity and high reflectivity of objects like metallic guns. In early 1995, the MMW data were obtained by means of scans using a single detector that took up to 90 minutes to generate one image.
Figure 1(a) shows a visual image of a person wearing a heavy sweater that conceals two guns made of metal and ceramic. The corresponding 94-GHz radiometric image, Figure 1(b), was obtained by scanning a single detector across the object plane using a mechanical scanner. The radiometric image clearly shows both firearms.

Figure 1: (a) visible and (b) MMW image of a person concealing two guns beneath a heavy sweater.
Second Generation:
Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. One such camera is the pupil-plane array from Trex Enterprises. It is a 94-GHz radiometric pupil-plane imaging system that employs frequency scanning to achieve vertical resolution and uses an array of 32 individual waveguide antennas for horizontal resolution. This system collects up to 30 frames/s of MMW data. The following figure shows the visible and second-generation MMW images of an individual hiding a gun underneath his jacket. It is clear from Figures 1(b) and 2(b) that the image quality of the video-rate camera is degraded relative to the first-generation image.
Figure 2: (a) visual image and (b) second-generation MMW image of a person concealing a handgun beneath a jacket.
By fusing passive MMW image data with corresponding infrared (IR) or electro-optical (EO) imagery, more complete information can be obtained; this information can then be utilized to facilitate concealed weapon detection. Fusion of an IR image revealing a concealed weapon with its corresponding MMW image has been shown to facilitate extraction of the concealed weapon. This is illustrated in the example given in Figure 3: Figure 3(a) shows an image taken from a regular CCD camera, and Figure 3(b) shows a corresponding MMW image. If either one of these two images alone is presented to a human operator, it is difficult to recognize the weapon concealed underneath the rightmost person's clothing. If a fused image, as shown in Figure 3(c), is presented, a human operator is able to respond with higher accuracy. This demonstrates the benefit of image fusion for the CWD application, which integrates complementary information from multiple types of sensors.
An image processing architecture for CWD is shown in Figure 4. The input can be multisensor data (i.e., MMW + IR, MMW + EO, or MMW + IR + EO) or MMW data alone. In the latter case, the registration and fusion blocks can be removed from Figure 4. The output can take several forms. It can be as simple as a processed image/video sequence displayed on a screen; a cued display where potential concealed weapon types and locations are highlighted with associated confidence measures; a "yes," "no," or "maybe" indicator; or a combination of the above. The image processing procedures that have been investigated for CWD applications range from simple denoising to automatic pattern recognition. Before an image or video sequence is presented to a human observer for operator-assisted weapon detection or fed into an automatic weapon detection algorithm, it is desirable to preprocess the images or video data to maximize their exploitation. The preprocessing steps considered in this section include enhancement and filtering for the removal of shadows, wrinkles, and other artifacts. When more than one sensor is used, preprocessing must also include registration and fusion procedures.
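As a rough illustration of this architecture, the Python sketch below chains the stages of Figure 4; every stage function here is an injected placeholder of our own naming, not an API from this paper.

```python
import numpy as np

def cwd_pipeline(mmw, denoise, detect, ir=None, register=None, fuse=None):
    """Chain the Figure 4 stages: optional registration and fusion for
    multisensor input, then preprocessing, then weapon detection.
    `denoise`, `detect`, `register`, and `fuse` are caller-supplied
    stage functions (placeholders, not a fixed API)."""
    img = np.asarray(mmw, dtype=np.float64)
    if ir is not None:
        aligned = register(np.asarray(ir, dtype=np.float64), img)  # align IR to MMW
        img = fuse(img, aligned)        # combine complementary information
    img = denoise(img)                  # noise suppression / enhancement
    return detect(img)                  # cued display, "yes/no/maybe", etc.
```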
Many techniques have been developed to improve the quality of MMW images. In this section, we describe a technique for simultaneous noise suppression and object enhancement of passive MMW video data and show some mathematical results. Denoising of the video sequences can be achieved temporally or spatially. First, temporal denoising is achieved by motion-compensated filtering, which estimates the motion trajectory of each pixel and then conducts a 1-D filtering along the trajectory.
This reduces the blurring effect that occurs when temporal filtering is performed without regard to object motion between frames. The motion trajectory of a pixel can be estimated by various algorithms, such as optical flow methods, block-based methods, and Bayesian methods. If the motion in an image sequence is not abrupt, we can restrict the search for the motion trajectory to a small region in the subsequent frames. For additional denoising and object enhancement, the technique employs a wavelet transform method based on the multiscale edge representation. The approach provides more flexibility and selectivity with less blurring. Furthermore, it offers a way to enhance objects in low-contrast images.
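To make the idea concrete, here is a minimal block-matching sketch in Python (NumPy only); the function and parameter names are ours, and the original work may use optical flow or Bayesian motion estimators instead.

```python
import numpy as np

def motion_compensated_average(frames, patch=8, search=4):
    """Temporally denoise the middle frame: for each patch, find the
    best-matching patch in every other frame within a small search
    window (the estimated motion trajectory), then average along it."""
    mid = len(frames) // 2
    ref = np.asarray(frames[mid], dtype=np.float64)
    h, w = ref.shape
    out = ref.copy()
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = ref[y:y + patch, x:x + patch]
            samples = [block]
            for k, f in enumerate(frames):
                if k == mid:
                    continue
                f = np.asarray(f, dtype=np.float64)
                best, best_err = block, np.inf
                # Search a small neighborhood, assuming motion is not abrupt.
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        yy, xx = y + dy, x + dx
                        if 0 <= yy <= h - patch and 0 <= xx <= w - patch:
                            cand = f[yy:yy + patch, xx:xx + patch]
                            err = np.sum((cand - block) ** 2)
                            if err < best_err:
                                best, best_err = cand, err
                samples.append(best)
            # 1-D temporal mean along the estimated trajectory.
            out[y:y + patch, x:x + patch] = np.mean(samples, axis=0)
    return out
```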
Let $\psi^1(x, y)$ and $\psi^2(x, y)$ be wavelets for the x and y directions of an image, respectively. The dyadic wavelet transform of a function f(x, y) at scale $2^j$ is defined as

$$W^1_{2^j} f(x, y) = f * \psi^1_{2^j}(x, y), \qquad W^2_{2^j} f(x, y) = f * \psi^2_{2^j}(x, y).$$

The vector $\big(W^1_{2^j} f(x, y),\, W^2_{2^j} f(x, y)\big)$ contains the gradient information of f(x, y) at the point (x, y). The multiscale edge representation $G_{2^j}(f)$ of an image at level j is obtained from the magnitude $M_{2^j} f(x, y)$ and angle $A_{2^j} f(x, y)$ of the gradient vector, defined as

$$M_{2^j} f(x, y) = \sqrt{\big|W^1_{2^j} f(x, y)\big|^2 + \big|W^2_{2^j} f(x, y)\big|^2}, \qquad A_{2^j} f(x, y) = \arctan\!\left(\frac{W^2_{2^j} f(x, y)}{W^1_{2^j} f(x, y)}\right).$$
The multiscale edge representation $G_{2^j}(f)$ denotes the collection of local maxima of the magnitude $M_{2^j} f(x, y)$ at points $(x_i, y_i)$ along the direction $A_{2^j} f(x, y)$. The wavelet-transform-based denoising and enhancement technique is achieved by manipulating $G_{2^j}(f)$. By suppressing the noisy edges below a predefined threshold in the finer scales, noise can be reduced while most of the true edges are preserved. To avoid accidentally removing true edges in the lower scales, where true edges generally become smaller, variable thresholds can be applied depending on the scale. Enhancement of the image contrast is performed by stretching the multiscale edges in $G_{2^j}(f)$. A denoised and enhanced image is then reconstructed from the modified edges by the inverse wavelet transform. Figure 5 shows the results of this technique. In Figure 5(a), which shows a frame taken from the sample video sequence, the concealed gun does
not show clearly because of noise and low contrast. Figure 5(b) shows the frame denoised by motion-compensated filtering. The frame was then spatially denoised and enhanced by the wavelet transform methods. Four decomposition levels were used, and edges in the fine scales were detected using the magnitude and angles of the gradient of the multiscale edge representation. The threshold for denoising was 15% of the maximum gradient at each scale. Figure 5(c) shows the final results: the contrast-enhanced and denoised frames. Note that the image of the handgun on the chest of the subject is more apparent in the enhanced frame than in the original frame. However, spurious features such as glint are also enhanced; higher-level procedures such as pattern recognition have to be used to discard these undesirable features.
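The following Python sketch (using the PyWavelets package) mimics this procedure with plain wavelet-coefficient thresholding rather than the exact multiscale-edge (wavelet-maxima) representation; the 15%-of-maximum per-scale threshold and the four decomposition levels follow the text, while the `stretch` factor for contrast enhancement is our own illustrative parameter.

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise_enhance(frame, wavelet="db2", levels=4,
                            thresh_frac=0.15, stretch=1.5):
    """Simplified analogue of the multiscale-edge technique: suppress
    weak detail coefficients (noise) below a per-scale threshold and
    amplify the surviving ones (edge stretching), then reconstruct."""
    frame = np.asarray(frame, dtype=np.float64)
    coeffs = pywt.wavedec2(frame, wavelet, level=levels)
    new_coeffs = [coeffs[0]]  # keep the coarse approximation untouched
    for detail in coeffs[1:]:
        bands = []
        for band in detail:  # horizontal, vertical, diagonal details
            t = thresh_frac * np.max(np.abs(band))  # scale-adaptive threshold
            bands.append(np.where(np.abs(band) >= t, band * stretch, 0.0))
        new_coeffs.append(tuple(bands))
    # Reconstruction may be padded by a pixel; crop to the input size.
    rec = pywt.waverec2(new_coeffs, wavelet)
    return rec[:frame.shape[0], :frame.shape[1]]
```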
Clutter filtering is used to remove unwanted details (shadows, wrinkles, imaging artifacts, etc.) that are not needed in the final image for human observation and that can adversely affect the performance of the automatic recognition stage. This helps improve the recognition performance, whether operator-assisted or automatic. For this purpose, morphological filters have been employed. Examples of the use of morphological filtering for noise removal are provided through the complete CWD example, a full description of which is given in a later section.
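As a hedged example, clutter filtering of this kind could look like the following SciPy sketch; the structuring-element size is an arbitrary choice, and the actual morphological filters used for CWD may differ.

```python
from scipy import ndimage

def morphological_clutter_filter(frame, size=5):
    """Grayscale opening followed by closing: opening removes small
    bright speckle, closing fills small dark gaps (wrinkles, shadows),
    while larger weapon-scale structures are preserved."""
    opened = ndimage.grey_opening(frame, size=(size, size))
    return ndimage.grey_closing(opened, size=(size, size))
```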
As indicated earlier, making use of multiple sensors may increase the efficacy of a CWD system. The first step toward image fusion is a precise alignment of the images (i.e., image registration). Very little has been reported on the registration problem for the CWD application. Here, we describe a registration approach for images taken at the same time from different but nearly collocated (adjacent and parallel) sensors, based on the maximization of mutual information (MMI) criterion. MMI states that two images are registered when their mutual information (MI) reaches its maximum value. This can be expressed mathematically as

$$a^* = \arg\max_{a} I\big(F(\tilde{x}),\, R(T_a(\tilde{x}))\big),$$

where F and R are the images to be registered. F is referred to as the floating image, whose pixel coordinates $\tilde{x}$ are mapped to new coordinates on the reference image R. The reference image R is resampled at the positions defined by the new coordinates $T_a(\tilde{x})$, where T denotes the transformation model and the dependence of T on its associated parameters a is indicated by the notation $T_a$. I is the MI similarity measure, calculated over the region of overlap of the two images through their joint histogram. The criterion above says that the two images F and R are registered through $T_{a^*}$ when $a^*$ globally optimizes the MI measure. A two-stage registration algorithm was developed for the registration of IR images with the corresponding first-generation MMW images. At the first stage, two human-silhouette extraction algorithms were applied, followed by a binary correlation to coarsely register the two images. The purpose was to provide an initial search point close to the final solution for the second stage of the registration algorithm, which is based on the MMI criterion. In this manner, any local optimizer can be employed to maximize the MI measure. One registration result obtained by this approach is illustrated through the example given in Figure 6.
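A minimal NumPy sketch of the MMI idea follows: MI is computed from the joint histogram, and, in place of the two-stage optimizer described above, a brute-force search over integer translations stands in as the transformation model $T_a$. All names are ours, and both images are assumed to have the same shape.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """MI of two equal-sized images via their joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))

def register_translation(floating, reference, max_shift=10):
    """Exhaustive search over integer translations, keeping the shift
    that maximizes MI over the overlap region (a coarse stand-in for
    the two-stage optimizer described in the text)."""
    best, best_mi = (0, 0), -np.inf
    h, w = reference.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlap of the reference and the shifted floating image.
            ys, xs = max(0, dy), max(0, dx)
            ye, xe = min(h, h + dy), min(w, w + dx)
            f = floating[ys - dy:ye - dy, xs - dx:xe - dx]
            r = reference[ys:ye, xs:xe]
            mi = mutual_information(f, r)
            if mi > best_mi:
                best, best_mi = (dy, dx), mi
    return best, best_mi
```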
The most straightforward approach to image fusion is to take the average of the source images, but this can produce undesirable results, such as a decrease in contrast. Many of the advanced image fusion methods involve multiresolution image decomposition based on the wavelet transform. First, an image pyramid is constructed for each source image by applying the wavelet transform to the source images. This transform-domain representation emphasizes important details of the source images at different scales, which is useful for choosing the best fusion rules. Then, using a feature selection rule, a fused pyramid is formed for the composite image from the pyramid coefficients of the source images. The simplest feature selection rule is choosing the maximum of the two corresponding transform values. This allows the integration of details into one image from two or more images. Finally, the composite image is obtained by taking the inverse pyramid transform of the composite wavelet representation. The process can be applied to the fusion of multiple source images. This type of method has been used to fuse IR and MMW images for the CWD application [7]. The first fusion example for the CWD application is given in Figure 7. Two IR images taken by separate IR cameras from different viewing angles are considered in this case. The advantage of image fusion is clear in this case, since a complete gun shape can be observed only in the fused image. The second fusion example, the fusion of IR and MMW images, is provided in the following figure.
Figure 7: (a) and (b) original IR images; (c) fused image.
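Assuming the two source images are already registered and equal in size, a sketch of this wavelet fusion scheme with the maximum-magnitude selection rule might look as follows (PyWavelets; the wavelet choice and level count are arbitrary):

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(img_a, img_b, wavelet="db2", levels=3):
    """Max-coefficient fusion rule described in the text: keep, at each
    position and scale, the detail coefficient with the larger
    magnitude; average the coarse approximations."""
    a = np.asarray(img_a, dtype=np.float64)
    b = np.asarray(img_b, dtype=np.float64)
    ca = pywt.wavedec2(a, wavelet, level=levels)
    cb = pywt.wavedec2(b, wavelet, level=levels)
    fused = [(ca[0] + cb[0]) / 2.0]  # average the approximations
    for da, db in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in zip(da, db)))
    rec = pywt.waverec2(fused, wavelet)
    return rec[:a.shape[0], :a.shape[1]]  # crop any reconstruction padding
```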
After preprocessing, the images/video sequences can be displayed for operator-assisted weapon
detection or fed into a weapon detection module for automated weapon detection. Toward this aim,
several steps are required, including object extraction, shape description, and weapon recognition.
1) Shape Description:
A) Moments: Six shape descriptors, SD(1) through SD(6), are defined from the second- and third-order normalized moments; they are translation, scale, and rotation invariant. They are built on the normalized central moments

$$\eta_{p,q} = \frac{\mu_{p,q}}{\mu_{0,0}^{(p+q+2)/2}},$$

where $\mu_{p,q} = \sum_x \sum_y (x - \bar{x})^p (y - \bar{y})^q f(x, y)$ is the central moment. The performance of these six moment-based shape descriptors is examined in the next section. In addition to the moments of images, moments of region boundaries can also be defined. Let the coordinates of the N contour pixels of the object be described by an ordered set (x(i), y(i)), i = 1, 2, . . . , N. The Euclidean distance between the centroid $(\bar{x}, \bar{y})$ and the ordered sequence of contour pixels of the shape is denoted d(i), i = 1, 2, . . . , N. This set forms a single-valued, unique 1-D representation of the contour. Based on the set d(i), the pth moment is defined as

$$m_p = \frac{1}{N} \sum_{i=1}^{N} \big[d(i)\big]^p.$$
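For illustration, the sketch below computes the normalized central moments $\eta_{p,q}$ and the contour-distance moments just defined (NumPy only); the exact combinations forming SD(1)-SD(6) are not reproduced in this text, so no specific descriptor formula is assumed, and the centroid of the contour points is used as an approximation of the shape centroid.

```python
import numpy as np

def central_moment(img, p, q):
    """mu_{p,q} = sum over pixels of (x - xbar)^p (y - ybar)^q f(x, y)."""
    y, x = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xbar = (x * img).sum() / m00
    ybar = (y * img).sum() / m00
    return ((x - xbar) ** p * (y - ybar) ** q * img).sum()

def normalized_moment(img, p, q):
    """eta_{p,q} = mu_{p,q} / mu_{0,0}^{(p+q+2)/2}."""
    return central_moment(img, p, q) / central_moment(img, 0, 0) ** ((p + q + 2) / 2)

def contour_distance_moment(contour, p):
    """p-th moment of the centroid-to-contour distance sequence d(i)."""
    pts = np.asarray(contour, dtype=np.float64)      # shape (N, 2)
    d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return np.mean(d ** p)
```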
B) Circularity:
Mathematical Analysis:
To evaluate the performance of each individual shape descriptor, a test was designed based on the available MMW video sequence. First, a set of 30 frames was selected from a sequence of MMW data. Objects from each frame were extracted using the SMP described previously. There were 166 total objects extracted, among which 28 were weapons, as determined by observing the original video sequence. To determine the performance of each shape descriptor, the probability of detection (PD) versus the probability of false alarm (PFA) is plotted by choosing different thresholds for each shape descriptor.
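A simple way to generate such a plot is sketched below; whether a weapon is flagged for scores above or below the threshold depends on the descriptor, and the `>=` convention here is only an assumption.

```python
import numpy as np

def pd_pfa_curve(scores, labels, num_thresholds=100):
    """Sweep a threshold over a shape-descriptor score and return the
    (PFA, PD) pairs: PD = detected weapons / total weapons,
    PFA = false alarms / total non-weapon objects."""
    scores = np.asarray(scores, dtype=np.float64)
    labels = np.asarray(labels, dtype=bool)        # True = weapon
    thresholds = np.linspace(scores.min(), scores.max(), num_thresholds)
    pd, pfa = [], []
    for t in thresholds:
        flagged = scores >= t
        pd.append(np.mean(flagged[labels]))        # hit rate on weapons
        pfa.append(np.mean(flagged[~labels]))      # alarm rate on clutter
    return np.array(pfa), np.array(pd)
```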
Figure 10: PD versus PFA for (a) C(0), (b) SD(7) and SD(8), (c) SD(1) to SD(6).
Figure 10(a) shows that when all the weapons are detected (PD = 1.00), the PFA is about 0.13. Figure 10(b) shows the results obtained when the FD-based measures SD(7) and SD(8) are used. It shows that the sum of the magnitudes of the FDs results in better performance, with a lower PFA, than using the magnitude of the combination of the positive and corresponding negative phases of the FDs. Finally, Figure 10(c) shows the results of applying the moment-based shape measures to the set of objects. The plots of PD versus PFA show that SD(1) and SD(2), which are based on second-order moments, are the worst behaved, whereas SD(3) through SD(6), based on third-order moments, are the best behaved and result in small values of PFA while generating very similar results.
Challenges:
There are several challenges ahead. One critical issue is the challenge of performing detection at a distance with a high probability of detection and a low probability of false alarm. Another difficulty to be surmounted is the development of portable multisensor instruments. Finally, detection systems go hand in hand with the subsequent response by the operator, and system development should take into account the overall context of deployment.
Conclusion:
Imaging techniques based on a combination of sensor technologies and processing will potentially play a key role in addressing the concealed weapon detection problem. In this paper, we first briefly reviewed the sensor technologies being investigated for the CWD application. Of the various methods being investigated, passive MMW imaging sensors offer the best near-term potential for providing a noninvasive method of observing metallic and plastic objects concealed underneath common clothing. Recent advances in MMW sensor technology have led to video-rate (30 frames/s) MMW cameras. However, MMW cameras alone cannot provide useful information about the detail and location of the individual being monitored. To enhance the practical value of passive MMW cameras, sensor fusion approaches using MMW and IR, or MMW and EO, cameras have been described. By integrating the complementary information from different sensors, a more effective CWD system is expected. In the second part of this paper, we provided a survey of the image processing techniques being developed to achieve this goal. Specifically, topics such as MMW image/video enhancement, filtering, registration, fusion, extraction, description, and recognition were discussed.
References:
- www.google.com
- DSP Applications