Survey of Object Detection Approaches in Embedded Platforms: II. Literature Review
Tracking of an object in the real world is reported in [18]. An RTOS is installed on the embedded processor that executes the object detection algorithm. A template matching approach is used to detect the object in the image frame captured by a CMOS camera sensor. The ARM processor helps to meet the real-time constraints, but the implementation is limited by the difference in size between the foreground and background objects.
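To make this detection step concrete, the following is a minimal OpenCV sketch of per-frame template matching. It is not the embedded ARM/RTOS implementation of [18]; the template file name, camera index, and acceptance threshold are assumptions chosen for illustration.

```python
import cv2

# Minimal per-frame template matching sketch (assumed template file, camera
# index 0, and score threshold); illustrates the detection idea of [18].
template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)  # assumed file
cap = cv2.VideoCapture(0)                                           # assumed camera

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Slide the template over the frame and score every position.
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)
    if max_val > 0.7:                                 # assumed acceptance threshold
        h, w = template.shape
        cv2.rectangle(frame, max_loc, (max_loc[0] + w, max_loc[1] + h), (0, 255, 0), 2)
    cv2.imshow("detection", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
```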
One work focused on the detection and counting of objects of a predefined type in an image frame using colour-based segmentation and the circular Hough transform [20]. Illumination poses a limitation for this algorithm, since it alters the colour of the objects in the image frame. The thresholding-based algorithm is a very basic technique for image segmentation that separates the foreground [28]. Segmentation techniques based on the colour property of an object [25], the watershed methodology [26], graph-based segmentation [24][30], and region growing [27][29] are now popularly used to improve the reliability of the segmentation process. A watershed segmentation scheme based on markers was proposed in [15] to provide better segmentation results; however, it suffers from over-segmentation and poor efficiency. A rain-falling watershed algorithm that controls the number of catchment basins by setting a drowning-level criterion was proposed in [16]. An efficient watershed segmentation algorithm using the properties of Local Intensity Minima (LIM) and Morphological Gradient Direction (MGD) was proposed in [17]. The watershed methodology is an efficient technique for image segmentation in which the watershed lines divide individual catchment basins in a gradient image, and a closed contour can be generated for each region in the image frame [21]. A classifier based on morphology is used to detect the object in the area of interest [22]. The expected colour clusters are correctly identified by this method, and segmentation of colour images is done efficiently by Markovian labelling, but the complete implementation requires a lot of memory and time to execute [22]. Some works have presented statistical-model-based colour image segmentation algorithms.
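To illustrate the colour-segmentation-plus-Hough-transform pipeline of [20], here is a brief OpenCV sketch that counts circular objects of an assumed colour. The input file name, HSV range, and Hough parameters are illustrative assumptions, not values taken from the cited work.

```python
import cv2
import numpy as np

# Colour-based segmentation followed by a circular Hough transform (sketch).
frame = cv2.imread("scene.png")                      # assumed input image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Keep only pixels whose colour matches the target object (assumed red-ish range).
lower, upper = np.array([0, 120, 70]), np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.medianBlur(mask, 5)                       # suppress speckle noise

# Detect circular objects in the segmented mask and count them.
circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=30,
                           param1=100, param2=20, minRadius=10, maxRadius=80)
count = 0 if circles is None else circles.shape[1]
print("objects of the predefined type:", count)
```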
Smart cameras are equipped with a high-performance on-board computing and communication infrastructure, combining video sensing, processing, and communication in a single embedded device. Establishing synchronization among many cameras provides access to multiple views of an object, so a network of embedded cameras can potentially support more complex and challenging applications than a single camera, including smart rooms, surveillance, tracking, and motion analysis. A smart camera is a fully embedded system, designed with attention to power consumption, QoS management, and limited resources. The camera is a scalable, embedded, high-performance, multiprocessor platform consisting of a network processor and a variable number of digital signal processors (DSPs). Using the implemented software framework, the embedded cameras offer system-level services such as dynamic load distribution and task reconfiguration [11].
Embedded platforms often provide limited resources, so the deployment of advanced computer vision methods on them is very challenging [12]. Smart cameras combine video sensing, processing, and communication on a single embedded device that is equipped with a multiprocessor computation and communication infrastructure. The multicamera tracking method focuses on a fully decentralized handover procedure between adjacent cameras. The basic idea is to initiate a single tracking instance in the multicamera system for each object of interest. The tracker follows the supervised object over the camera network, migrating to the camera which currently observes the object. Thus, no central coordination is required, resulting in an autonomous and scalable tracking approach. The CamShift algorithm is implemented to track the object in the real world; the handover procedure is realized using a mobile agent system available on the smart camera network. The system was successfully tested by tracking persons on a college campus.
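The per-camera tracking step can be illustrated with OpenCV's CamShift. The sketch below covers only single-camera tracking and omits the mobile-agent handover of [12]; the video source and the initial object window are assumptions.

```python
import cv2
import numpy as np

# Single-camera CamShift tracking sketch (assumed video file and initial box).
cap = cv2.VideoCapture("person.avi")                  # assumed video source
ok, frame = cap.read()
x, y, w, h = 200, 150, 60, 120                        # assumed initial object window

# Build a hue histogram of the object; CamShift tracks its back-projection.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

window = (x, y, w, h)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
    # CamShift adapts the window size and orientation to the tracked object.
    rot_box, window = cv2.CamShift(backproj, window, term)
    pts = cv2.boxPoints(rot_box).astype(np.int32)
    cv2.polylines(frame, [pts], True, (0, 255, 0), 2)
    cv2.imshow("camshift", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
```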
An algorithm for detecting small objects in FLIR (Forward Looking Infrared) image sequences is described in [9]; the authors used a wavelet-based filter to increase the robustness of their algorithm.

A 2-D adaptive lattice algorithm that improves image quality by removing clutter noise is presented in [14], but it is computationally expensive and hence not suitable for real-time applications. Moreover, the detected objects are too small to recover complete contour information: only the rough regions of the targets (called regions of interest, ROIs) can be obtained by a thresholding operation. Hence, it is necessary to further extract more exact contour information from the ROI using an efficient segmentation algorithm. The watershed transformation is a powerful tool for image segmentation, in which the watershed lines divide individual catchment basins in a gradient image.
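The ROI-then-watershed refinement described above can be sketched with OpenCV's marker-controlled watershed. This is a generic pipeline rather than the exact scheme of [15] or [21]; the input file and the distance-transform threshold are assumptions.

```python
import cv2
import numpy as np

# Rough ROI by Otsu thresholding, then marker-controlled watershed (sketch).
img = cv2.imread("target_roi.png")                       # assumed ROI image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Sure foreground from the distance transform, sure background by dilation.
dist = cv2.distanceTransform(binary, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.5 * dist.max(), 255, 0)   # assumed 0.5 factor
sure_fg = sure_fg.astype(np.uint8)
sure_bg = cv2.dilate(binary, np.ones((3, 3), np.uint8), iterations=3)
unknown = cv2.subtract(sure_bg, sure_fg)

# Markers: each sure-foreground blob gets a label; uncertain pixels get 0.
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0

# Watershed floods from the markers; boundary pixels are labelled -1.
markers = cv2.watershed(img, markers)
img[markers == -1] = (0, 0, 255)                          # draw watershed lines
cv2.imwrite("segmented.png", img)
```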
III. METHODOLOGIES

The Viola-Jones algorithm is based on four stages: Haar feature selection, creation of an integral image, AdaBoost training, and cascading classifiers. Viola-Jones has very high detection rates, which makes it very robust, and for practical applications it processes at least 2 frames per second. It also gives a very low false-positive rate. Viola-Jones is mainly used for face detection [1]. Table I gives a more detailed discussion of the results.
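A minimal sketch of applying a pretrained Viola-Jones (Haar cascade) face detector with OpenCV follows; it assumes the opencv-python distribution that ships the pretrained cascades, and the input image name and detectMultiScale parameters are illustrative.

```python
import cv2

# Pretrained Viola-Jones (Haar cascade) face detection sketch.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

img = cv2.imread("group_photo.jpg")                      # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# The cascade evaluates Haar features over an integral image, so each feature
# costs constant time; detectMultiScale scans positions and scales.
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                  minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imwrite("faces.jpg", img)
```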
Feature-Based Method: feature points are used for object detection by detecting a set of features in a reference image, extracting feature descriptors, and matching features between the reference image and an input image. This method of object detection can detect reference objects despite scale and orientation changes and is robust to partial occlusions [2].
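A minimal sketch of this detect-describe-match pipeline is given below. It uses ORB features purely for illustration (the cited works use other feature sets), and the file names, ratio-test value, and match-count threshold are assumptions.

```python
import cv2

# Feature-based object detection sketch: detect, describe, and match keypoints.
reference = cv2.imread("reference_object.png", cv2.IMREAD_GRAYSCALE)  # assumed
scene = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)                 # assumed

orb = cv2.ORB_create(nfeatures=1000)
kp_ref, des_ref = orb.detectAndCompute(reference, None)
kp_scn, des_scn = orb.detectAndCompute(scene, None)

# Match descriptors and keep only matches that pass Lowe's ratio test.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
matches = matcher.knnMatch(des_ref, des_scn, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Assumed decision rule: enough good matches means the object is present.
print("reference object present" if len(good) > 20 else "not found",
      f"({len(good)} good matches)")
```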
Template Matching: a median-approximation technique is used to track a single object in a sequence of frames from either a live camera or a previously saved video. After the object is detected, a Kalman filter combined with template matching is used to track it, and the templates are dynamically generated for this purpose. According to the experiments and results, the system is extremely fast, has robust performance, and is very cost effective [6].
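A simplified sketch of this idea, combining a constant-velocity Kalman filter with a dynamically refreshed template, is given below. It omits the approximate-median background model of [6], and the video file, initial box, noise covariances, and confidence threshold are assumptions.

```python
import cv2
import numpy as np

# Kalman-assisted template tracking sketch with a dynamically updated template.
cap = cv2.VideoCapture("traffic.avi")               # assumed video
ok, frame = cap.read()
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
x, y, w, h = 100, 100, 40, 40                       # assumed initial object box
template = gray[y:y + h, x:x + w]

# 4 states (x, y, vx, vy), 2 measurements (x, y), constant-velocity model.
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                                [0, 0, 1, 0], [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = 1e-2 * np.eye(4, dtype=np.float32)   # assumed noise level
kf.statePost = np.array([[x], [y], [0], [0]], np.float32)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    kf.predict()
    scores = cv2.matchTemplate(gray, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, (mx, my) = cv2.minMaxLoc(scores)
    if max_val > 0.6:                                # confident match: correct + refresh
        kf.correct(np.array([[np.float32(mx)], [np.float32(my)]]))
        template = gray[my:my + h, mx:mx + w]        # dynamic template update
    px, py = int(kf.statePost[0, 0]), int(kf.statePost[1, 0])
    cv2.rectangle(frame, (px, py), (px + w, py + h), (0, 255, 0), 2)
    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()
```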
Blob Analysis Approach: blob detection finds regions in a digital image that differ in properties such as brightness or colour from their surroundings. Blob detection methods can be categorised in two ways: differential methods and local-extrema methods. The Laplacian of Gaussian is the most common detector, and a watershed-based analogy is also used in some papers for blob detection [3].
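The following sketch illustrates blob analysis with OpenCV's SimpleBlobDetector; the cited works use Laplacian-of-Gaussian and watershed-style detectors, so this is an illustration of the idea rather than their exact method, and the input file and parameters are assumptions.

```python
import cv2

# Blob analysis sketch: find bright regions that differ from their surroundings.
params = cv2.SimpleBlobDetector_Params()
params.filterByArea = True
params.minArea = 50                      # assumed: ignore tiny speckles
params.filterByColor = True
params.blobColor = 255                   # detect bright blobs on a dark background

detector = cv2.SimpleBlobDetector_create(params)
gray = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)   # assumed input image
keypoints = detector.detect(gray)

out = cv2.drawKeypoints(gray, keypoints, None, (0, 0, 255),
                        cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
print("blobs found:", len(keypoints))
cv2.imwrite("blobs.png", out)
```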
Table II shows the results from running three detectors. The detection rates show that there is an improvement in the detection rate, and the false-positive rate is seen to be minimal. According to the authors' suggestions, independent detectors would show further improvement.

Table II. Detection rates of the detectors (each column corresponds to a larger number of allowed false detections, increasing from left to right).

Detector                  Detection rate
Viola-Jones               76.1%   88.4%   91.4%   92.0%   92.1%   92.9%   93.9%
Viola-Jones (voting)      81.1%   89.7%   92.1%   93.1%   93.1%   93.2%   93.7%
Rowley-Baluja-Kanade      83.2%   86.0%   -       -       -       89.2%   90.1%
Schneiderman-Kanade       -       -       -       94.4%   -       -       -
Roth-Yang-Ahuja           -       -       -       -       94.8%   -       -
The following perturbations are considered in the template matching experiments:

1. Noise: additive noise, with 10% and 1% of the pixels set to either the maximum or minimum value.
2. Small Rotation: counter-clockwise rotation of 10 degrees.
3. Translation: translation by 2 pixels left and 2 pixels up.
4. Small Rotation + Translation: the combination of the small rotation and the translation above.
5. Rotation: clockwise rotation of 20 degrees.

Template matching is done using normalized cross correlation and Fourier transform + phase correlation. Normalized cross correlation is a widely used method:

C(u,v) = \frac{\sum_{i=-h}^{h}\sum_{j=-w}^{w} X(i,j)\,T(i,j)}{\sqrt{\sum_{i=-h}^{h}\sum_{j=-w}^{w} X(i,j)^{2}\;\sum_{i=-h}^{h}\sum_{j=-w}^{w} T(i,j)^{2}}}

with X(i,j) = x(u+i, v+j) - \bar{x}, where \bar{x} is the mean of the image window under the template and T is the template.
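For reference, a small NumPy sketch of this normalized cross correlation follows. Here the template is also mean-subtracted (matching OpenCV's TM_CCOEFF_NORMED convention), and the toy data are purely illustrative.

```python
import numpy as np

def ncc(image_window, template):
    """Normalized cross correlation of an image window with a same-sized
    template, following the equation above; zero-variance input returns 0."""
    X = image_window - image_window.mean()   # X(i, j) = x(u+i, v+j) - x_bar
    T = template - template.mean()           # zero-mean template
    denom = np.sqrt((X ** 2).sum() * (T ** 2).sum())
    return (X * T).sum() / denom if denom > 0 else 0.0

# Usage with assumed toy data: score every placement of a 3x3 template over a
# 5x5 image; the best match should be found where the template was cut out.
rng = np.random.default_rng(0)
image = rng.random((5, 5))
template = image[1:4, 1:4].copy()            # the best match is at (1, 1)
scores = np.array([[ncc(image[u:u + 3, v:v + 3], template)
                    for v in range(3)] for u in range(3)])
print(np.unravel_index(scores.argmax(), scores.shape))   # -> (1, 1)
```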
REFERENCES

[1] Viola, Paul, and Michael Jones. "Rapid object detection using a boosted cascade of simple features." Computer Vision and Pattern Recognition (CVPR 2001). Proceedings of the 2001 IEEE Computer Society Conference on. Vol. 1. IEEE, 2001.

[2] Lienhart, Rainer, and Jochen Maydt. "An extended set of Haar-like features for rapid object detection." Image Processing. 2002. Proceedings. 2002 International Conference on. Vol. 1. IEEE, 2002.

[3] Nanda, Harsh, and Larry Davis. "Probabilistic template based pedestrian detection in infrared videos." IEEE Intelligent Vehicle Symposium. Vol. 1. 2002.

[4] Cucchiara, Rita, et al. "Detecting moving objects, ghosts, and shadows in video streams." IEEE Transactions on Pattern Analysis and Machine Intelligence 25.10 (2003): 1337-1342.

[5] Srivastava, Hari Babu. "Image pre-processing algorithms for detection of small/point airborne targets." Defence Science Journal 59.2 (2009): 166.

[6] Rao, G. Mallikarjuna, and Ch. Satyanarayana. "Object Tracking System Using Approximate Median Filter, Kalman Filter and Dynamic Template Matching." International Journal of Intelligent Systems and Applications 6.5 (2014): 83.

[7] Hsieh, Feng-Yang, et al. "A novel approach to noise removal and detection of small objects with low contrast." Engineering, WIAMIS (2004).

[8] Ivanov, Denis. "Implementation of Object Detection Algorithm on Low Performance Embedded Systems."

[9] Litzenberger, M., et al. "Embedded vision system for real-time object tracking using an asynchronous transient vision sensor." 12th Digital Signal Processing Workshop & 4th IEEE Signal Processing Education Workshop. IEEE, 2006.

[10] Kiritsis, Dimitris, Ahmed Bufardi, and Paul Xirouchakis. "Research issues on product lifecycle management and information tracking using smart embedded systems." Advanced Engineering Informatics 17.3 (2003): 189-202.

[11] Bramberger, Michael, et al. "Distributed embedded smart cameras for surveillance applications." Computer 39.2 (2006): 68-75.

[12] Quaritsch, Markus, et al. "Autonomous multicamera tracking on embedded smart cameras." EURASIP Journal on Embedded Systems 2007.1 (2007): 1-10.

[13] D. Davies, P. Palmer and M. Mirmehdi, "Detection and tracking of very small low contrast objects," Proc. British Machine Vision 9th Conf., pp. 599-608, Sep. 1998.

[14] P. A. Ffrench, J. R. Zeidler, and W. H. Ku, "Enhanced detectability of small objects in correlated clutter using an improved 2-D adaptive lattice algorithm," IEEE Trans. on Image Processing, vol. 6, no. 3, pp. 383-397, 1997.

[15] S. Beucher, "The watershed transformation applied to image segmentation," Proc. Pfefferkorn Conf. on Signal and Image Processing in Microscopy and Microanalysis, pp. 299-314, Sep. 1991.

[16] P. D. Smet, R. Luis, and V. P. M. Pires, "Implementation and analysis of an optimized rainfalling watershed algorithm," Proc. Science and Technology Conf.: Image and Video Communications and Processing, Jan. 2000.

[17] S. E. Hernandez and K. E. Barner, "Joint region merging criteria for watershed-based image segmentation," Proc. IEEE International Conf. on Image Processing, vol. 2, pp. 108-111, 2000.

[18] Sudhir D. Zaware, Prajwal G. Awade, Chinmay A. Joshi, and R. V. Tornekar, "Image Processing Algorithm for Robotics on Embedded System," International Journal of Industrial Electronics and Electrical Engineering, ISSN: 2347-6982, Volume-2, Issue-12, Dec. 2014.

[19] Viola, Paul. "Feature-based recognition of objects." Proc. of the AAAI Fall Symposium on Learning and Computer Vision. 1993.

[20] Hussin, R., et al. "Digital image processing techniques for object detection from complex background image." Procedia Engineering 41 (2012): 340-344.

[21] L. Vincent and P. Soille, "Watersheds in digital spaces: An efficient algorithm based on immersion simulations," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 13, no. 6, pp. 583-598, Jun. 1991.

[22] Géraud, Thierry, P.-Y. Strub, and Jérôme Darbon. "Color image segmentation based on automatic morphological clustering." Image Processing, 2001. Proceedings. 2001 International Conference on. Vol. 3. IEEE, 2001.

[23] Cho, Wan Hyun, Soon Young Park, and Jong Hyun Park. "Segmentation of color image using deterministic annealing EM." Pattern Recognition, 2000. Proceedings. 15th International Conference on. Vol. 3. IEEE, 2000.

[24] Felzenszwalb, Pedro F., and Daniel P. Huttenlocher. "Efficient graph-based image segmentation." International Journal of Computer Vision 59.2 (2004): 167-181.

[25] Chiu, Kuo-Yu, and Sheng-Fuu Lin. "Lane detection using color-based segmentation." Intelligent Vehicles Symposium, 2005. Proceedings. IEEE, 2005.
[26] Levner, Ilya, and Hong Zhang. "Classification-driven watershed segmentation." IEEE Transactions on Image Processing 16.5 (2007): 1437-1445.

[27] Tang, Jun. "A color image segmentation algorithm based on region growing." Computer Engineering and Technology (ICCET), 2010 2nd International Conference on. Vol. 6. IEEE, 2010.

[28] Lee, Sang Uk, Seok Yoon Chung, and Rae Hong Park. "A comparative performance study of several global thresholding techniques for segmentation." Computer Vision, Graphics, and Image Processing 52.2 (1990): 171-190.

[29] Fan, Jianping, et al. "Automatic image segmentation by integrating color-edge extraction and seeded region growing." IEEE Transactions on Image Processing 10.10 (2001): 1454-1466.

[30] Grundmann, Matthias, et al. "Efficient hierarchical graph-based video segmentation." Computer Vision and Pattern Recognition (CVPR), 2010 IEEE Conference on. IEEE, 2010.