
Multiple Object Tracking Under Occlusion
Minor Project Report
Submitted in partial fulfilment of the requirements
for the award of the degree of

MASTER OF TECHNOLOGY
IN
COMPUTER SCIENCE ENGINEERING
By
RAHUL PAUL
(Admission No. 2013MT0254)

Under the Guidance of


Dr. Sushanta Mukhopadhyay
Associate Professor

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


INDIAN SCHOOL OF MINES, DHANBAD 826004
NOVEMBER 2014

Department of Computer Science & Engineering


I.S.M. Dhanbad 826004 Jharkhand, INDIA

CERTIFICATE
This is to certify that the project entitled MULTIPLE OBJECT
TRACKING UNDER OCCLUSION, submitted by RAHUL PAUL
(2012MT0253), Department of Computer Science & Engineering,
Indian School of Mines, Dhanbad, in partial fulfilment of the
requirements for the award of the degree of Master of Technology
in Computer Science Engineering from Indian School of Mines,
Dhanbad, has been successfully completed as the Minor Project
during the academic year 2012-2013.

(Dr. Chiranjeev Kumar)                         (Dr. Sushanta Mukhopadhyay)
Head of the Department                         Associate Professor
Department of CSE                              Department of CSE
Indian School of Mines                         Indian School of Mines
ACKNOWLEDGEMENT
I wish to extend my gratitude to Dr. Sushanta Mukhopadhyay, Associate
Professor, Department of Computer Science and Engineering, for his cooperation,
consistent guidance and valuable suggestions in various aspects of this project work.
I am also thankful to Dr. Chiranjeev Kumar, Head of the Department of
Computer Science Engineering, Indian School of Mines, Dhanbad, for his
valuable support. I am also thankful to Prof. G. P. Biswas, Prof. P. K. Jana and
all the faculty members of the department for providing guidance from time to
time during this period. I want to thank all the members of the Department who
helped me towards the completion of my project.
Last of all, I thank all of my family members and my teachers; without their
moral support this work would not have been possible.

(Rahul Paul)

Dedicated to my beloved parents

ABSTRACT
In video surveillance, the detection of moving objects from an
image sequence is very important for target tracking, activity
recognition and behaviour understanding. This report presents
an object tracking method applied after removing the temporal
redundancy present between the frames of a video sequence.
There are two kinds of redundancy in a video sequence that aid
its compression, namely temporal and spatial redundancy. This
report focuses on the study of motion estimation and
compensation to eliminate temporal redundancy and thereby
improve the quality of the reconstructed frames. It is also
important to maintain the identities of multiple targets while
tracking them. A feature-based algorithm using a Kalman filter
motion model is proposed to handle multiple object tracking.
The method is capable of tracking fast and slow moving objects,
as well as objects that disappear and later reappear. The
proposed algorithm is validated on human and vehicle image
sequences.

Table of Contents
Acknowledgments
Dedication
Abstract
Chapter
1: Introduction
2 : Literature survey
2.1 Point Tracking
2.2 Kernel Tracking
2.3 Silhouette Tracking
3 : Related Work
3.1 Motion estimation
3.2 Gaussian Mixture Model
3.3 Non Linear Diffusion
3.4 Kalman Filter
3.5 Occlusion Problem
4 : Implementation and Results
4.1 Implementation Method
4.2 Results
5 : Conclusion
5.1 Conclusion
5.2 References

1. Chapter 1
1. Introduction
In recent years, new video services such as video conferencing, video on demand, remote
monitoring and remote medical treatment have continued to grow. These services place high
requirements on real-time transmission and video quality, while the inherent limitations of the
Internet and wireless networks pose significant problems for video applications. Efficient video
compression is therefore a problem that must be solved. From the point of view of information
theory, video compression is achieved by removing redundant information from the video and
replacing the original, redundant description with a more compact one. Statistical (coding)
redundancy arises from the non-uniform probability distribution of the encoded video data and
can be removed by entropy coding. Spatial and temporal redundancy arise mainly from the
correlation within a frame (intra) and between frames (inter). Spatial redundancy comes from
the correlation between adjacent pixels and adjacent rows of the same frame, and can be
eliminated using spatial transforms and vector quantization. Temporal redundancy comes from
the similarity between adjacent frames: most regions of a video sequence change slowly and the
background is almost unchanged, so the difference between adjacent frames is small and the
frames are highly similar. Motion estimation and compensation can effectively remove this
temporal redundancy. In a video sequence, inter-frame temporal redundancy is far greater than
intra-frame spatial redundancy and coding redundancy; therefore, motion estimation is a very
important module in a video compression coding system, and it directly affects the efficiency
and quality of video data compression. The more accurate the motion estimation, the higher the
coding efficiency and the better the decoded video quality.
Tracking of moving objects in video image sequences is one of the most important subjects in
computer vision. It has already been applied in many fields such as video surveillance, artificial
intelligence, safety detection and robot navigation. In recent years a number of successful single
object tracking systems have appeared, but in the presence of several objects the problem
becomes one of multiple object tracking, where targets and observations need to be matched
from frame to frame in a video sequence. Multiple object tracking is still a challenging job.
To deal with these problems researchers have done a lot of work and achieved good results.
Nguyen et al. [1] used a Kalman filter in a distributed tracking system for tracking multiple
moving objects. In [4], a video surveillance system is proposed in which detection, recognition
and tracking of objects is carried out; multiple objects are tracked using a constant-velocity
Kalman algorithm, and the performance of the approach depends on the proposed detection
and recognition algorithms. In another work [], a vector Kalman filter is proposed for tracking
objects; separate methods for occlusion and merging are applied to handle confusing situations,
and the states of the corresponding moving objects are searched using spiral searching prior to
tracking. In this report, we use the Kalman filter to establish an object motion model and use
the current object information to predict the object position, so that we can reduce the search
scope and search time for a moving object and achieve fast tracking.
Segmenting foreground objects from a video sequence is a fundamental step in many
computer vision applications such as video understanding, video conferencing and traffic
monitoring. In this context, many issues (such as illumination changes, reflections, shadows,
objects that have been moved, sleeping foreground, and so on) make obtaining highly accurate
foreground segmentation difficult and error-prone. To tackle these problems, intensive research
has been conducted. Among the proposed methods, GMM-based background subtraction offers
some robustness against changes in the background. Despite the popularity of this method, the
background/foreground discrimination still leaves room for further improvement.
Experimental results show that the proposed method is able to ensure efficient and robust
tracking with merging and splitting of multiple objects.

2. Chapter 2
Literature Survey
The aim of an object tracker is to generate the trajectory of an object over time by locating its
position in every frame of the video. An object tracker may also provide the complete region of
the image that is occupied by the object at every time instant. The tasks of detecting the object
and establishing correspondence between object instances across frames can be performed
either separately or jointly. In the first case, possible object regions in every frame are obtained
by means of an object detection algorithm, and the tracker then corresponds objects across
frames. In the latter case, the object region and the correspondence are jointly estimated by
iteratively updating the object location and region information obtained from previous frames.
In either approach, the objects are represented using suitable shape and/or appearance models.
The model selected to represent object shape limits the type of motion or deformation the
object can undergo. For example, if an object is represented as a point, then only a translational
model can be used. When a geometric shape representation such as an ellipse is used for the
object, parametric motion models like affine or projective transformations are appropriate;
these representations can approximate the motion of rigid objects in the scene. For a nonrigid
object, the silhouette or contour is the most descriptive representation, and both parametric and
nonparametric models can be used to specify its motion.
Types of Object Tracking

Figure 1: Taxonomy of tracking methods: point tracking (deterministic, probabilistic), kernel tracking (template based, multi-view based) and silhouette tracking (contour evolution, shape matching).

2.1 Point Tracking


Objects detected in consecutive frames are represented by points, and the association of
the points is based on the previous object state, which can include object position and motion.
This approach requires an external mechanism to detect the objects in every frame. Point
tracking methods are divided into two categories, namely deterministic and probabilistic
approaches.
Deterministic methods for point tracking define a cost for associating each object in frame t-1
to a single object in frame t using a set of motion constraints. Some deterministic trackers are
the MGE tracker (Salari and Sethi) and the GOA tracker (Veenman et al.).
Probabilistic approaches use the state space approach to model object properties such
as position, velocity and acceleration. Measurements usually consist of the object position
in the image, which is obtained by a detection mechanism. Some probabilistic trackers are
the Kalman filter (Broida and Chellappa) and JPDAF (Bar-Shalom and Fortmann).
2.2 Kernel Tracking
Kernel tracking is typically performed by computing the motion of the object, which is
represented by a primitive object region, from one frame to the next. The object motion is
generally in the form of a parametric motion (translation, affine, etc.) or the dense flow field
computed in subsequent frames. Kernel tracking is divided into two categories, namely
template matching and multi-view based tracking.
Template matching is a brute-force method of searching the image I for a region similar to the
object template O defined in the previous frame. The position of the template in the current
image is computed by a similarity measure. Some template-based trackers are Mean Shift
(Comaniciu et al.) and KLT (Shi and Tomasi).
In the previous methods the appearance models, that is, histograms, templates, etc., are usually
generated online. Thus these models represent the information gathered about the object from
the most recent observations. The object may appear different from different views, and if
the object view changes dramatically during tracking, the appearance model may no longer
be valid and the object track might be lost. To overcome this problem, different views
of the object can be learned offline and used for tracking. Some multi-view based
approaches are the SVM tracker (Avidan) and Eigentracking (Black and Jepson).

2.3 Silhouette Tracking

Objects may have complex shapes, for example hands and shoulders, that cannot be
well described by simple geometric shapes. Silhouette-based methods provide an accurate
shape description for such objects. The goal of a silhouette-based object tracker is to find
the object region in each frame by means of an object model generated using previous
frames. We divide silhouette trackers into two categories, namely shape matching and contour
tracking. Shape matching approaches search for the object silhouette in the current frame.
Contour tracking approaches, on the other hand, evolve an initial contour to its new position
in the current frame by either using state space models or by direct minimization of some
energy functional.

3. Chapter 3
RELATED WORK
3.1 Motion parameter Estimation
The underlying assumption behind motion estimation is that the patterns corresponding to
objects and background in a frame of a video sequence move within the frame to form the
corresponding objects in the subsequent frame. The main idea behind block matching is to
divide the current frame into non-overlapping sub-blocks that are assumed to move
consistently. Block matching models only translational motion: it ignores scaling and rotation
and assumes that the pixel values of a moving object do not change over time.
The matching block in the reference frame is the block at the position occupied by the current
block before its displacement. The motion vector is the displacement from the current block to
the best matched block in the reference frame; it consists of a pair of horizontal and vertical
displacement values. Usually a search window is defined to confine the search, and the same
motion vector is assigned to all pixels within a block. In motion compensation, the matched
block, the block motion vector and the residual pixel values are sufficient to fully recover the
pixel values of the current block. The block matching relationship between the current block
and the reference block is shown in Figure 2.

Figure 2: Block matching relationship between the current and reference blocks and the motion vector
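To make the block matching step concrete, the following is a minimal Python/NumPy sketch of exhaustive block matching with the sum of absolute differences (SAD) criterion; the block size, search range and function name are illustrative assumptions rather than the exact implementation used in this project.

```python
import numpy as np

def block_match(ref, cur, block=16, search=8):
    """Exhaustive block matching: returns an array of motion vectors (dy, dx) per block.
    ref, cur: greyscale frames as 2-D float arrays of identical size."""
    h, w = cur.shape
    mvs = np.zeros((h // block, w // block, 2), dtype=int)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur_blk = cur[by:by + block, bx:bx + block]
            best_sad, best_dy, best_dx = np.inf, 0, 0
            # search window around the co-located block in the reference frame
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue
                    sad = np.abs(ref[y:y + block, x:x + block] - cur_blk).sum()
                    if sad < best_sad:
                        best_sad, best_dy, best_dx = sad, dy, dx
            mvs[by // block, bx // block] = (best_dy, best_dx)
    return mvs
```

The motion compensated prediction of the current frame is then formed by copying each best matched reference block to the position of the corresponding current block, and the residual is the difference between the current frame and this prediction.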

3.2 Gaussian Mixture Model

Mixture models are a type of density model comprising a number of component
functions, usually Gaussians. These component functions are combined to provide a
multimodal density. They can be employed to model the colours of an object in order to
perform tasks such as real-time colour-based tracking and segmentation. These tasks may
be made more robust by generating a mixture model corresponding to background colours
in addition to a foreground model, and employing Bayes' theorem to perform pixel
classification.
A Gaussian Mixture Model (GMM) is a parametric probability density function
represented as a weighted sum of Gaussian component densities. GMMs are commonly
used as a parametric model of the probability distribution of continuous measurements or
features, for example in biometric systems or in colour-based tracking of an object in video.
Pixel values that do not fit the background distributions are considered foreground. This is a
common method for real-time segmentation of moving regions in image sequences. The model
Gaussians are updated using a K-means approximation method. Each Gaussian distribution in
the adaptive mixture model is assigned to represent either the background or a moving object,
and every pixel is then evaluated and classified as a moving region or as background.
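The report does not list its implementation code, but the GMM-based foreground segmentation described above can be sketched with OpenCV's adaptive mixture-of-Gaussians background subtractor (MOG2); the file name and parameter values below are assumptions for illustration.

```python
import cv2

# A minimal sketch of GMM-based foreground segmentation using OpenCV's
# adaptive mixture-of-Gaussians model (MOG2); parameter values are assumptions.
cap = cv2.VideoCapture("traffic.avi")          # hypothetical input file
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25,
                                                detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg_mask = subtractor.apply(frame)          # 255 = foreground, 127 = shadow, 0 = background
    fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)[1]  # drop shadow pixels
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(30) == 27:                  # Esc to quit
        break
cap.release()
cv2.destroyAllWindows()
```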
3.3 Non-Linear Diffusion

In image processing, anisotropic diffusion is a technique aimed at reducing image noise
without removing significant parts of the image content, typically edges, lines or other
details that are important for the interpretation of the image. The main idea behind
nonlinear diffusion is to use nonlinear PDEs to create a scale-space representation that
consists of gradually simplified images in which some image features are maintained or even
enhanced. Here the anisotropic diffusion model proposed by Perona and Malik is implemented.
In their formulation, they replaced the constant diffusion coefficient of the linear equation by a
smooth non-increasing diffusivity function g with g(0) = 1, g(s) ≥ 0 and lim_{s→∞} g(s) = 0.
As a consequence, the diffusivities become variable in both space and time. The Perona-Malik
equation is

∂u/∂t = div( g(|∇u|) ∇u )                                                        (1)

with homogeneous Neumann boundary conditions and the initial condition u(x, 0) = f(x), where
f denotes the input image.
Perona and Malik suggest two different choices for the diffusivity function:

g(s) = 1 / (1 + s²/λ²)                                                           (2)

g(s) = exp(−s²/λ²)                                                               (3)

where λ corresponds to a contrast parameter. These functions share similar characteristics
and result in similar effects on the diffusivities.
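A compact NumPy sketch of an explicit Perona-Malik scheme corresponding to equations (1)-(3) is given below; the number of iterations, the time step and the contrast parameter (kappa, playing the role of λ) are assumed values, not parameters reported in this work.

```python
import numpy as np

def perona_malik(img, n_iter=15, kappa=20.0, dt=0.2, option=2):
    """Explicit Perona-Malik anisotropic diffusion on a 2-D greyscale image.
    option=1 uses g(s) = exp(-(s/kappa)^2), option=2 uses g(s) = 1/(1+(s/kappa)^2)."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences in the four directions
        # (np.roll wraps at the borders; for strict Neumann conditions pad with edge replication)
        dN = np.roll(u, -1, axis=0) - u
        dS = np.roll(u,  1, axis=0) - u
        dE = np.roll(u, -1, axis=1) - u
        dW = np.roll(u,  1, axis=1) - u
        if option == 1:
            gN, gS = np.exp(-(dN / kappa) ** 2), np.exp(-(dS / kappa) ** 2)
            gE, gW = np.exp(-(dE / kappa) ** 2), np.exp(-(dW / kappa) ** 2)
        else:
            gN, gS = 1.0 / (1.0 + (dN / kappa) ** 2), 1.0 / (1.0 + (dS / kappa) ** 2)
            gE, gW = 1.0 / (1.0 + (dE / kappa) ** 2), 1.0 / (1.0 + (dW / kappa) ** 2)
        # divergence of g(|grad u|) * grad u, approximated per direction
        u += dt * (gN * dN + gS * dS + gE * dE + gW * dW)
    return u
```

The time step is kept small (dt = 0.2 here) because the explicit four-neighbour scheme is stable only for small time steps.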
3.4 Kalman Filter

The Kalman filter is useful for tracking different types of moving objects. It was
originally developed by Rudolf Kalman and was used at NASA to track the trajectories of
spacecraft. At its heart, the Kalman filter is a method of combining noisy (and possibly
missing) measurements and predictions of the state of an object to obtain an estimate of its
true current state. Kalman filters can be applied to many different types of linear dynamical
systems, and the state can refer to any measurable quantity, such as an object's location,
velocity, temperature, voltage, or a combination of these. The Kalman filter is a recursive
two-stage filter: at each iteration it performs a predict step and an update step.
The predict step predicts the current location of the moving object based on previous
observations. For instance, if an object is moving with constant acceleration, its current
location can be predicted from its previous location x̂_{t-1} using the equations of motion.
The update step takes the measurement of the object's current location (if available), z_t,
and combines it with the predicted current location x̂_{t|t-1} to obtain an a posteriori
estimate x̂_{t|t} of the current location of the object.
The equations that govern the Kalman filter are given below:
1. Predict stage:
   i.  Predicted (a priori) state: x̂_{t|t-1} = F_t x̂_{t-1|t-1} + B_t u_t
   ii. Predicted (a priori) estimate covariance: P_{t|t-1} = F_t P_{t-1|t-1} F_t^T + Q_t
2. Update stage:
   i.   Innovation or measurement residual: y_t = z_t - H_t x̂_{t|t-1}
   ii.  Innovation (or residual) covariance: S_t = H_t P_{t|t-1} H_t^T + R_t
   iii. Optimal Kalman gain: K_t = P_{t|t-1} H_t^T S_t^{-1}
   iv.  Updated (a posteriori) state estimate: x̂_{t|t} = x̂_{t|t-1} + K_t y_t
   v.   Updated (a posteriori) estimate covariance: P_{t|t} = (I - K_t H_t) P_{t|t-1}
where x̂_t is the current state vector as estimated by the Kalman filter at time t, z_t is the
measurement vector taken at time t, and P_t measures the estimated accuracy of x̂_t at time t.
F describes how the system moves (ideally) from one state to the next, i.e. how one state
vector is projected to the next, assuming no noise. H defines the mapping from the state
vector x̂_t to the measurement z_t. Q and R define the Gaussian process and measurement
noise, respectively, and characterize the variance of the system. B and u are control-input
parameters that are only used in systems that have an input; they can be ignored in the case
of an object tracker.

Figure 3. Kalman filter in steps
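To make the predict and update stages above concrete, here is a small self-contained NumPy sketch of a constant-velocity Kalman filter for a 2-D blob centroid; the state layout [x, y, vx, vy] and the noise covariances Q and R are illustrative assumptions, not the report's exact parameters.

```python
import numpy as np

class KalmanTracker2D:
    """Constant-velocity Kalman filter for a 2-D centroid; state = [x, y, vx, vy]."""
    def __init__(self, x0, y0, dt=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0], dtype=float)     # state estimate
        self.P = np.eye(4) * 100.0                              # estimate covariance
        self.F = np.array([[1, 0, dt, 0],                       # state transition
                           [0, 1, 0, dt],
                           [0, 0, 1, 0],
                           [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0],                        # measurement matrix
                           [0, 1, 0, 0]], dtype=float)
        self.Q = np.eye(4) * 0.01                               # process noise covariance
        self.R = np.eye(2) * 1.0                                # measurement noise covariance

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:2]                                       # predicted centroid

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x        # innovation
        S = self.H @ self.P @ self.H.T + self.R                 # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)                # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]                                       # corrected centroid
```

When the measurement is missing for a frame (for example during full occlusion), only predict() is called, so the track coasts along the motion model until the object reappears and update() can be applied again.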

3.5 Occlusion problem


In this work, GMM-based background subtraction is used to detect and extract moving
objects, and the detection results are used to determine whether an occlusion (merge) or a
split has occurred between multiple objects. When detection shows that the regions of
multiple objects have become connected, we consider that the object regions have merged:
the merged region is treated as a single object for tracking and a new feature value is
established for object matching. When an object that contains more than one moving object
splits into several independent moving objects, we first judge whether a merge occurred
earlier. If it did, the split objects are matched against the object features stored before the
merge. If not, the split objects are considered new: new feature values are established and
new tracking windows are assigned for tracking them. Figure 4 gives a two-object example
for illustration.
As shown in Figure 4, two moving objects TA and TB begin to occlude each other at time k1
and merge into one object T at time k2. Starting from time k2, object T is tracked as a new
moving object. From time k2 to time k3, objects TA and TB are also updated while object T
is updated; when the update has finished, object T becomes object T'. Object T' starts to split
between times k3 and k4 and splits into two objects TA' and TB'.
Figure 4: Illustration of merge and split. Objects TA and TB merge into a single object T between times k1 and k2, and the merged object later splits back into two objects between times k3 and k4.

Then we judge whether the splitting object is T' or not. If it is, we consider that the moving object
is the one that merged earlier; we then match TA' and TB' with TA and TB within a certain range
of object locations, establish and update the correspondence, and delete object T. If it is not, the
newly split objects are treated as new, and a new Kalman filter motion model is established for
each of them.
The algorithm is as follows (a sketch of the corresponding bookkeeping is given after this list):
1. If the current image is the first frame, establish a motion model and assign a tracking
   window for each moving object in the scene. If the current image is the k-th frame and a
   moving object does not fall into any of the established tracking windows, we consider it a
   new object, establish a new Kalman filter motion model and initialize the model for
   tracking.
2. Judge whether an occlusion has happened; if so, go to the merge or split treatment. If not,
   keep tracking the object until it disappears.
3. Proceed to the handling of the next frame until the object disappears; the tracking is then
   complete.
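One possible way to organize the merge/split bookkeeping of the algorithm above is sketched below. This is a deliberately simplified, self-contained Python illustration: the Track record, the fixed circular tracking window and the feature-distance matching are assumptions introduced for illustration, and the per-object Kalman correction from Section 3.4 is replaced by a direct centroid update for brevity.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Track:
    """Minimal track record: centroid estimate, feature vector, and merge parents."""
    centroid: np.ndarray
    feature: np.ndarray
    parents: list = field(default_factory=list)   # non-empty only for a merged track
    radius: float = 40.0                          # size of the tracking window (assumed)

    def window_contains(self, point):
        return np.linalg.norm(self.centroid - point) < self.radius

def handle_detections(tracks, detections):
    """detections: list of (centroid, feature) pairs from the segmented foreground blobs."""
    for centroid, feature in detections:
        centroid = np.asarray(centroid, dtype=float)
        owners = [t for t in tracks if t.window_contains(centroid)]
        if not owners:
            tracks.append(Track(centroid, feature))          # new object (step 1)
        elif len(owners) == 1:
            owners[0].centroid = centroid                    # normal correction step
            owners[0].feature = feature
        else:
            # several windows claim one blob: the objects have merged (occlusion)
            tracks.append(Track(centroid, feature, parents=owners))
    # resolve a merged track whose blob has split into several detections again
    for t in [t for t in tracks if t.parents]:
        near = [d for d in detections if t.window_contains(np.asarray(d[0], dtype=float))]
        if len(near) > 1:
            for parent in t.parents:                         # match split blobs to old identities
                best = min(near, key=lambda d: np.linalg.norm(parent.feature - d[1]))
                parent.centroid = np.asarray(best[0], dtype=float)
                parent.feature = best[1]
            tracks.remove(t)                                 # delete the temporary merged track
    return tracks
```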

4. Chapter 4
4.1 SCHEME OF IMPLEMENTATION
There are four major steps in achieving object tracking by this method, and each of these steps
consists of several methods used to improve the results. These steps are as follows:

1. Motion estimation and compensation between frames
2. Non-linear diffusion
3. Foreground segmentation using GMM
4. Tracking with a Kalman filter

The motion estimation and motion compensation blocks work only if there is a stored past
frame. So the question is: how do we encode the first frame of a video sequence, for which
there is no past frame reference? The answer is fairly straightforward. We treat the first frame
of a video sequence like a still image, where only the spatial, i.e. intra-frame, redundancy can
be exploited. Frames that use only intra-frame redundancy for coding are referred to as
intra-coded frames. The first frame of every video sequence is always an intra-coded frame.
From the second frame onwards, both temporal and spatial redundancy can be exploited.
Since these frames use inter-frame redundancy for data compression, they are referred to as
inter-coded frames.
The motion estimation block in a video codec computes the displacement between the current
frame and a stored past frame that is used as the reference; usually the immediately preceding
frame is used. The difference in position between a candidate block and its match in the
reference frame is defined as the displacement vector or, more precisely, the motion vector.
After determining the motion vectors, the current frame can be predicted by applying the
displacements corresponding to the motion vectors to the reference frame. This is the role of
the motion compensation unit: it composes how the current frame should have looked if the
corresponding displacements were applied at different regions of the reference frame.
Once the motion compensated frames are available, the nonlinear diffusion process is applied
to them to reduce the noise level in the frames. Significant parts of the objects, e.g. lines and
edges, are not attenuated by this process.
To track foreground objects across subsequent frames, the foreground must first be segmented
from the background. The Gaussian Mixture Model is applied here to separate the foreground
objects from the background. After successful separation of the foreground, binary images are
obtained. Binary images may contain numerous imperfections; in particular, the binary regions
produced by simple thresholding are distorted by noise and texture. Morphological image
processing removes these imperfections by accounting for the form and structure of the image.
The blobs are then detected and tracked using the Kalman filter, which consists of two stages:
prediction (the state is predicted with the dynamic model) and correction (the prediction is
corrected with the observation model so that the error covariance of the estimator is
minimized). This procedure is repeated for each time step, with the state of the previous time
step used as the initial state.
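The morphological clean-up and blob detection step can be sketched with OpenCV as follows; the structuring element size and the minimum blob area are assumed values rather than parameters taken from the report.

```python
import cv2

def clean_and_detect_blobs(fg_mask, min_area=150):
    """Apply morphological opening and closing to a binary foreground mask,
    then return the bounding boxes and centroids of the remaining blobs."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)    # remove isolated noise pixels
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)      # fill small holes inside objects
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    blobs = []
    for c in contours:
        if cv2.contourArea(c) < min_area:                       # discard tiny fragments
            continue
        x, y, w, h = cv2.boundingRect(c)
        blobs.append(((x, y, w, h), (x + w / 2.0, y + h / 2.0)))  # (bbox, centroid)
    return mask, blobs
```

Each returned centroid can then be passed to the per-object Kalman tracker of Section 3.4 as the measurement z_t, and the bounding box can serve as the tracking window.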

Figure 5: Implemented algorithm, step by step: video file → extract frames → motion estimation and compensation → reconstruction of frames (temporal redundancy removed) → non-linear diffusion → foreground segmentation using GMM → morphological operators (open, close) → blobs detected → Kalman filter → tracking objects.

4.2 Experimental Results:


Figure 6 shows the motion estimated and motion compensated frames and the motion vector
between frames 24 and 25.
A. Motion estimated and compensated frames

Figure 6: a) Frame 24, b) Frame 25, c) motion vector, d) motion compensated frame

Figure 7 shows the motion compensated frame after applying anisotropic diffusion. Here we
can see that the noise is reduced while edges and lines are preserved.
B. Non-linear Diffusion

Figure 7: Diffusion applied on motion compensated frame


Figure 8 shows the results for a video of a moving car in an outdoor scene. The car has been
tracked properly. In the background a person can be seen moving; as the person is not fully
visible, tracking does not take place there, but the person is visible in the segmented result.

C. Tracking a single object

Figure 8: a), b) Tracking result for the moving car sequence

Figure 9 shows the tracking results for a video of moving human bodies in an indoor scene in
which three persons are moving. In Figures 9c and 9d we can see that two persons cross each
other and both of them are tracked properly with separate bounding boxes. Occlusion is
handled quite well here.
D. Indoor moving human tracking

Figure 9: a) to d) Tracking results for the indoor human sequence

A traffic sequence is used to verify the Kalman filter based tracking method, with the tracking
results shown in Figure 10. We can see that the method has a good tracking speed and that it
can also track fast moving objects such as vehicles.
E. Tracking multiple cars

Figure 10: a) to d) Tracking results for the traffic sequence

5. Chapter 5
5.1 Conclusion:
In this work a multi-target tracking algorithm based on the Kalman filter has been
investigated. Before tracking commences, the temporal redundancy between frames is
removed using motion estimation and compensation. A Kalman filter motion model is
established, with the centroid of the moving target and the tracking window chosen as the
features. Information links are established through feature matching, and the matching results
are used to update the model of each moving object; the updated model is then used as the
input parameters for the next frame, so that continuous tracking of objects is achieved. The
detection information obtained is combined with tracking to judge whether there is an
occlusion between objects; a motion model is established for every newly appearing object,
and motion models are established for the moving objects that separate after an occlusion,
with the Kalman filter prediction used for matching and tracking. Experiments were carried
out on different scenes and objects, and the effectiveness and robustness of the algorithm
have been demonstrated.

5.2 References:
1. Hieu T. Nguyen and Arnold W. M. Smeulders, "Fast Occluded Object Tracking by a Robust
Appearance Filter", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 8, 2004.
2. Qiang Chen, Quan-Sen Sun, Pheng Ann Heng and De-Shen Xia, "Two-Stage Object Tracking
Method Based on Kernel and Contour", IEEE Trans. Circuits and Systems for Video Technology,
vol. 20, no. 4, 2010.
3. Chun-Te Chu, Jenq-Neng Hwang, Hung-I Pai and Kung-Ming Lan, "Tracking Human under
Occlusion Based on Adaptive Multiple Kernels with Projected Gradients", IEEE Trans.
Multimedia, vol. 15, no. 7, 2013.
4. Osama Masoud and Nicolas P. Papanikolopoulos, "A Novel Method for Tracking and Counting
Pedestrians in Real-Time Using a Single Camera", IEEE Trans. Vehicular Technology, vol. 50,
no. 5, 2001.
5. P. Perona and J. Malik, "Scale-Space and Edge Detection Using Anisotropic Diffusion", IEEE
Trans. Pattern Analysis and Machine Intelligence, vol. 12, 1990.
6. Rubin Chen, "Restoration Technique of Video Motion Image Estimation Based on Wavelet",
Journal of Multimedia, vol. 9, no. 5, 2014.
7. D. Zhou and H. Zhang, "Modified GMM Background Modelling and Optical Flow for
Detection of Moving Objects", International Conference on Systems, Man and Cybernetics,
3(5), 2005.
8. Xin Li, Kejun Wang, Wei Wang and Yang Li, "A Multiple Object Tracking Method Using
Kalman Filter", IEEE International Conference on Information and Automation, China, 2010.
9. Hajer Fradi and Jean-Luc Dugelay, "Robust Foreground Segmentation Using Improved
Gaussian Mixture Model and Optical Flow".
10. G. Medioni, I. Cohen, F. Bremond, S. Hongeng and R. Nevatia, "Event Detection and
Analysis from Video Streams", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23,
no. 8, pp. 873-889, Aug. 2001.
11. S. A. Vigus, D. R. Bull and C. N. Canagarajah, "Video Object Tracking Using Region Split
and Merge and a Kalman Filter Tracking Algorithm", in Proceedings of ICIP, 2001, pp. 650-653.
