
See discussions, stats, and author profiles for this publication at: https://www.researchgate.net/publication/235701636

Daily Activity Recognition based Principal Component Classification

Conference Paper · November 2011

3 authors: M'hamed Bilal Abidine (University of Science and Technology Houari Boumediene, USTHB), Belkacem Fergani (University of Science and Technology Houari Boumediene), Laurent Clavier (Institut Mines-Télécom)

All content following this page was uploaded by Belkacem Fergani on 28 May 2014.


Daily Activity Recognition Based on Principal Components Classification

M'hamed Billel Abidine, Belkacem Fergani
Faculty of Electronics and Computer Sciences, USTHB, Algiers, Algeria
E-mail: {abidineb, bfergani}@hotmail.com

Laurent Clavier
IEMN - IRCICA, TELECOM Lille1, France
E-mail: [email protected]

Abstract—The ability to recognize human activities from sensed information is very important for ubiquitous computing applications using smart identification technologies. This paper presents a new discriminative method, Principal Components Classification (PCC), which combines classical Principal Component Analysis with a correlation criterion to perform activity recognition in a smart home. Our method is much easier to compute and performs as well as state-of-the-art algorithms such as Support Vector Machines and k-Nearest Neighbor, with lower computational load and fewer parameter settings. We conduct several experiments on real-world datasets and show promising results.

Keywords-activity recognition; ubiquitous computing; smart home; sensor networks; machine learning

I. INTRODUCTION

By 2030, nearly one out of two households will include someone who needs help performing basic activities of daily living (ADL) [1]. Sensor-based technologies in the home are a key answer to this problem. Particularly in elderly care, ADL are used to assess the cognitive and physical capabilities of an elderly person [2].

The ability to identify the behaviour of people in a smart home is at the core of ubiquitous computing applications. Smart systems are equipped with sensor networks able to automatically recognize the activities of the occupants and assist humans. They must be able to recognize the ongoing activities of the users in order to suggest or take actions in an intelligent manner [3].

The collected sensor data are usually analysed with data mining and machine learning techniques to build activity models and perform pattern recognition [4]. In this approach, sensors can be attached either to an actor under observation or to objects that constitute the environment.

Recognizing a predefined set of activities is a classification task: features are extracted from the spatial and temporal information in the sensor data and then used for classification. Feature representations map the data to another representation space with the intention of making the classification problem easier to solve. In most cases, a classification model relates the activity to sensor patterns. Learning such models is usually done in a supervised manner and requires large annotated datasets recorded in different situations.

Methods used for recognizing activities can be divided into two categories [5]: generative models (e.g., Hidden Markov Models (HMM) [3] and the Naive Bayes Classifier (NBC) [7]) and discriminative models (e.g., Support Vector Machines (SVM) [14], Conditional Random Fields (CRF) [3] and k-Nearest Neighbor (k-NN) [17]). The generative methods perform well but require data modeling and are generally time consuming. Discriminative methods are model-free and more efficient, but supervised algorithms generally also require large training sets, which in turn are time and memory consuming, especially for large datasets such as wireless sensor network (WSN) data.

Motivated by the needs of activity recognition problems, and in order to overcome the drawbacks of generative models while exploiting the advantages of discriminative ones, we develop in this paper an effective supervised classification method based on Principal Component Analysis [6], named Principal Components Classification (PCC). Our method is a discriminative approach that finds an optimal linear hyperplane to best discriminate binary classes according to a correlation criterion. We compare its recognition performance to two state-of-the-art discriminative methods widely used in the activity recognition field, SVM and k-NN, and conduct several experiments on real ADL datasets to show its superiority in terms of computing time and complexity.

The rest of the paper is organized as follows. Section II presents related work in activity recognition. Section III describes existing algorithms used in the activity recognition field and our approach in detail. Section IV describes the dataset used in this paper and discusses the experimental results. Finally, Section V concludes by summarizing our findings.

II. RELATED WORK

Activity recognition has been performed using different types of data, for example state change sensors [7], motion detectors [8], cameras [9], accelerometers [10], RFID tags and sensors [3][11], electrical signatures [12], GPS [13] and various other sensors [14]. These technologies involve different levels of complexity and technological challenges in terms of price, intrusiveness, installation and the type of data they output [3][15].

The ubiquitous computing community has many interesting and creative ideas of how activity recognition can be applied. Supervised activity recognition models are based on large annotated datasets, and annotation has been performed in many different ways. The least interfering method is to use cameras [8]. Other examples are self-reporting methods, such as keeping an activity diary on paper or using a PDA [16].

III. METHODS FOR ACTIVITY RECOGNITION

A. Support Vector Machines (SVM)

The support vector machine formulates the training problem as finding the separating hyperplane that maximizes the distance between the closest elements of the two classes (named support vectors). The general form of the SVM decision function for classifying a test point x ∈ R^n is [4]

f(x) = sgn( Σ_{i=1..msv} α_i y_i K(x, x_i) + b ).  (1)

where msv is the number of support vectors x_i ∈ R^n obtained from the training set, α_i ≥ 0 are Lagrange multipliers, y_i are the class labels (either +1 or -1), and K(·,·) is the kernel function. The kernel function can be of various types [4]. In this paper, the radial basis function (RBF) kernel K(x, y) = exp(−||x − y||² / (2σ²)) is used, where σ is the shape parameter. The construction of such functions is described by the Mercer conditions [18].

A multiclass pattern recognition system [19] can be obtained from two-class SVMs. We selected the One-Versus-One (OvO) method because it does not require larger training datasets to solve more complex problems, and fewer classifiers are distorted by unbalanced training sets compared to One-Versus-All (OvA) [20]. This scheme constructs N(N−1)/2 classifiers, one for each pairwise combination of the N classes, as illustrated in Figure 1.

Figure 1. OvO-SVM classification of three linearly separable classes (N = 3) [14].

In this scheme, binary classifiers are constructed to differentiate the classes Ci and Cj, 0 ≤ i ≤ N and 0 ≤ j < i. To determine the class of a test point (see Figure 1, in the case N = 3), majority voting is used. The decision is given by C = argmax_{c=1..N} Card({y_{i,j}} = {c}), where y_{i,j} is the decision given, for this new point, by the SVM trained to distinguish between classes i and j.

The time complexity of building an OvO-SVM is O(m²·n) [20] and the test time complexity for d points is O(N·d·n·msv) [20].

B. k-Nearest Neighbor (k-NN)

The k-nearest neighbor algorithm is amongst the simplest of all machine learning algorithms [22], and therefore easy to implement. The m training instances x ∈ R^n are vectors in an n-dimensional feature space, each with a class label. In the k-NN method, a new query is classified based on the majority class among its k nearest neighbors. The classifier fits no model; it only stores the feature vectors and class labels in memory, and determines the k nearest neighbors by the minimum distance from the unlabelled test point to the training instances. k is a user-defined positive integer, and the Euclidean distance is usually used as the distance metric.

An example of k-NN classification is illustrated in Figure 2. The test sample (square) should be classified either to the first class of triangles C1 or to the second class of circles C2. If k = 3, it is assigned to the second class because there are 2 circles and only 1 triangle inside the inner circle.

Figure 2. Example of 3-NN classification [5].

The k-NN is easy to implement. The computational load of k-NN classification for d test points is O(d·n·m) [5].

C. Principal Components Classification (PCC)

The method developed in this paper is based on the principle of PCA [5][6], a standard linear technique for dimensionality reduction. The goal of PCA is to find an orthonormal subspace whose basis vectors correspond to the directions of maximal variance, i.e., the main components that best describe the scatter of all the projected samples, see Figure 3.
Figure 3. Principal components of a two-variable data set.

Consider a random vector x ∈ R^n with m observations x_i, i ∈ [1, ..., m]. In PCA, the data matrix X ∈ R^{m×n} is first centered, x ← x − μ(X), with μ(X) = {μ_i = E{X_i} | i = 1, ..., n}. PCA then diagonalizes the covariance matrix

Cov(X) = (1/(m−1)) X^T X.  (2)

This leads to solving the eigenvalue equation

λV = Cov(X) V,  with ||V|| = 1.  (3)

We exploit this property of principal components for data classification in the supervised binary case: y_i ∈ {−1, +1}, where −1 is the label of class C1 and +1 the label of class C2. In PCA, the first main component retains the most information about the data, but it is not necessarily the optimal component Vopt that provides the optimal hyperplane (Dopt) for data discrimination. Hence, our method, named Principal Components Classification (PCC), consists in searching for a vector Vopt in an eigenvector basis that optimally classifies the data. Our classification method requires a learning phase with a training dataset {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}. This phase is achieved as follows: we construct a component vector basis V = {V_i | i = 1, ..., n}. We then search for the decision function, given in (4), of the separating hyperplane (D) for each V_i by projecting the data on each component (x·V_i) and taking the sign:

f_i(x) = Sgn(x·V_i),  (4)

where Sgn is the sign function (Sgn(z) = 1 if z ≥ 0, Sgn(z) = −1 otherwise).

Then, we search for the optimal separating hyperplane (Dopt), which corresponds to the optimal main component Vopt ∈ R^n that gives the best discrimination between the two classes C1 and C2. In Figure 4, the first main component is V1, but the optimal main component is V2.

Figure 4. PCC classification between two classes using the optimal separating hyperplane (Dopt), with V2 being Vopt.

The search for the optimal main component Vopt is driven by the maximum correlation (Corr) between the vectors f_i(x) = {f_{1,i}(x), f_{2,i}(x), ..., f_{m,i}(x)}, i = 1, ..., n, and the original class-label vector y = {y_1, y_2, ..., y_m}, i.e., Max(abs(Corr(f_i(x), y))). Pearson's correlation coefficient is used:

Corr(X = f_i(x), Y = y) = σ_XY / (σ_X σ_Y),  (5)

where σ_XY is the covariance between X and Y, and σ_X, σ_Y are the standard deviations of X and Y, respectively.

The correction factor sign S in (6) is introduced to respect the choice of the label vector y, with −1 the label of class C1 and +1 the label of class C2. The factor S is obtained from the sign of the correlation between the function f_{(Vopt)/S=1}(x) given in (6) and y. This factor becomes negative (resp. positive) if the two vectors f_{(Vopt)/S=1}(x) and y point in opposite directions (resp. are collinear), see Figure 5. The decision function of the optimal hyperplane (Dopt) is then

f_{(Vopt)}(x) = S · Sgn(x·Vopt),  with S ∈ {−1, +1}.  (6)
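The learning phase described above can be sketched as follows. This is a hedged reconstruction from equations (2)-(6), not the authors' code: it centers the data, builds the eigenvector basis, and keeps the eigenvector whose sign projection has maximum absolute Pearson correlation with the labels, together with the sign factor S (the translation factor β of eq. (7) is omitted here):

```python
import numpy as np

def pcc_train(X, y):
    """Binary PCC learning phase (Steps 1-5, without the translation
    factor beta): pick the eigenvector of Cov(X) whose sign projection
    correlates best with the labels y in {-1, +1}."""
    mu = X.mean(axis=0)
    Xc = X - mu                                  # centering
    cov = Xc.T @ Xc / (len(Xc) - 1)              # eq. (2)
    _, V = np.linalg.eigh(cov)                   # eq. (3), columns are V_i
    best = -1.0
    for i in range(V.shape[1]):
        f = np.where(Xc @ V[:, i] >= 0, 1, -1)   # eq. (4)
        c = np.corrcoef(f, y)[0, 1]              # eq. (5), Pearson
        if abs(c) > best:
            best, v_opt, s = abs(c), V[:, i], (1 if c >= 0 else -1)
    return mu, v_opt, s

def pcc_predict(X, mu, v_opt, s):
    """Decision function of eq. (6)."""
    return s * np.where((X - mu) @ v_opt >= 0, 1, -1)

# Toy data echoing Figure 4: the first principal component (large
# variance, first axis) is useless for discrimination; the second
# component separates the classes.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([0.0, -2.0], [3.0, 0.5], size=(100, 2)),
               rng.normal([0.0, 2.0], [3.0, 0.5], size=(100, 2))])
y = np.array([-1] * 100 + [1] * 100)

mu, v_opt, s = pcc_train(X, y)
pred = pcc_predict(X, mu, v_opt, s)
print((pred == y).mean())  # near-perfect on this separable toy set
```

Because the eigenvector sign returned by the solver is arbitrary, the factor S is what makes the prediction collinear with y rather than anti-correlated.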
After choosing the optimal main component Vopt, we search for the optimal translation factor β_opt ∈ R of our separating hyperplane that provides a good separation between the classes. The choice of the β_opt parameter is determined by taking the maximum correlation between f_{(Vopt, β)}(x), given in (7), and y, i.e., Max(abs(Corr(f_{(Vopt, β)}(x), y))), where

f_{(Vopt, β)}(x) = S · Sgn(x·Vopt + β),  (7)

and the optimal translation factor value is searched in the range β ∈ [β_1, ..., β_2] with β_2 = −β_1 = abs(max(x·Vopt))/2.

The square test sample (see Figure 5) is then classified with the decision function

f_{(Vopt, βopt)}(x) = S · Sgn(x·Vopt + β_opt).  (8)

The entire PCC training algorithm and the time complexity of each step are summarized below.

TABLE I. PCC TRAINING ALGORITHM AND TIME COMPLEXITY

Algorithm steps:
Step 1. Compute the covariance matrix Cov(X) from the training data using (2). O(m·n²)
Step 2. Evaluate the eigenvectors {V_i, i = 1...n} using (3). O(n³), see [23] for more details.
Step 3. Compute the decision function for each eigenvector V_i using (4). O(m·n²)
Step 4. Determine the optimal eigenvector Vopt using Max(abs(Corr(f_i(x), y))). O(m·n)
Step 5. Find the correction factor sign S of the function in (6):
  If Corr(f_{(Vopt)/S=1}(x), y) < 0, S = −1. O(1)
  Otherwise, S = +1. O(1)
Step 6. Find the parameter β ∈ [β_1, ..., β_2] using Max(abs(Corr(f_{(Vopt, β)}(x), y))). O(m·n)

For multiclass discrimination, the One-Versus-One strategy is used to classify between each pair of classes with the decision function given in (8).

The time complexity of a binary PCC training algorithm is O(m·n²) + O(n³) (O(m·n²) when m >> n), corresponding to the most expensive steps in Table I, and the test time complexity for d points is O(d·n). In order to compute the time complexity of the training and testing phases of OvO-PCC, we assume m >> n and that the number of data points in each class is approximately the same, i.e., m̃ = m/N. To solve an N-class problem using conventional OvO, N(N−1)/2 binary PCC classifiers are developed. The time complexity of building a binary PCC is O(m̃·n²). Therefore, the training time for OvO-PCC is

T_OvO^train = (N(N−1)/2) · (2m/N) · n² ≈ N·m·n².  (9)

The test time complexity for d points is

T_OvO^test = (N(N−1)/2) · d·n ≈ N²·d·n.  (10)

Therefore, the time complexity of building an OvO-PCC is O(N·m·n²) and the testing time is O(N²·d·n).

IV. EXPERIMENTAL RESULTS

In this section, we first give the details of our experimental setup and then describe the dataset used. Finally, we present the results.

A. Setup and Performance Measures

We separate our data into a test and a training set using a "leave one day out cross validation" approach [3]. In this approach, a classifier is trained on (l − 1) days and evaluated on the one remaining day; this is repeated l times, with different training sets of size (l − 1), and the average performance measure is reported. In this way, we obtain inferred labels for the whole dataset by concatenating the results acquired for each day.

We evaluate the performance of our model by measuring the accuracy and the class average accuracy. These measures are defined as follows:

Accuracy: (Σ_{i=1..n} [inferred(i) = true(i)]) / n.  (11)

Class: (1/N) Σ_{c=1..N} ( (Σ_{i=1..n_c} [inferred_c(i) = true_c(i)]) / n_c ).  (12)

in which [a = b] is a binary indicator giving 1 when true and 0 when false, n is the total number of samples, N is the number of classes and n_c is the total number of samples of class c.

Sensor outputs are binary and represented in a feature space which is used by the model to recognize the activities performed. The "raw" sensor representation gives a 1 when the sensor is firing and a 0 otherwise. According to [3], the "change point" and "last" feature representations give better results than using the "raw" sensor data directly. The "change point" representation gives a 1 when the sensor reading changes, and the "last" representation gives a 1 for the sensor that changed state last and 0 for all other sensors.
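The two feature representations above can be sketched as follows for a raw binary sensor matrix (rows = time slices, columns = sensors). This is an illustrative reconstruction of the representations described in [3], not the authors' code; the initial state (no sensor has changed yet) and the tie-break when several sensors change in the same time slice are our assumptions:

```python
import numpy as np

def change_point(raw):
    """'Change point': 1 whenever a sensor's reading differs from the
    previous time slice, 0 otherwise (first slice is all zeros)."""
    cp = np.zeros_like(raw)
    cp[1:] = (raw[1:] != raw[:-1]).astype(raw.dtype)
    return cp

def last_sensor(raw):
    """'Last': 1 only for the sensor that changed state most recently."""
    cp = change_point(raw)
    last = np.zeros_like(raw)
    current = -1                      # assumption: no sensor fired yet
    for t in range(raw.shape[0]):
        changed = np.flatnonzero(cp[t])
        if changed.size:
            current = changed[-1]     # assumption: tie-break on highest index
        if current >= 0:
            last[t, current] = 1
    return last

raw = np.array([[0, 0],
                [1, 0],    # sensor 0 changes
                [1, 1],    # sensor 1 changes
                [1, 1]])   # no change: sensor 1 stays the "last" one

# "Changepoint+Last" as used in the experiments is the concatenation.
features = np.hstack([change_point(raw), last_sensor(raw)])
print(features)
```

Concatenating the two matrices column-wise yields the "Changepoint+Last" representation used in Section IV.C.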
B. Database

We used Kasteren's real-world dataset [3] to recognize ADL. This dataset was recorded using a wireless sensor network in a home with a single occupant: a 26-year-old man living alone in a three-room apartment where 14 digital state-change sensors were installed.

The dataset consists of 245 actions for 7 different activities over l = 28 days, sensed using RFID technology. A list of the annotated activities, with information about the class distribution, can be found in Table II. Annotation was done by the subject himself at the same time the sensor data was recorded, using a Bluetooth headset combined with speech recognition software. All unannotated time slices are collected in a single activity annotated "Idle".

TABLE II. NUMBER (NB) OF ACTIONS, NUMBER (NB) OF OBSERVATIONS AND PERCENTAGE OF TIME ACTIVITIES OCCUR IN THE HOUSE DATASET

ADL        Nb of actions   Nb of observations   Percentage of time (%)
Idle       -               4627                 11.5
Leaving    34              22617                56.4
Toileting  114             380                  1.0
Showering  23              265                  0.7
Sleeping   24              11601                29.0
Breakfast  20              109                  0.3
Dinner     10              348                  0.9
Drink      20              59                   0.2

C. Results

We compared the performance of the SVM with Gaussian kernel, k-NN and our method PCC on the dataset. We tested these algorithms in the Matlab environment. The SVM algorithm is tested with the LibSVM implementation [24].

Experiments were run using the "Changepoint+Last" representation, a concatenation of the change-point (C.P) and last (L.C) representations. The results obtained with this representation, in terms of accuracy and class accuracy for the different methods, are shown in Table III and Table IV, respectively.

TABLE III. DETECTION ACCURACY FOR SVM, K-NN AND PCC

Methods   Accuracy (%)
SVM       95.5
k-NN      94.2
PCC       95.1

TABLE IV. CLASS ACCURACY FOR SVM, K-NN AND PCC

ADL        SVM (%)   k-NN (%)   PCC (%)
Idle       83.2      68.6       87.0
Leaving    98.4      98.4       98.4
Toileting  78.9      75.0       50.7
Showering  28.6      81.5       1.9
Sleeping   99.6      99.6       99.5
Breakfast  40.3      28.4       5.5
Dinner     20.9      41.6       1.1
Drink      38.9      38.9       0.0
Total      61.1      66.5       43.0

When comparing the accuracies in Table III, we see that the SVM method gives the best results. The SVM hyper-parameters (C, σ) were optimized to maximize the accuracy using 5-fold cross-validation (l = 5); they are set to the optimal values in the ranges (1-1000) and (0.1-1.5), respectively.

In Table IV, the classes "Showering", "Breakfast", "Dinner" and "Drink" are less successfully recognized by the SVM method. The k-NN method achieves the overall highest class accuracy. We used 5-fold cross-validation to select the best value of the parameter k; k = 5 is the optimal value in the range (1-20). For the k-NN method, the activities "Breakfast", "Dinner" and "Drink" perform worst. Its accuracy is slightly lower than that of the other methods, but it has to be noted that k-NN showed reasonable recognition performance.

Table III shows that our PCC method obtains good results, with 95.1% accuracy. It mainly improves the recognition of the "Idle" activity, but it has worse class accuracy than the others, see Table IV. It almost completely fails to classify the activity "Showering" and the three kitchen activities "Breakfast", "Dinner" and "Drink". This is mainly due to the fact that these classes are under-represented in the training database.

The multi-class classification algorithms also have to be compared by their computational loads (i.e., training and testing steps) using "leave one day out cross validation". We can see from Table V that our method gives better results than the others: it is easy to compute and fast to run. Table V also gives an indication of the run time (T) required by each method on a Pentium Core 2 Duo T1350, 1.86 GHz clock and 1 GB RAM.

The reason for these results is that T_SVM(train+test) ≈ m²·n + N·d·n·msv, T_k-NN(test) ≈ d·n·m and T_PCC(train+test) ≈ N·m·n² + N²·d·n, where N = 8 and n = 14. With the "leave one day out cross validation" test method, we have m ≈ 38577 and d ≈ 1429. Therefore, T_PCC << T_k-NN << T_SVM.

TABLE V. COMPUTATIONAL LOADS FOR SVM, K-NN AND PCC IN THE TRAINING AND TEST STEPS

Methods   Computational loads (training+test)   T(s)
SVM       m²·n + N·d·n·msv                      852
k-NN      d·n·m                                 678
PCC       N·m·n² + N²·d·n                       402

V. CONCLUSION AND FUTURE WORK

In this paper, a new discriminative supervised classification method for activity recognition has been presented. This method, based on PCA and a correlation criterion, performs as well as the main state-of-the-art methods such as SVM and k-NN. We have shown that the proposed algorithm requires a low computational complexity, O(N·m·n²) for training and O(N²·d·n) for testing, compared with the other methods when m >> n, and fewer parameter settings than SVM.

The PCC method is more sensitive to a dominant class than the other methods, but in general, without considering the class context, the global recognition rate obtained with PCC is good and the results are promising. This method seems well suited to WSN applications: distributing the processing among the sensor nodes can reduce communication costs and therefore extend the lifetime of the network. In the future, we will improve our method in terms of class accuracy using hierarchical decision making or class rebalancing methods.

REFERENCES

[1] R. W. Johnson, "Chronic care in America: a 21st century challenge," Technical report, Robert Wood Johnson Foundation, 1996.
[2] S. Katz, T. D. Down, H. R. Cash, "Progress in the development of the index of ADL," Gerontologist, 10:20-30, 1970.
[3] T. V. Kasteren, A. Noulas, G. Englebienne, B. Krose, "Accurate activity recognition in a home setting," in Proc. 10th Int. Conf. Ubiquitous Computing, Tokyo, Japan, Sept. 11-14, pp. 1-8, 2008.
[4] C. Bishop, Pattern Recognition and Machine Learning, Springer, New York, ISBN: 978-0-387-31073-2, 2006.
[5] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern Classification (2nd edition), John Wiley and Sons, 2000.
[6] I. Jolliffe, Principal Component Analysis, Springer Verlag, New York, 1986.
[7] E. M. Tapia, S. S. Intille, and K. Larson, "Activity recognition in the home using simple and ubiquitous sensors," in Proc. Pervasive Computing, Vienna, Austria, pp. 158-175, April 18-23, 2004.
[8] B. Logan, J. Healey, M. Philipose, E. M. Tapia and S. S. Intille, "A long-term evaluation of sensing modalities for activity recognition," in Ubicomp '07, pp. 483-500, 2007.
[9] W. Lao, J. Han, P. H. N. de With, "Automatic video-based human motion analysis for consumer surveillance system," IEEE Trans. Consumer Electronics, 55(2): 591-598, 2009.
[10] J. Lester, T. Choudhury, N. Kern, G. Borriello, B. Hannaford, "A hybrid discriminative/generative approach for modeling human activities," in Proc. Int. Joint Conf. Artificial Intelligence, Edinburgh, UK, pp. 766-772, Jul. 30-Aug. 5, 2005.
[11] M. Philipose, K. P. Fishkin, M. Perkowitz, D. J. Patterson, D. Hahnel, D. Fox and H. Kautz, "Inferring activities from interactions with objects," IEEE Pervasive Computing, Vol. 3, No. 4, pp. 50-57, 2004.
[12] S. Tsukamoto, H. Hoshino, and T. Tamura, "Study on indoor activity monitoring by using electric field sensor," in Int. Conf. of the International Society for Gerontechnology, Pisa, Tuscany, Italy, June 4-7, 2008.
[13] L. Liao, D. Fox, H. Kautz, "Extracting places and activities from GPS traces using hierarchical conditional random fields," Int. J. Robotics Research, 26(1): 119-134, 2007.
[14] A. Fleury, M. Vacher, N. Noury, "SVM-based multi-modal classification of activities of daily living in health smart homes: sensors, algorithms and first experimental results," IEEE Transactions on Information Technology in Biomedicine, Vol. 14(2), pp. 274-283, March 2010.
[15] A. Schmidt, Ubiquitous Computing - Computing in Context, Ph.D. Dissertation, Lancaster University, 2002.
[16] S. S. Intille, E. M. Tapia, J. Rondoni, et al., "Tools for studying behavior and technology in natural settings," in Ubicomp, pp. 157-174, 2003.
[17] S. W. Lee and K. Mase, "Activity and location recognition using wearable sensors," IEEE Pervasive Computing, Vol. 1, No. 3, pp. 24-32, 2002.
[18] J. Mercer, "Functions of positive and negative type and their connection with the theory of integral equations," Philosophical Transactions of the Royal Society of London, Series A, vol. 209, pp. 415-446, 1909.
[19] C. Hsu and C. Lin, "A comparison of methods for multiclass support vector machines," IEEE Trans. Neural Networks, 13(2): 415-425, 2002.
[20] M. Bala and R. K. Agrawal, "Optimal decision tree based multi-class support vector machine," Informatica, 34: 197-209, 2010.
[21] L. Lin, D. Gang, "A multiple classification method based on the cloud model," Neural Network World, 20(5): 651-666, 2010.
[22] O. Kwon and J. Lee, "Text categorization based on k-nearest neighbor approach for Web site classification," Information Processing and Management, 39, pp. 25-44, 2003.
[23] N. N. Schraudolph, S. Günter, and S. V. N. Vishwanathan, "Fast iterative kernel PCA," in B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems, volume 19, pp. 1225-1232, MIT Press, Cambridge, MA, 2007.
[24] C. C. Chang and C. J. Lin, LIBSVM: a library for support vector machines. [Online]. Available: http://www.csie.ntu.edu.tw/~cjlin/libsvm/
