Keywords
Wearable, activity monitoring, mobility, change of state, smartphone, accelerometer, video
Understanding how people move within their daily lives is important for healthcare decision-making. Typically, a person’s mobility status is self-reported in the clinic, which introduces error and the potential for bias. Functional scales can be administered to help assess a person’s mobility status1–4; however, these tests do not measure how a person moves after leaving the clinic. A quantitative method of characterizing mobility in the community would provide valid and useful information for healthcare providers and researchers.
Wearable systems have been developed to evaluate mobility in any environment or location. These portable systems are worn on the body and collect mobility information within the home and community. Examples include accelerometer-based systems5–8, portable electromyographs (EMG)9, and foot pressure devices10. Some wearable systems can be used for long-term, low-effort monitoring.
Smartphones provide an ideal interface for mobility assessment in the community since they are small, light, easily worn, and easy to use for most consumers. These phones are multitasking computing platforms that can incorporate accelerometers, GPS, video cameras, light sensors, temperature sensors, gyroscopes, and magnetometers11,12.
As shown in Table 1, cell phones have been used to recognize multiple activities by analyzing the phone’s accelerometer or external sensor data. However, most systems used the phone only as a wireless data-transfer device rather than as a wearable computing platform, relying on an external computer to analyze the data.
Author | Location | Phone based | Features | Freq (Hz) | Accuracy | Activities | Method
---|---|---|---|---|---|---|---
He7 | — | N | Discrete cosine transform, principal component analysis | 100 | 97% | Still, walk, run, jump | Support vector machine
Wang6 | Waist | N | Linear sum, duration, mean, median, STD, zero-crossing count, correlation, spectrum amplitude:mean ratio, empirical mode decomposition | 50 | 91% | Walk up and down slope | Gaussian mixture model
Hache5 | Waist | N | STD-Y, magnitude area, skewness-Y, inclination angle | 50 | 88% | Stand, sit, lie, walk, stairs, ramp, elevator | Hierarchical decision tree
Khan8 | Chest | N | Autoregressive coefficients, signal magnitude area, tilt angle (3D feature plot) | 20 | 98% | Sit, stand, lie, walk, stairs, run | Artificial neural net, hierarchical LDA features
Kwapisz13 | Front pants pocket | Y | Average, STD, average absolute difference, average resultant acceleration, signal magnitude area, time between peaks, binned distribution | 20 | 50% stairs, 91–95% others | Walk, jog, walk on stairs, sit, stand | Multilayer perceptron
Shumei14 | Waist | Y | Change, max, min, and velocity | 1 | 81.9% for motion, 83.7% for motionless | Walk, posture transition, gentle motion, stand, sit, lie | Hierarchical classification with multiclass SVM
Ganti15 | Waist | Y | Energy expenditure, body angle, 3D acceleration skewness, acceleration entropy | 7 | Eat, meeting (14%); drive, watch TV, aerobic (82%); desk work (50%); hygiene, cook (99%) | Sit, stand, aerobic, brush teeth, work, lie | Hidden Markov model
Wu19 | Waist | Y | STD-X, STD-Y, STD-Z, mean-Y, sum of ranges, range-Y, signal magnitude area’s sum of ranges, difference of sum of ranges, range-XZ, sum of GPS speed, GPS speed | 8 | 100% for static activities, 98% for walking, 66–77% for stairs, 33% for ramp | Stand, sit, lie, walk, stairs, ramp, elevator, car ride | Hierarchical decision tree
Wearable video-sensor systems have also been developed to log digital memories. Video, GPS, electrocardiogram, and acceleration were used to record the location and context of a person’s daily life16–18. These lifelogs had several limitations, including difficulties with information retrieval, synchronization, lighting quality, low visual resolution, artifacts, and automatic annotation.
A previous project showed that sensors located in a “smart-holster” could be combined with a BlackBerry Smartphone to provide image-assisted mobility assessment5. An external accelerometer, temperature sensor, humidity sensor, light sensor, and Bluetooth were integrated into the Smartphone holster. Software was written for the BlackBerry 9000, which had an embedded camera and GPS. A hierarchical decision tree combined with a double threshold algorithm classified signals to recognize changes-of-state (CoS) and activities. The BlackBerry then automatically took a digital picture at each CoS. This preliminary work confirmed that a wearable mobility monitoring system (WMMS) could use acceleration and pictures to identify walking movements, standing, sitting, and lying down, and additionally provide context for these activities.
Subsequent research19 used the BlackBerry 9550 Smartphone, with an integrated accelerometer, and revised algorithms to detect changes of state (CoS) and classify activities of daily living. The low sampling rate of the BlackBerry 9550 required some of the algorithms from the first WMMS to be revised. Having all sensors and computing power within the Smartphone provides a broadly accessible platform for wearable activity monitoring. A single-subject case study resulted in an average sensitivity of 89.7% and specificity of 99.5% for walking-related activities, a sensitivity of 72.2% for stair navigation, and a sensitivity of 33.3% for ramp recognition. Ramp results were poorer since accelerations during ramp gait were similar to those during level walking.
Since it was demonstrated that new BlackBerry Smartphones can identify CoS in real-time19, and Smartphone video capture is available on these devices, a preliminary evaluation of the usefulness of wearable video for improving activity classification and context identification is needed. The present study builds on this prior work and presents a proof-of-concept evaluation of a new BlackBerry-based WMMS that uses internal sensors and cell phone video to identify a range of mobility and daily living activities by using the CoS to trigger the acquisition of short video clips. A proof-of-concept evaluation is a necessary step before a large scale evaluation with people with disabilities. This study is the first to integrate sensor and (non-automated) video analysis for wearable mobility monitoring.
The prototype WMMS was developed for BlackBerry OS 5.0 on the Storm2 9550 Smartphone. The WMMS used the BlackBerry’s integrated accelerometer (3-axis, ± 2g), GPS, camera, and SD memory card for mobility data collection. The 3.15 MP camera had a maximum video resolution of 480x352 pixels and a video frame rate of 30 frames per second20. During video capture, no accelerometer or GPS data were available. With video control active, the accelerometer sampling rate was approximately 8 Hz21. For the WMMS, GPS data were sampled at 1 Hz, but GPS data were not used in the algorithm presented in this paper. The WMMS was worn in a holster on the right, front pelvis.
The new WMMS software, developed in22, sampled time, acceleration, and GPS location, and saved the raw data to a 16 GB SD card. Accelerations were processed to calibrate the axis orientation, using the gravity rotation method4. The processed acceleration was input into a feature extraction algorithm19 for analysis within 1-second data windows. Following feature extraction, the features were analyzed in a decision tree to determine whether a CoS had occurred. If a CoS was identified, a three-second video clip was captured. A second decision tree was used to categorize the activity19. Time, features, and activity classification were saved to an output file on the SD card.
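To make this data flow concrete, the following minimal sketch shows one pass of the per-window pipeline (in Python for readability; the actual WMMS ran as BlackBerry Java). The function and parameter names are illustrative, and the helper routines are sketched in the sections below.

```python
def wmms_loop(samples, extract_features, is_change_of_state,
              classify_activity, record_clip, log):
    """Accumulate 1 s windows of (t, ax, ay, az) samples, extract features,
    check for a change-of-state (CoS), trigger a 3 s video clip on a CoS,
    then classify the activity and log the result."""
    window, start, prev = [], None, None
    for t, ax, ay, az in samples:
        start = t if start is None else start
        window.append((ax, ay, az))
        if t - start >= 1.0:                             # 1-second data window
            feats = extract_features(window, prev["sr"] if prev else None)
            if prev is not None and is_change_of_state(feats, prev):
                record_clip(duration_s=3.0)              # accelerometer pauses
            log.append((start, feats, classify_activity(feats)))
            window, start, prev = [], None, feats
```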
Acceleration features were identified that were sensitive to changes in mobility status19. These included mean Y-axis acceleration, standard deviation (STD) in X,Y,Z axes (1), Y-axis range (Range_Y) (2), Sum of Ranges (SR) (3), Signal Magnitude Area (SMA) of SR (4), difference of Sum of Ranges (DiffSR) (5), and range of X and Z (Rxz) (6). The inclination angle (7) could distinguish sitting, lying, and standing for the classification of static activities. While the GPS provides useful location information, GPS was not required in this study because accelerometer-based recognition of vehicle riding showed good results19.
$STD_Y = \sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_i - m_y)^2}$ (computed analogously for X and Z) (1)

$Range_Y = Max_Y - Min_Y$ (2)

$SR = Range_X + Range_Y + Range_Z$ (3)

$SMA_{SR} = SR_1 + SR_2$ (4)

$DiffSR = SR_2 - SR_1$ (5)

$Rxz = Range_X + Range_Z$ (6)

$\text{Inclination angle} = \arctan\left(m_z / m_y\right)$ (°) (7)
In equations (1) to (7), $m_y$ is the mean Y-axis acceleration, $y_i$ is an individual Y-axis acceleration sample, $N$ is the number of samples per data window, $SR_2$ is the sum of ranges in the current window, $SR_1$ is the sum of ranges in the previous window, and $m_z$ is the mean Z-axis acceleration.
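A direct transcription of these features into code might look like the sketch below (Python; names are illustrative). Two assumptions are flagged in the comments: Eq. (4) is read here as the sum of the previous and current windows’ sum of ranges, and Eq. (7) is computed with atan2 so the angle sign is preserved.

```python
import math

def _std(vals):
    """Population standard deviation, Eq. (1)."""
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

def extract_features(window, prev_sr=None):
    """Features of Eqs. (1)-(7) for one 1 s window of (ax, ay, az) samples.
    prev_sr is the previous window's sum of ranges (SR1), used by
    Eqs. (4) and (5)."""
    xs, ys, zs = zip(*window)
    m_y, m_z = sum(ys) / len(ys), sum(zs) / len(zs)
    range_x, range_y, range_z = (max(a) - min(a) for a in (xs, ys, zs))
    sr = range_x + range_y + range_z                            # Eq. (3)
    feats = {
        "m_y": m_y,
        "std_x": _std(xs), "std_y": _std(ys), "std_z": _std(zs),  # Eq. (1)
        "range_y": range_y,                                     # Eq. (2)
        "sr": sr,
        "r_xz": range_x + range_z,                              # Eq. (6)
        # Eq. (7): tilt of the mean gravity vector in the Y-Z plane, degrees
        "incline": math.degrees(math.atan2(m_z, m_y)),
    }
    if prev_sr is not None:
        feats["sma_sr"] = sr + prev_sr   # Eq. (4), assumed two-window sum
        feats["diff_sr"] = sr - prev_sr  # Eq. (5)
    return feats
```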
The CoS algorithm described in19 used pre-set thresholds for feature analysis. Each feature was independently analyzed using single or double-thresholds and scored as true or false. These Boolean values were combined to recognize the activities reported in19 (static state, taking an elevator, walking-related movements) and also to identify changes of state for small movements during activities of daily living (ADL) (meals, hygienic activities, working in the kitchen). CoS classification ran in real-time to enable accurate Smartphone video recording.
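A hedged sketch of this threshold scoring is shown below. The band values and the Boolean combination are placeholders, not the calibrated rules from19; the intent is only to illustrate how per-feature true/false scores combine into a CoS decision.

```python
# (low, high) bands per feature; the numeric values are placeholders only.
COS_BANDS = {"sr": (0.3, 1.5), "std_y": (0.05, 0.5)}

def is_change_of_state(feats, prev_feats):
    """Score each feature as true/false against its single- or
    double-threshold band, then flag a CoS when the combined Boolean
    pattern differs from the previous window's pattern."""
    def pattern(f):
        return tuple(f[name] < low or f[name] > high       # outside the band
                     for name, (low, high) in COS_BANDS.items())
    return pattern(feats) != pattern(prev_feats)
```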
The decision tree for activity classification, described in19, used the same features and thresholds as the CoS algorithm to classify eight activities: sitting, standing, lying, riding an elevator, small stand-movements, small sit-movements, small lie-movements, and walking. The small movement categories included ADL and movements while sitting and standing. Following data collection, the video clips were manually reviewed by a human operator to help classify activities. Clips were played back on a BlackBerry 9700 phone. Video and audio were available for qualitative assessment of the current activity.
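The eight-class decision tree might be sketched as follows for a simplified subset of the classes; the branch order and thresholds are illustrative, not the calibrated values from19.

```python
def classify_activity(feats):
    """Toy hierarchical tree: static postures split by inclination angle,
    moving states split by motion magnitude."""
    if feats["sr"] < 0.3:                     # near-still: static postures
        if abs(feats["incline"]) > 60:
            return "lie"
        return "sit" if abs(feats["incline"]) > 30 else "stand"
    if feats.get("sma_sr", 0) > 2.0:          # sustained, larger motion
        return "walk"
    return "small movement"                   # ADL-scale motion
```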
A convenience sample of five able-bodied subjects (four males and one female; age: 35.2 ± 8.72 years; height: 174.58 ± 10.75 cm; weight: 66.46 ± 9.7 kg) was recruited from The Ottawa Hospital Rehabilitation Centre (TOHRC, Ottawa, Canada) for the proof-of-concept evaluation. A signed form that included informed consent, an information sheet, and a media agreement was obtained from each subject. Ethical approval for the study was obtained from the Office of Research Ethics and Integrity, University of Ottawa (File Number: A08-12-01). Subjects who had injuries or gait deficits were excluded.
Data collection took place within the TOHRC (hallways, elevator, one-bedroom apartment, stairs, Rehabilitation Technology Laboratory) and outside on a paved pathway. Participants wore the WMMS on their right-front waist, in a regular BlackBerry holster attached to a belt, with the camera pointing forward. No additional instructions were given for WMMS positioning. The participants were asked to follow a predetermined path and perform a series of mobility tasks, including standing, walking, sitting, riding an elevator, brushing teeth, combing hair, washing hands, drying hands, setting dishes, filling the kettle with water, toasting bread, a simulated meal at a dining table, washing dishes, walking on stairs, lying on a bed, walking on a ramp, and walking outdoors. Each activity in the sequence took approximately 10 to 20 seconds to complete. Each trial had 41 changes of state. Three trials were captured on the Smartphone and on a digital video camcorder.
Activity timing was obtained from the camcorder recording for comparison with the WMMS output. Start and end points of each trial were identified by shaking the Smartphone for 2 seconds. The camcorder also provided contextual information for analysis.
Data were imported from the Smartphone SD card into Microsoft Excel for statistical analysis. The Smartphone video clips were reviewed by two independent evaluators to qualitatively classify activities and assess video quality. The evaluators had access to the WMMS activity classification results during video classification.
The results were analyzed in terms of CoS and activity classification. For CoS identification at the transition point between activities (Table 2), sensitivities for standing, sitting, lying, and taking an elevator were between 97% and 100%. Walking-related CoS, such as stairs and ramps, had 67% to 73% sensitivity. Sensitivity5 for CoS between walking and small movements, such as brushing teeth, was between 40% and 93%. The CoS results for daily living activities were poorer (below 27%) since the continuous series of small movements produced similar acceleration features.
Table 3 shows the specificity5 results for activity classification using the CoS algorithm19, evaluated across consecutive data windows for CoS identification. The number of false positives was less than 12% for all measures, with half the measures reporting less than 5% false positives. Walking produced the most false positives, identifying a walking-CoS when no CoS occurred, in 324 out of 2700 cases (12%). Large standard deviations were found for some measures due to timing differences (i.e., people who walked slower took more time and had more data points for analysis, leading to more true negatives).
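For reference, the sensitivity and specificity values in Tables 2 and 3 follow the standard definitions over true positives (TP), false negatives (FN), true negatives (TN), and false positives (FP), restated here:

```latex
\mathrm{sensitivity} = \frac{TP}{TP + FN},
\qquad
\mathrm{specificity} = \frac{TN}{TN + FP}
```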
Better activity classification results were achieved when using both acceleration features and video clips for all activities except sitting, as compared to using the accelerometer only (Table 4). Acceleration-only analysis with the activity classification algorithm19 had a greater sensitivity than acceleration-and-video for sitting because one CoS was missed in one trial, resulting in no associated video data. Using the accelerometer only, none of the ADLs were identified; however, no false positives were reported, resulting in a high specificity.
Since accelerometer data were unavailable during video recording, no acceleration could be recorded for a minimum of 2.9 seconds after a CoS was identified. For example, if a person sits in a chair, stands up, and sits down again within 4 seconds (3 seconds of video capture and 1-second data window), the CoS will not be detected or identified. However, digital video would be available during this period for activity classification, enabling the evaluator to see the additional movements during post-processing. The average accelerometer sampling frequency was 7.88 ± 1.39 Hz. Video analyses became difficult in some circumstances, in particular when video images were dark.
The BlackBerry 9550 included a light sensor, but the sensor output was unavailable with Java API 5.0. Light intensity level could be beneficial for recognizing indoor and outdoor environments.
As shown in Figure 1, the extracted features were used to recognize small movements, identified using a combination of SR and STD_Y. Discrete small movements were easily distinguished; however, a CoS could be missed for a continuous series of small movements with similar accelerometer features, such as brushing teeth followed by combing hair.
Figure 1. Sum of ranges to distinguish small movements: brushing teeth (BT), combing hair (CH), washing hands (WH), drying hands (DH), moving dishes (MD), moving a kettle (MK), toasting bread (TB), preparing a meal (PM), and washing dishes (WD).
The sum of ranges was more sensitive than the STD_Y for distinguishing mobility states (Figure 2). Further, the SMA-SR curves were smoother than the sum of ranges curves. Smooth SMA-SR curves were better at defining classification thresholds for climbing stairs and walking.
Figure 2. Top: Sum of ranges (SR) and STD_Y to distinguish walking and static states (BT=brushing teeth, CH=combing hair, WH=washing hands, DH=drying hands). Bottom: SMA-SR to identify climbing stairs and walking.
Video was a valuable medium for evaluating mobility activities since audio information was available and multiple images could be used in the analysis (Table 5). The sense of motion also provided useful information for categorizing the activity and context.
Activity recognition for static and walking states was confirmed by video-clip evaluation. Occasionally, the video showed a moving hand during walking, as the arm swung during locomotion. Since the phone was located on the right waist, the video usually showed more of the right side than the left. For ramp navigation, the Smartphone video was not able to show the ramp; however, the video did allow the evaluator to determine whether the person was ascending or descending. Similarly, the video was unable to show the stairs when descending, but the downward movement and the sound of footfalls on the stairs allowed the evaluator to categorize the stair-descent activity.
The elevator CoS was usually triggered by a transition from walking to standing, rather than by elevator movement. The activity was classified from the video by seeing the elevator door or the inside of the elevator in the video clip. The inside of the elevator was unclear and dark, but the elevator sound was clearly identifiable. Further, the bathroom, kitchen, and dining room areas were clearly identified from video analysis.
Small movements, such as brushing teeth, combing hair, and moving plates, could not be identified because these activities were outside the Smartphone’s video field. Some clips could clearly identify moving a kettle, toasting bread, meal preparation, and washing dishes (Figure 3). Furthermore, drying hands and meal preparation needed continuous video images to identify the activities (i.e., single images would not be useful). Moreover, video-based activity classification was hampered for people with shorter lower bodies when a tabletop or kitchen counter obstructed the camera’s view. For example, moving a kettle and toasting bread were not visible in the video field for these shorter participants.
Figure 3. Images from BlackBerry video for small movements: a) kitchen cabinet and oven, b) bathroom sink, c) dining table and meals, d) moving a kettle, e) toasting bread, f) meal preparation, g) washing dishes, h) towel and towel rack, i) drying hands.
This study demonstrated that BlackBerry accelerometer signal analysis and cell phone video assessment can be combined to identify many mobility activities and the context of these activities. Since a readily available Smartphone was worn in a typical manner, at the waist in a holster, and no external sensors or hardware was required, the WMMS could be easily implemented in the community.
The ability to appropriately identify a CoS is critical for a WMMS that uses video, with accurate, real-time CoS identification needed to trigger the acquisition of video clips at the appropriate time to enable post-processing. The WMMS successfully recognized CoS for both static and walking activities. By taking the initial activity classification from the sensor data and refining the classification decision using the BlackBerry video, superior activity recognition results were obtained.
Static activity (sitting, standing, lying) classification accuracy improved from outcomes in the literature (below 94.6%5,13,23) to 97.3%. The results would have been 100%, except that one walk-to-sit CoS was missed in one of the trials. More thorough biomechanical analysis is required to identify the movement pattern that created this outlying error. Standing and lying down were recognized with 100% accuracy.
The WMMS produced better sensitivity results for walking than the previous Smartphone-smart-holster system5. While various studies in the literature reported better walking accuracy (above 96%7,23), those studies only distinguished between very distinct mobility states, such as running, jumping, and no movement. The accuracy of these systems would likely decrease if they were categorizing similar activities, such as running on level ground and running up an incline.
Stair navigation identification improved from 60% in the literature5,13 to 75% with the new WMMS. However, the result did not reach the 95% target. Acceleration-based categorization remains difficult for stairs since the signal patterns between walking on level ground and stairs are similar, and the stair slope may not greatly change accelerations at the waist for able-bodied people.
CoS identification for ramp walking was enhanced from 43.3% for the previous system5 to 66% because the new WMMS used a specific ramp judgment within the decision tree. Ramp classification also improved from 16.6%5 to 43.5%. Accelerometer signals from able-bodied people do not always change when moving from level ground to an incline; however, accelerometer signals from people with mobility disabilities may be sufficiently distinct to enable consistent ramp CoS triggering. During ramp descent, classification errors may occur if a walking CoS is triggered before the person finishes the descent and the waist-mounted video does not show the ramp bottom. An altitude sensor may help detect changes in vertical body position during ramp and stair navigation, thereby providing more accurate CoS triggering and sensor-based classification.
Specificity is also important for a WMMS that incorporates video, since identifying a CoS when no change actually occurs (i.e., a false positive) results in inappropriate video capture that affects storage capacity and battery life, and increases the workload for video post-processing. Most prior research did not report specificity or false positive results13–15. For the two studies that reported false positives, the maximum false positive rate was above 12% for walking-related activities5,24. Improved threshold calibration methods that are specific to the individual may help to decrease false positives and false negatives.
The BlackBerry device was able to process the features and algorithms in real-time and save outcome data for each 1-second window. Most mobility research used cell phones to collect raw data and then analyzed features and algorithms offline13–15,25. For the new WMMS, threshold settings, decision trees, feature extraction and timing were executed on the Smartphone in real time.
The combination of accelerometer and video camera was superior to using the accelerometer data alone to provide mobility information for activities of daily living. Bathroom, kitchen, and dining room activities were recognized by videos in 63% to 94% of the cases. Although the combination of accelerometer and video could not consistently identify small ADL movements, this method had better accuracy than the accelerometer only, which could not categorize any of these ADLs. Further, CoS for small movements could not be clearly identified. A higher accelerometer sampling rate may allow for more complex signal analysis that could improve small movement CoS identification. Additional features, such as signal magnitude area, skewness5, energy, correlation26, and kurtosis27, might help to recognize the small movement CoS, but these features need greater sampling rates and/or greater accelerometer range.
The video was better than still images for refining activity classification and recognizing context, such as flooring type, bathroom/kitchen, and outdoors. From previous research5, still images had limitations in dark areas such as elevators, but video clips with audio were helpful for identifying an elevator and other states. Since videos had continuous images to distinguish upward and downward movement, these clips were useful for stairs and ramp navigation. Although video cameras could potentially recognize small ADL movements, the waist-mounted Smartphone location limited the camera to activities occurring about waist height.
While BlackBerry video proved useful, the current implementation requires human time to review and classify the activities. Manual video assessment could be used for research and specific clinical applications; however, automated video analysis would make the system more efficient and promote widespread use of video data for activity classification. Further research into automated video analysis is required to achieve this goal.
Building on previous work that demonstrated the BlackBerry’s ability to identify changes of state in real-time, a new WMMS was presented that uses only the Smartphone’s accelerometer and video for unsupervised and ubiquitous mobility analysis for research and healthcare applications. This study was the first to integrate sensor and video analysis for wearable mobility monitoring. The phone’s sensor sampling rate was lower than that used in previous studies, demonstrating that a standalone WMMS can be implemented even on older-hardware phones. This standalone WMMS was designed for independent community ambulation and has minimal space requirements and setup time. Furthermore, the context of a person’s mobility activities can be identified, including the environment in which mobility takes place.
While monitoring mobility, the new system saves raw sensor data and features. The raw data could be analyzed by other researchers or used in post-processing to improve activity classification. Novel multi-feature algorithms were developed to recognize activities despite the lower accelerometer sampling rate and range. By combining and weighting sum, range, and covariance statistics, the WMMS was able to recognize standing, sitting, lying, riding an elevator, walking on level ground, ramps, stairs, and ADL (washing hands, drying hands, setting dishes, moving a kettle, toasting bread, preparing a meal, and washing dishes). Static activities, walking on level ground, walking on stairs, walking on a ramp, and riding an elevator had higher sensitivities than in previous studies, but the overall CoS identification and activity classification performance could be improved by further research.
Adding additional sensors, increasing the accelerometer sampling rate/sensitivity, and adding new user-specific threshold calibration methods can be considered in future work. The classification of other small movement ADL activities also requires further research to increase sensitivity and specificity. While evaluation with able-bodied participants was warranted for this proof-of-concept study, further evaluation research with various patient populations is required before this WMMS can be accepted for use by people with mobility disabilities.
Informed written consent to publish the results of this study was obtained from each participant.
F1000Research: Dataset 1. Data for combination of smartphone sensors and video for a wearable mobility monitoring system, 10.5256/f1000research.4790.d3253828
EL conceived the WMMS. HW, EL and NB designed and developed the WMMS and its evaluation protocol. HW wrote the WMMS code and performed the data collection and analysis. HW, EL and NB wrote the manuscript.
This project was jointly funded by the Natural Sciences and Engineering Research Council of Canada (NSERC, #CRDPJ 408109) and Research in Motion.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
The authors would like to thank Shawn Millar for assistance with data collection and Whitney Montgomery for statistical review. This project was funded by the Ontario Centers of Excellence, Natural Sciences and Engineering Research Council of Canada (NSERC), and BlackBerry. The study sponsors had no involvement in the study design, in the collection, analysis and interpretation of data, in the writing of the manuscript, and in the decision to submit the manuscript for publication.
Competing Interests: No competing interests were disclosed.