Journal of Applied Science and Engineering

Published by Tamkang University Press


Aiguo Wang1, Xianhong Wu2, Liang Zhao3, Haibao Chen3, and Shenghui Zhao3

1School of Electronic Information Engineering, Foshan University, Foshan, China
2Shenzhen Zhiwei Sci-Tech Innovation Co., Ltd., Shenzhen, China
3School of Computer and Information Engineering, Chuzhou University, Chuzhou, China


 

Received: January 21, 2021
Accepted: February 2, 2021
Publication Date: August 1, 2021

Copyright © The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


DOI: https://doi.org/10.6180/jase.202108_24(4).0016


ABSTRACT


Human physical activities play an essential role in many aspects of daily living and are inherently associated with an individual's functional status and wellness; therefore, automatically and accurately detecting human activities with pervasive computing techniques has practical implications. Although existing accelerometer-based activity recognition models perform well in a variety of applications, most of them work by simply concatenating features from different domains and may fail to capture the multi-view relationships among them, resulting in degraded performance. To this end, we present a multi-view aggregation model that analyzes accelerometer data for human activity recognition. Specifically, we extract time-domain and frequency-domain features from the raw time-series sensor readings to obtain multi-view data representations. Afterwards, we train a first-level model for each view and then unify these models into a meta-model via a stacking ensemble. Finally, comparative experiments on three public datasets are conducted against three other activity recognition models. The results indicate the superiority of the proposed model over its competitors in terms of four evaluation metrics across different scenarios.
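
To make the pipeline concrete, below is a minimal Python sketch of the two-level scheme the abstract describes: per-view feature extraction from windowed accelerometer readings, one first-level classifier per view, and a stacking meta-model trained on out-of-fold predictions. The window length, the specific features, and the choice of base and meta learners are illustrative assumptions, not the authors' exact configuration.

# A minimal sketch of multi-view stacking with scikit-learn. The window
# length, features, and learner choices are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def time_domain_features(windows):
    # Per-window statistics of the raw acceleration signal (time view).
    return np.column_stack([windows.mean(axis=1), windows.std(axis=1),
                            windows.min(axis=1), windows.max(axis=1)])

def freq_domain_features(windows):
    # Magnitude spectrum via FFT (frequency view); keep the low-order bins.
    return np.abs(np.fft.rfft(windows, axis=1))[:, :8]

# Toy data standing in for segmented accelerometer windows:
# 500 windows of 128 samples each, labeled with 4 activities.
rng = np.random.default_rng(0)
X_raw = rng.normal(size=(500, 128))
y = rng.integers(0, 4, size=500)

views = [time_domain_features(X_raw), freq_domain_features(X_raw)]
base_models = [RandomForestClassifier(random_state=0) for _ in views]

# First level: one model per view. Out-of-fold predicted probabilities
# serve as meta-features, which avoids leaking the training labels.
meta_features = np.hstack([
    cross_val_predict(model, X_view, y, cv=5, method="predict_proba")
    for model, X_view in zip(base_models, views)
])

# Refit each first-level model on its full view for use at inference time.
for model, X_view in zip(base_models, views):
    model.fit(X_view, y)

# Second level: the meta-model aggregates the per-view predictions.
meta_model = LogisticRegression(max_iter=1000).fit(meta_features, y)

At test time, each refitted first-level model scores its own view, the predicted probabilities are concatenated in the same order, and the meta-model issues the final activity label.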


Keywords: Wearable computing; Activity recognition; Multi-view aggregation

