Mengjiao Zhang and Lei Zhang
School of Physical Education, Harbin University, Harbin, Heilongjiang 150086, China
Received: December 15, 2024 Accepted: February 2, 2025 Publication Date: March 16, 2025
Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.
To address the problem that existing aerobics action recognition methods cannot mine deep information such as movement and body features, a novel aerobics action recognition algorithm based on a graph neural network and Dempster-Shafer (D-S) evidence reasoning is proposed. The algorithm exploits aerobics movement history and body information together with the advantages of multi-feature fusion and graph neural networks: by modeling the relationship between body and movement, it captures the degree to which body information influences different types of aerobics movements, as well as the long-term and short-term characteristics of historical movements. By combining a least squares support vector machine with D-S evidence theory, the motion acceleration data and image coordinate data are fused before being input to the recognizer. In the experiments, the performance of the new method is compared with that of other state-of-the-art methods on the aerobics action recognition task. The results show that the new method achieves clear improvements in accuracy, precision, recall, and F-measure, which demonstrates the effectiveness of the proposed algorithm.
Keywords: aerobics action recognition, graph neural network, D-S evidence reasoning, least squares support vector machine, multi-feature fusion
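As a minimal sketch of the D-S fusion step described in the abstract, the snippet below implements Dempster's rule of combination for two independent evidence sources, such as an acceleration-based classifier and an image-coordinate classifier. The action labels and mass values are hypothetical illustrations, not data from the paper.

```python
# Hedged sketch: Dempster's rule of combination for two mass functions.
# Hypotheses are frozensets of action labels; masses over each frame sum to 1.
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass)
    using Dempster's rule with conflict normalization."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: evidence sources are incompatible")
    k = 1.0 - conflict  # normalization factor
    return {h: m / k for h, m in combined.items()}

# Hypothetical evidence from an acceleration-based classifier
m_accel = {frozenset({"jump"}): 0.6,
           frozenset({"jump", "kick"}): 0.3,
           frozenset({"kick"}): 0.1}
# Hypothetical evidence from an image-coordinate classifier
m_image = {frozenset({"jump"}): 0.5,
           frozenset({"kick"}): 0.2,
           frozenset({"jump", "kick"}): 0.3}

fused = dempster_combine(m_accel, m_image)
best = max(fused, key=fused.get)  # hypothesis with the highest fused mass
```

In a pipeline like the one the abstract describes, the per-source mass functions would come from classifier outputs (e.g. LS-SVM decision scores mapped to belief masses), and the hypothesis with the largest fused mass gives the recognized action.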