Journal of Applied Science and Engineering

Published by Tamkang University Press


Po Kong

School of Electronics and Electrical Engineering, Zhengzhou University of Science and Technology, Zhengzhou 450064, China

Received: January 7, 2025
Accepted: February 27, 2025
Publication Date: March 16, 2025

Copyright © The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.


DOI: https://doi.org/10.6180/jase.202511_28(11).0014


Facial expression recognition in natural settings suffers from low accuracy and is easily affected by noise and other factors. To address this, this paper proposes a lightweight facial expression recognition method based on a deep convolutional asymmetric residual attention network. The Ghost module and depthwise separable convolution are introduced to replace the 1×1 and 3×3 convolutions in the Bottleneck block, which retains more of the original feature information and improves the feature extraction ability of the backbone branches. A parallel deep convolutional residual structure is designed in the shallow network to strengthen the model's representation of local facial expression details and fuse them with global features. A spatial group-wise enhancement attention mechanism is built into the deep network to stabilize the distribution of expression features and improve the model's ability to discriminate subtle changes in expression. To avoid over-fitting, the output structure of the backbone network is improved without greatly increasing computational complexity, and the Mish activation function replaces ReLU in the Bottleneck block, further improving recognition accuracy. The proposed method achieves expression recognition accuracies of 88.44% and 63.20% on the public seven-class datasets RAF-DB and AffectNet-7, respectively, and 60.23% on the eight-class dataset AffectNet-8. The experimental results show that the method improves expression recognition accuracy while reducing the number of network parameters, demonstrating that it is effective and has practical application prospects.


Keywords: Facial expression recognition; Ghost module; depthwise separable convolution; attention mechanism
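
The article does not include source code. As a rough, illustrative sketch of the three named ingredients (the Ghost module, the depthwise separable replacement for the Bottleneck convolutions with Mish in place of ReLU, and spatial group-wise enhancement attention), the PyTorch code below follows the published GhostNet (Han et al.) and SGE (Li et al.) formulations. The module names GhostModule, GhostBottleneck, and SpatialGroupEnhance, the channel ratios, the group count, and the placement of batch normalization are assumptions for illustration, not the authors' exact configuration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F


    class GhostModule(nn.Module):
        """Ghost module (after Han et al., GhostNet): a small ordinary
        convolution produces intrinsic feature maps, and cheap depthwise
        convolutions derive the remaining "ghost" maps from them."""

        def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_kernel: int = 3):
            super().__init__()
            init_ch = out_ch // ratio            # intrinsic maps
            ghost_ch = out_ch - init_ch          # cheap ghost maps
            self.primary = nn.Sequential(
                nn.Conv2d(in_ch, init_ch, 1, bias=False),
                nn.BatchNorm2d(init_ch),
                nn.Mish(inplace=True),           # Mish in place of ReLU, as in the paper
            )
            self.cheap = nn.Sequential(
                nn.Conv2d(init_ch, ghost_ch, dw_kernel, padding=dw_kernel // 2,
                          groups=init_ch, bias=False),   # depthwise: one filter per channel
                nn.BatchNorm2d(ghost_ch),
                nn.Mish(inplace=True),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            primary = self.primary(x)
            return torch.cat([primary, self.cheap(primary)], dim=1)


    class GhostBottleneck(nn.Module):
        """Residual bottleneck in which the two 1x1 convolutions are replaced
        by Ghost modules and the 3x3 convolution by a depthwise separable pair."""

        def __init__(self, ch: int, mid: int):
            super().__init__()
            self.body = nn.Sequential(
                GhostModule(ch, mid),                                        # replaces 1x1 reduce
                nn.Conv2d(mid, mid, 3, padding=1, groups=mid, bias=False),   # depthwise 3x3
                nn.BatchNorm2d(mid),
                nn.Conv2d(mid, mid, 1, bias=False),                          # pointwise 1x1
                nn.BatchNorm2d(mid),
                nn.Mish(inplace=True),
                GhostModule(mid, ch),                                        # replaces 1x1 expand
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return F.mish(x + self.body(x))


    class SpatialGroupEnhance(nn.Module):
        """Spatial group-wise enhancement (after Li et al., SGE): each channel
        group is reweighted by the similarity between its local features and
        the group's globally pooled descriptor, emphasizing subtle spatial cues."""

        def __init__(self, groups: int = 8):
            super().__init__()
            self.groups = groups
            self.pool = nn.AdaptiveAvgPool2d(1)
            self.weight = nn.Parameter(torch.ones(1, groups, 1, 1))
            self.bias = nn.Parameter(torch.zeros(1, groups, 1, 1))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            b, c, h, w = x.shape
            g = self.groups
            x = x.view(b * g, c // g, h, w)
            # Similarity between each spatial position and the group descriptor.
            attn = (x * self.pool(x)).sum(dim=1, keepdim=True)
            attn = attn.view(b * g, -1)
            # Normalize per group so the attention distribution stays stable.
            attn = (attn - attn.mean(dim=1, keepdim=True)) / (attn.std(dim=1, keepdim=True) + 1e-5)
            attn = attn.view(b, g, h, w) * self.weight + self.bias
            attn = attn.view(b * g, 1, h, w)
            return (x * attn.sigmoid()).view(b, c, h, w)


    if __name__ == "__main__":
        # Sanity check on a dummy batch of 112x112 face crops.
        block = nn.Sequential(GhostBottleneck(64, 32), SpatialGroupEnhance(8))
        print(block(torch.randn(2, 64, 112, 112)).shape)   # torch.Size([2, 64, 112, 112])

The Ghost module's cheap depthwise branch is what delivers the parameter savings the abstract emphasizes: for the same output width, roughly half of the feature maps are produced by single-channel filters instead of full cross-channel convolutions.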



