Yu Jiang
Shenyang Normal University, No. 253 Huanghe North Street, Shenyang 110034, China
Received: November 5, 2023 Accepted: November 24, 2023 Publication Date: February 7, 2024
Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
In traditional spatial art, the idea of the unity of heaven and humanity, and of the connection between the two, has long been affirmed and developed. In architectural space this is often reflected in siting with respect to geography and climate: a building should find its appropriate posture within nature rather than compete with it. Traditional architectural planning and design thus advocates an organic view of nature that emphasizes the integrity of the natural world and the internal relationships among things, and attends to the combination of Yin and Yang, so as to unify humanity, nature, and the self. Image classification plays an especially important role in art design, since images are a core component of art. However, as network depth increases, fine-grained image detail is lost, which leads to poor segmentation along the boundaries between objects and to inaccurate pixel-level predictions in intelligent collaborative robot design. This paper therefore proposes a novel DABU-Net model based on principal component analysis (PCA) for intelligent collaborative robot design. First, the DABU-Net model is used to extract the region of interest in an image. Second, improved channel attention and spatial attention modules extract the most informative feature details along the channel and spatial dimensions, respectively. A new image dimensionality-reduction algorithm based on PCA is designed and integrated into the spatial self-attention module to improve its computational efficiency. Finally, experiments on public datasets show that the proposed method achieves better segmentation efficiency than other state-of-the-art methods.
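To make the abstract's two attention components concrete, the following is a minimal PyTorch sketch, not the authors' released implementation: an SE-style channel attention module, and a spatial self-attention module in which a PCA projection reduces the query/key dimensionality before the attention map is formed. All class and parameter names (ChannelAttention, PCASpatialAttention, reduction, k) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style channel attention: squeeze spatial dimensions, then
    produce a per-channel gating weight (illustrative, not the paper's code)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.avg_pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight each channel


class PCASpatialAttention(nn.Module):
    """Spatial self-attention whose query/key features are projected onto
    the top-k principal components before the dot product, reducing the
    cost of forming the attention map (a hypothetical reading of the
    abstract's PCA idea)."""

    def __init__(self, channels: int, k: int = 32):
        super().__init__()
        self.k = k
        self.query = nn.Conv2d(channels, channels, 1)
        self.key = nn.Conv2d(channels, channels, 1)
        self.value = nn.Conv2d(channels, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)    # (b, hw, c)
        kf = self.key(x).flatten(2).transpose(1, 2)     # (b, hw, c)
        v = self.value(x).flatten(2).transpose(1, 2)    # (b, hw, c)

        # PCA on the key features: centre them, take the top-k right
        # singular vectors, and project queries and keys into that
        # k-dimensional subspace.
        mean = kf.mean(dim=1, keepdim=True)
        _, _, vh = torch.linalg.svd(kf - mean, full_matrices=False)
        basis = vh[:, : self.k, :].transpose(1, 2)      # (b, c, k)
        q_r = (q - mean) @ basis                        # (b, hw, k)
        k_r = (kf - mean) @ basis                       # (b, hw, k)

        attn = torch.softmax(q_r @ k_r.transpose(1, 2) / self.k ** 0.5, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out  # residual connection


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    y = PCASpatialAttention(64, k=16)(ChannelAttention(64)(x))
    print(y.shape)  # torch.Size([2, 64, 32, 32])
```

With this projection, the query-key dot products are computed in k dimensions rather than c, so building the hw-by-hw attention map costs O((hw)^2 * k) instead of O((hw)^2 * c), which is one plausible way a PCA step could improve the spatial self-attention module's efficiency.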