Qian Mei1 and Mengfan Li1
1Zhengzhou Railway Vocational & Technical College, Zhengzhou 450000, China
Received: March 7, 2022 Accepted: April 24, 2022 Publication Date: May 13, 2022
Copyright The Author(s). This is an open access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are cited.
In order to improve the quality of medical image fusion, preserve the spectral characteristics of the original images, and avoid spectral degradation in the fused image, we propose a new medical image fusion method based on the nonsubsampled contourlet transform (NSCT) and an adaptive pulse coupled neural network (PCNN). NSST is used to decompose the source image into high- and low-frequency sub-bands. The improved PCNN fuses the low-frequency sub-band coefficients, with the sum of squared pixel errors used as the excitation factor and the sum of directional gradients selected as the link strength. The high-frequency sub-band coefficients, which carry the larger computational load, are fused using a feature-based rule. The fused image is then obtained by the inverse NSST. Experimental results show that, compared with NSCT and other combinations of NSST and PCNN algorithms, the proposed algorithm obtains better results with a shorter running time. The average values of information entropy, spatial frequency, standard deviation, clarity, and edge information with the proposed method are 6.6191, 23.3014, 64.3961, 9.4683, and 0.7213, respectively.
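The abstract gives no implementation details for the PCNN fusion stage, so the following is only an illustrative sketch of the general idea of PCNN-based coefficient selection: run a simplified PCNN over each low-frequency sub-band, count how often each neuron fires, and keep the coefficient from whichever source fires more. All parameter values, function names, and the firing-count selection rule here are assumptions, not the authors' actual formulation (which uses a squared-error excitation factor and gradient-based link strength).

```python
import numpy as np

def pcnn_fire_map(S, iters=30, beta=0.2, vtheta=20.0, atheta=0.2):
    """Firing-count map of a simplified PCNN driven by stimulus S.

    Hypothetical parameters: beta (link strength), vtheta/atheta
    (threshold amplitude and decay). Not the paper's adaptive scheme.
    """
    S = (S - S.min()) / (np.ptp(S) + 1e-12)  # normalize stimulus to [0, 1]
    F = S.copy()                             # feeding input (kept equal to stimulus)
    L = np.zeros_like(S)                     # linking input
    Y = np.zeros_like(S)                     # pulse output
    theta = np.ones_like(S)                  # dynamic threshold
    fires = np.zeros_like(S)
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])          # 3x3 neighbour weights
    H, W = S.shape
    for _ in range(iters):
        Yp = np.pad(Y, 1)                    # zero-padded previous firings
        link = sum(w[i, j] * Yp[i:i + H, j:j + W]
                   for i in range(3) for j in range(3))
        L = 0.8 * L + link                   # leaky linking input
        U = F * (1.0 + beta * L)             # internal activity
        Y = (U > theta).astype(float)        # neurons fire where activity > threshold
        theta = np.exp(-atheta) * theta + vtheta * Y  # threshold decays, jumps on fire
        fires += Y
    return fires

def fuse_lowpass(A, B, **kw):
    """Keep each coefficient from the source whose neuron fires more often."""
    fa, fb = pcnn_fire_map(A, **kw), pcnn_fire_map(B, **kw)
    return np.where(fa >= fb, A, B)
```

In a full pipeline this rule would be applied only to the low-frequency sub-bands produced by the NSST decomposition, with the high-frequency sub-bands fused by the feature-based rule described above.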
Keywords: NSCT, adaptive PCNN, feature-based rule, medical image fusion
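The five quality metrics reported in the abstract are standard in the image-fusion literature. As a reference sketch, two of them, information entropy and spatial frequency, can be computed under their usual definitions as follows (these are the textbook formulas, not necessarily the exact variants used in the paper's evaluation):

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy (bits) of an 8-bit grayscale image's histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                       # ignore empty bins (0 * log 0 := 0)
    return -np.sum(p * np.log2(p))

def spatial_frequency(img):
    """Spatial frequency: sqrt(RF^2 + CF^2) of row/column first differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return np.sqrt(rf ** 2 + cf ** 2)
```

Higher values of both metrics indicate that the fused image retains more information and more detail, which is the basis of the comparison reported above.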