Enhanced infrared and visible image fusion using a fast guided filter and an improved visual saliency map
Abstract
The fusion of infrared (IR) and visible images aims to generate a single composite image that is both highly informative and well suited to human perception or downstream computer vision tasks. This paper presents a fusion approach built on a fast guided filter (FGF) and an improved visual saliency map. The FGF first decomposes each input image into a base layer and a detail layer. The base layers are then fused with weights derived from the improved visual saliency map, while the detail layers are fused with a local energy-based strategy. Experimental evaluations show that the proposed approach outperforms conventional methods in both subjective and objective assessments.
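The pipeline described in the abstract can be sketched in a few lines of NumPy/SciPy. This is a minimal illustration, not the authors' implementation: it uses the plain guided filter (self-guided, without the subsampling that makes the FGF fast), a histogram-contrast saliency map in the spirit of Ma et al. (2017), and a choose-max local-energy rule for the detail layers; all parameter values (`r`, `eps`, `r_e`) are placeholders.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r, eps):
    """Edge-preserving smoothing of p with guidance image I (He et al. 2013)."""
    mean = lambda x: uniform_filter(x, size=2 * r + 1, mode='nearest')
    mI, mp = mean(I), mean(p)
    cov = mean(I * p) - mI * mp          # local covariance of (I, p)
    var = mean(I * I) - mI * mI          # local variance of I
    a = cov / (var + eps)                # linear coefficients per window
    b = mp - a * mI
    return mean(a) * I + mean(b)         # average overlapping estimates

def vsm(img, bins=256):
    """Histogram-contrast visual saliency: a pixel is salient when its
    intensity differs from many other pixels (normalized to [0, 1])."""
    g = np.clip((img * (bins - 1)).astype(int), 0, bins - 1)
    hist = np.bincount(g.ravel(), minlength=bins)
    diff = np.abs(np.arange(bins)[:, None] - np.arange(bins)[None, :])
    sal_per_level = diff @ hist          # S(j) = sum_i hist[i] * |j - i|
    s = sal_per_level[g].astype(float)
    return (s - s.min()) / (s.max() - s.min() + 1e-12)

def fuse(ir, vis, r=8, eps=0.01, r_e=4):
    """Two-scale IR/visible fusion; inputs are grayscale arrays in [0, 1]."""
    # 1) decompose each image into a base layer and a detail layer
    b_ir = guided_filter(ir, ir, r, eps)
    b_vis = guided_filter(vis, vis, r, eps)
    d_ir, d_vis = ir - b_ir, vis - b_vis
    # 2) fuse base layers with saliency-derived weights
    s_ir, s_vis = vsm(ir), vsm(vis)
    w = s_ir / (s_ir + s_vis + 1e-12)
    base = w * b_ir + (1 - w) * b_vis
    # 3) fuse detail layers by local energy (choose-max)
    mean = lambda x: uniform_filter(x, size=2 * r_e + 1, mode='nearest')
    detail = np.where(mean(d_ir ** 2) >= mean(d_vis ** 2), d_ir, d_vis)
    return np.clip(base + detail, 0, 1)
```

The two-scale split lets the saliency map steer the low-frequency brightness structure (where IR thermal targets matter) while the energy rule independently keeps the sharper of the two detail layers at every pixel.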
Article Details
Keywords
Fast guided filter (FGF), visual saliency map (VSM), image fusion (IF)
References
He, K., & Sun, J. (2015). Fast guided filter. arXiv. https://arxiv.org/abs/1505.00996v1
Li, H., & Wu, X. J. (2019). DenseFuse: A fusion approach to infrared and visible images. IEEE Transactions
on Image Processing, 28(5), 2614–2623. https://doi.org/10.1109/TIP.2018.2887342
Li, S., Kang, X., & Hu, J. (2013). Image fusion with guided filtering. IEEE Transactions on Image Processing,
22(7), 2864–2875. https://doi.org/10.1109/TIP.2013.2244222
Ma, J., Ma, Y., & Li, C. (2019). Infrared and visible image fusion methods and applications: A survey.
Information Fusion, 45, 153–178. https://doi.org/10.1016/j.inffus.2018.02.004
Ma, J., Yu, W., Liang, P., Li, C., & Jiang, J. (2019). FusionGAN: A generative adversarial network for infrared
and visible image fusion. Information Fusion, 48, 11–26. https://doi.org/10.1016/j.inffus.2018.09.004
Ma, J., Zhou, Z., Wang, B., & Zong, H. (2017). Infrared and visible image fusion based on visual saliency map
and weighted least square optimization. Infrared Physics & Technology, 82, 8–17.
https://doi.org/10.1016/j.infrared.2017.02.005
Meher, B., Agrawal, S., Panda, R., & Abraham, A. (2019). A survey on region based image fusion methods.
Information Fusion, 48, 119–132. https://doi.org/10.1016/j.inffus.2018.07.010
Panigrahy, C., Seal, A., Gonzalo-Martín, C., Pathak, P., & Jalal, A. S. (2023). Parameter adaptive unit-linking
pulse coupled neural network based MRI–PET/SPECT image fusion. Biomedical Signal Processing and
Control, 83, 104659. https://doi.org/10.1016/j.bspc.2023.104659
Tang, L., Yuan, J., Zhang, H., Jiang, X., & Ma, J. (2022). PIAFusion: A progressive infrared and visible image
fusion network based on illumination aware. Information Fusion, 83–84, 79–92.
https://doi.org/10.1016/j.inffus.2022.03.007
Yang, K., Xiang, W., Chen, Z., Zhang, J., & Liu, Y. (2024). A review on infrared and visible image fusion
algorithms based on neural networks. Journal of Visual Communication and Image Representation, 101,
104179. https://doi.org/10.1016/j.jvcir.2024.104179
Zhang, H., & Ma, J. (2021). SDNet: A versatile squeeze-and-decomposition network for real-time image
fusion. International Journal of Computer Vision, 129(10), 2761–2785.
https://doi.org/10.1007/s11263-021-01501-8