红外与激光工程
INFRARED AND LASER ENGINEERING
2014年第3期, pp. 961-966 (共6页)
图像融合; 非下采样均匀离散Curvelet变换; 区域分割
image fusion; nonsubsampled uniform discrete curvelet transform; region segmentation
针对同一场景的红外与可见光图像,提出了基于非下采样均匀离散Curvelet变换(NSUDCT)的图像融合方法。首先使用标记控制的分水岭分割(MCWS)算法对源图像进行区域分割,对各分割结果进行叠加得到联合区域图。然后对源图像进行非下采样均匀离散Curvelet分解,分解后的低频系数采用区域对比度和区域标准差作为量测指标进行融合,高频方向系数使用基于局部能量的融合规则进行融合,并对融合系数做一致性检测。最后通过各频带融合系数重建得到融合图像。实验结果表明文中方法取得了比较好的视觉效果和量化数据,相比基于NSUDCT的像素融合方法,此文方法的熵值提高了9.87%,交叉熵减少了68.04%,互信息提高了80%。
For infrared and visible images of the same scene, a fusion method based on the nonsubsampled uniform discrete curvelet transform (NSUDCT) was proposed. First, the source images were segmented with the marker-controlled watershed segmentation (MCWS) algorithm, and a joint region map was obtained by superimposing the segmentation results. Then the NSUDCT was applied to the source images: the low-frequency coefficients were fused using region contrast and region standard deviation as activity measures, the high-frequency directional coefficients were fused with a local-energy rule, and a consistency check was applied to the fused coefficients. Finally, the fused image was reconstructed from the fused subband coefficients. Experimental results show that the proposed method achieves better fusion quality in both visual and quantitative terms: compared with the pixel-level fusion method based on NSUDCT, the entropy of the fused images increased by 9.87%, the cross entropy decreased by 68.04%, and the mutual information increased by 80%.
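The two coefficient-fusion rules described in the abstract — region-based fusion of the low-frequency band and local-energy selection with a consistency check for the high-frequency directional bands — can be sketched as below. This is a minimal NumPy illustration of the rules only, not the authors' implementation: it assumes the NSUDCT decomposition and the MCWS region labels are supplied by the caller, uses region standard deviation alone as the low-frequency activity measure (the paper combines it with region contrast), and all function names are hypothetical.

```python
import numpy as np


def sliding_sum(x, win=3):
    """Sum of x over a win x win neighborhood (edge-padded)."""
    p = win // 2
    padded = np.pad(x, p, mode="edge")
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for dy in range(win):
        for dx in range(win):
            out += padded[dy:dy + h, dx:dx + w]
    return out


def fuse_lowfreq(a, b, labels):
    """Region-based low-frequency fusion: within each region of the joint
    region map `labels`, keep the source whose coefficients are more
    active (here: larger region standard deviation, as a stand-in for the
    paper's combined contrast/std measure)."""
    fused = np.empty_like(a, dtype=float)
    for lab in np.unique(labels):
        m = labels == lab
        fused[m] = a[m] if a[m].std() >= b[m].std() else b[m]
    return fused


def fuse_highfreq(a, b, win=3):
    """High-frequency fusion: pick, per pixel, the coefficient with the
    larger local energy, then apply a consistency check that flips any
    decision outvoted by its win x win neighborhood."""
    choose_a = sliding_sum(a * a, win) >= sliding_sum(b * b, win)
    votes = sliding_sum(choose_a.astype(float), win)
    choose_a = votes > (win * win) / 2.0  # neighborhood majority vote
    return np.where(choose_a, a, b)
```

In a full pipeline, `fuse_lowfreq` and `fuse_highfreq` would be applied to the corresponding NSUDCT subbands of the two source images, and the fused image recovered by the inverse transform.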