农业工程学报 (Transactions of the Chinese Society of Agricultural Engineering)
2014, No. 11, pp. 173-179 (7 pages)
crops; image processing; image fusion; contourlet transformation; fusion rule
To better fuse crop images captured under different illumination conditions, fusion rules suited to crop images were applied on the basis of the contourlet transform (CT). First, the source images were decomposed by the contourlet transform into multiple scales and directions, yielding low-frequency sub-band coefficients and band-pass directional sub-band coefficients. For the low-frequency sub-band coefficients, an improved linear weighted fusion method was adopted to reduce the influence of noise on the fusion result; for the band-pass directional sub-band coefficients, a coefficient selection scheme based on a gradient-maximum rule, designed around human visual characteristics, was used to obtain the coefficients of the image to be fused. Finally, the fused image was obtained by the inverse contourlet transform. The fusion results were compared with those of the wavelet transform (WT) method. The comparison showed that the proposed method achieved substantial improvements over WT in mutual information (MI), spatial frequency (SF), mean square error (MSE), information entropy (Ent), correlation coefficient (CC), average gradient (G') and peak signal-to-noise ratio (PSNR), indicating that it yields better fusion results than WT. On this basis, the fusion rules proposed here were compared with commonly used CT fusion rules, which likewise showed that the CT method can effectively improve image fusion quality. The results indicate that applying the adopted fusion rules to crop images captured under different illumination conditions is effective and feasible, and the study provides a reference for crop image fusion under varying illumination.
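The two fusion rules described in the abstract can be sketched as follows. This is a minimal illustration in which plain NumPy arrays stand in for contourlet sub-band coefficients (the decomposition itself is assumed to come from a contourlet toolbox); the fixed weight `w` and the finite-difference gradient approximation are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def fuse_lowpass(a, b, w=0.5):
    """Linear weighted fusion of two low-frequency sub-bands.
    The weight `w` is a placeholder; the paper uses an improved
    weighting scheme intended to suppress noise."""
    return w * a + (1.0 - w) * b

def fuse_bandpass(a, b):
    """Gradient-maximum rule for band-pass directional sub-bands:
    for each coefficient, keep the value from whichever sub-band
    has the larger local gradient magnitude."""
    ga = np.abs(np.gradient(a)[0]) + np.abs(np.gradient(a)[1])
    gb = np.abs(np.gradient(b)[0]) + np.abs(np.gradient(b)[1])
    return np.where(ga >= gb, a, b)
```

In a full pipeline, these rules would be applied to each decomposed sub-band pair before the inverse contourlet transform reconstructs the fused image.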
In the research field of agricultural crop growth monitoring, it is difficult to capture a single image that fully describes a crop's growth information. Image fusion can combine two or more source images into a single composite image with extended information, which is very useful for crop monitoring systems. To address the fusion of crop images captured under different lighting conditions, an image fusion algorithm based on contourlet transformation (CT) theory was developed. First, using CT, all the source images were decomposed into multi-scale and multi-direction sub-bands. Then, for the low-frequency coefficients, linear weighted fusion rules were adopted to reduce the influence of noise; for the band-pass directional sub-band coefficients, a maximum-gradient rule was used to match human visual characteristics. Finally, the inverse contourlet transform was applied to obtain the fused image. To test the performance of the algorithm, images of cucumbers, cherry tomatoes, eggplants, and peppers captured under different lighting conditions were used as experimental material. The experiment was divided into two parts: (1) a performance comparison between CT and the wavelet transform (WT), and (2) a comparison of different fusion rules under CT. The comparative test against the WT method showed that the proposed method achieved much better performance than wavelet-based fusion methods and commonly used fusion rules. In particular, the MI of the fused image was 27.04% higher than that of the wavelet-based method, the SF value increased by 37.73%, the MSE measure was 46.97% higher, Ent improved by 19.69%, CC improved slightly, by 2.76%, and G' was 11.21% higher than that of the WT-based image fusion algorithm. The PSNR value also improved by 8.06% on average.
Another comparative experiment with commonly used fusion rules showed that edge strength information was 0.30% higher, structural similarity improved by 0.50%, and the average gradient was 2.63% higher than under "linear+max" rules, while the entropy value was 5.07% higher than under "min+mean" rules. These experiments all showed that the proposed CT-based image fusion algorithm is practical and valid for agricultural crop image fusion under different lighting conditions. This research provides a useful reference for the fusion of crop images under varying illumination.
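Two of the evaluation metrics cited above, PSNR and information entropy, can be illustrated as follows for 8-bit grayscale images. These are standard textbook definitions implemented with NumPy, not the paper's own code; the bin count and peak value are conventional assumptions for 8-bit data.

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the test
    image is closer to the reference."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram, in bits;
    higher values indicate richer information content."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())
```

A fused image that preserves more detail from both sources will generally show higher entropy and, against a clean reference, a higher PSNR, which is why both appear in the comparison above.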