南京大学学报(自然科学版)
JOURNAL OF NANJING UNIVERSITY (NATURAL SCIENCES)
2015, Issue 1, pp. 125-131 (7 pages)
Keywords: saliency areas; different illumination; manifold ranking; fusion
Saliency detection, the task of locating the regions of an image or video that most attract the human visual system, has received much attention in recent years, and numerous saliency models have been proposed in the literature. Most existing methods detect salient regions from color contrast alone; they recover only a rough outline of the salient region and neglect illumination, which strongly affects the final result. To extract salient regions more accurately, this paper proposes a framework that fuses results computed under different illumination conditions. The algorithm has three steps. First, the lightness of the input image is attenuated and enhanced step by step in the HSL color space (chosen because the RGB space does not represent illumination explicitly), producing a set of images under different illumination conditions; this step supplies the material for the later stages. Second, each generated image is segmented into superpixels and a saliency map is computed with manifold ranking, chosen for its effectiveness and efficiency; manifold ranking exploits the intrinsic manifold structure of the feature space together with the characteristics of background and foreground, yielding a saliency map for every illumination level. Finally, the saliency maps obtained under the different illumination conditions are fused, using prior knowledge, into a single result. Incorporating illumination-invariant features in this way both enriches the information available about the salient region and improves the accuracy of its boundary. Experiments on a public benchmark dataset, comparing the proposed method with several prior methods, show that it outperforms the other state-of-the-art algorithms in accuracy and robustness.
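The three-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-image saliency step (superpixel segmentation plus manifold ranking) is stood in for by a hypothetical `saliency_map` callable, the lightness factors are assumed values, and the fusion rule shown (simple averaging) is an assumption, since the abstract only says the maps are combined using prior knowledge.

```python
import colorsys
import numpy as np

def vary_lightness(rgb, factor):
    """Scale the HSL lightness channel of an RGB image (values in [0, 1]).

    Stands in for the paper's step 1: generating images under
    attenuated/enhanced illumination in the HSL color space.
    """
    out = np.empty_like(rgb)
    h, w, _ = rgb.shape
    for i in range(h):
        for j in range(w):
            hh, ll, ss = colorsys.rgb_to_hls(*rgb[i, j])
            out[i, j] = colorsys.hls_to_rgb(hh, min(1.0, ll * factor), ss)
    return out

def fused_saliency(rgb, saliency_map, factors=(0.5, 0.75, 1.0, 1.25, 1.5)):
    """Compute a saliency map per illumination level and fuse them.

    `saliency_map` is a hypothetical callable (image -> 2-D map in [0, 1])
    standing in for the superpixel + manifold-ranking stage; the factors
    and the averaging fusion are illustrative assumptions.
    """
    maps = [saliency_map(vary_lightness(rgb, f)) for f in factors]
    return np.mean(maps, axis=0)
```

A trivial stand-in such as `lambda im: im.mean(axis=2)` can be passed as `saliency_map` to exercise the pipeline end to end; swapping in a real manifold-ranking detector would not change the surrounding code.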