液晶与显示
Chinese Journal of Liquid Crystals and Displays
2015, Issue 6, pp. 1016-1023 (8 pages)
face recognition; feature extraction; two-directional two-dimensional linear discriminant analysis; sub-pattern two-directional two-dimensional linear discriminant analysis
To reduce the impact of variations in expression and illumination, a novel face recognition method based on sub-pattern two-directional two-dimensional linear discriminant analysis (Sp-(2D)²LDA) is presented in this paper. Firstly, Sp-(2D)²LDA divides the original images into smaller sub-images while keeping the spatial relationship between them. Secondly, it applies 2DLDA to each sub-image training set in the row and column directions simultaneously to extract local sub-features. Finally, the sub-feature matrices are assembled into a global feature matrix corresponding to the original image, and a nearest-neighbor classifier is used for classification. Experimental results on the ORL and Yale face databases show that the proposed Sp-(2D)²LDA method reduces not only the dimension of the discriminant features but also the influence of variations in illumination and facial expression, and thus achieves better classification performance than related methods.
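The abstract outlines a four-step pipeline: block partition, per-block bidirectional 2DLDA, feature stitching, and nearest-neighbor classification. The following is a minimal NumPy sketch of that pipeline under stated assumptions: all function names (fit_2dlda, fit_sp_2d2lda, transform, predict), the 2x2 block grid, the projected dimensions, and the ridge regularization are illustrative choices, not the paper's settings, and the toy data merely stands in for ORL/Yale-style face images.

```python
import numpy as np


def fit_2dlda(blocks, labels, d_row, d_col, reg=1e-6):
    """2DLDA in both directions on one sub-block position.

    blocks: (n_samples, h, w) array holding the same sub-block cut from every
    training image.  Returns (L, R) with L of shape (h, d_col) and R of shape
    (w, d_row), so a block X is reduced to L.T @ X @ R.
    """
    labels = np.asarray(labels)
    h, w = blocks.shape[1:]
    mean_all = blocks.mean(axis=0)
    Gb, Gw = np.zeros((w, w)), np.zeros((w, w))   # row-direction scatter matrices
    Hb, Hw = np.zeros((h, h)), np.zeros((h, h))   # column-direction scatter matrices
    for c in np.unique(labels):
        Xc = blocks[labels == c]
        mean_c = Xc.mean(axis=0)
        D = mean_c - mean_all
        Gb += len(Xc) * D.T @ D            # between-class scatter (rows)
        Hb += len(Xc) * D @ D.T            # between-class scatter (columns)
        for X in Xc:
            E = X - mean_c
            Gw += E.T @ E                  # within-class scatter (rows)
            Hw += E @ E.T                  # within-class scatter (columns)

    def top_eigvecs(Sw, Sb, d):
        # Leading eigenvectors of Sw^{-1} Sb (a small ridge keeps Sw invertible).
        vals, vecs = np.linalg.eig(np.linalg.solve(Sw + reg * np.eye(len(Sw)), Sb))
        order = np.argsort(-vals.real)
        return vecs[:, order[:d]].real

    R = top_eigvecs(Gw, Gb, d_row)   # projects the rows:    X @ R
    L = top_eigvecs(Hw, Hb, d_col)   # projects the columns: L.T @ X
    return L, R


def split_blocks(img, ph, pw):
    """Cut an image into a ph-by-pw grid of sub-blocks, kept in grid order
    (assumes the image dimensions are divisible by ph and pw)."""
    h, w = img.shape
    bh, bw = h // ph, w // pw
    return [[img[i*bh:(i+1)*bh, j*bw:(j+1)*bw] for j in range(pw)] for i in range(ph)]


def fit_sp_2d2lda(train_imgs, labels, ph=2, pw=2, d_row=4, d_col=4):
    """Fit one (L, R) projection pair per sub-block position."""
    labels = np.asarray(labels)
    grids = [split_blocks(img, ph, pw) for img in train_imgs]
    return [[fit_2dlda(np.stack([g[i][j] for g in grids]), labels, d_row, d_col)
             for j in range(pw)] for i in range(ph)]


def transform(img, projs, ph=2, pw=2):
    """Project every sub-block and stitch the results back into one feature
    matrix that mirrors the original block layout."""
    grid = split_blocks(img, ph, pw)
    rows = [np.hstack([projs[i][j][0].T @ grid[i][j] @ projs[i][j][1]
                       for j in range(pw)]) for i in range(ph)]
    return np.vstack(rows)


def predict(test_img, train_feats, train_labels, projs, ph=2, pw=2):
    """Nearest-neighbor classification under the Frobenius norm."""
    f = transform(test_img, projs, ph, pw)
    dists = [np.linalg.norm(f - g) for g in train_feats]
    return train_labels[int(np.argmin(dists))]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy 32x32 "faces": 4 classes, 5 samples each, separated by a mean offset.
    labels = np.repeat(np.arange(4), 5)
    imgs = rng.normal(size=(20, 32, 32)) + labels[:, None, None]
    projs = fit_sp_2d2lda(imgs, labels)
    feats = [transform(im, projs) for im in imgs]
    print(predict(imgs[0], feats, labels, projs))   # expected: 0
```

In this sketch the global feature matrix is simply the grid of projected sub-blocks placed back in their original positions, which is one straightforward way to realize the "concatenate sub-features into a matrix corresponding to the original image" step described in the abstract.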