智能系统学报
CAAI TRANSACTIONS ON INTELLIGENT SYSTEMS
2014, Issue 5, pp. 577-583 (7 pages)
action recognition%view-invariant%view-space partitioning%interest points%optical flow%mixed feature%hidden Markov model%likelihood probability weighted fusion
Human actions are difficult to recognize under the viewpoint changes that occur in daily life. To address this problem, a novel view-invariant action recognition algorithm is proposed that fuses the likelihood probabilities of multi-view-space hidden Markov models (HMMs) built on a partitioning of the view space. First, the whole view space is partitioned into multiple sub-view spaces according to the rotation direction of the person relative to the camera. Next, a view-robust feature representation, combining a bag of interest-point words over shot-length video segments with region-wise amplitude histograms of local optical flow, is used to describe human motion. An HMM is then trained for each human action in each sub-view space. Finally, an action under an unknown view is recognized via a likelihood-probability-weighted fusion of the corresponding action models across the multi-view space. Experimental results on the multi-view action recognition dataset IXMAS demonstrate that the proposed approach is easy to implement and achieves satisfactory performance for unknown-view action recognition.
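The two structural steps of the abstract can be sketched in a few lines: assigning a rotation angle to a sub-view space, and fusing per-view HMM likelihoods with weights to classify an action. This is a minimal illustrative sketch, not the paper's implementation; the function names, the uniform angular partitioning, the example likelihood values, and the equal view weights are all assumptions for demonstration.

```python
# Hypothetical sketch of view-space partitioning and likelihood-weighted
# fusion. Values and names are illustrative; the paper's actual partition
# boundaries, features, and learned weights are not reproduced here.

def view_to_subspace(angle_deg, n_subspaces=4):
    """Assign a rotation angle (person relative to camera, in degrees)
    to one of n_subspaces equal angular sub-view spaces."""
    return int(angle_deg % 360 // (360 / n_subspaces))

def fuse_and_classify(likelihoods, view_weights):
    """likelihoods[action][view] ~ P(O | HMM for that action in that
    sub-view space); view_weights[view] weights each sub-view space.
    Returns the action with the highest fused score, plus all scores."""
    scores = {}
    for action, per_view in likelihoods.items():
        scores[action] = sum(view_weights[v] * p for v, p in per_view.items())
    return max(scores, key=scores.get), scores

# Toy example: two actions, two sub-view spaces, equal weights.
likelihoods = {
    "walk": {"front": 0.6, "side": 0.4},
    "wave": {"front": 0.2, "side": 0.7},
}
weights = {"front": 0.5, "side": 0.5}
action, scores = fuse_and_classify(likelihoods, weights)
```

In practice each entry of `likelihoods` would come from scoring the observation sequence against a trained HMM (e.g. the Viterbi or forward algorithm), and the weights would reflect how well each sub-view space matches the unknown view.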