Psychological Science
2013, No. 1, pp. 33–37
emotional voice, facial expression, cross-modal, event-related potential (ERP)
Using the event-related potential (ERP) technique, this study examined the time course of the influence of emotional voice on facial expression recognition. Valence-congruent or valence-incongruent voice–face pairs were constructed, and participants judged whether the valence of the emotional voice matched that of the facial expression. Behavioral results showed that participants responded faster to congruent voice–face pairs. ERP results showed that at 70–130 ms and 220–450 ms, facial expressions in the incongruent condition evoked more negative waveforms than in the congruent condition; at 450–750 ms, facial expressions in the incongruent condition evoked a more positive late positive component than in the congruent condition. These results indicate that emotional voice exerts a cross-modal influence on multiple stages of facial expression recognition.
Continuous integration of information from multiple sensory inputs is essential in the daily life of human beings. However, the mechanisms underlying cross-modal interactions in stimulus processing have received insufficient attention, especially for stimuli carrying emotional significance. This study aimed to investigate the neural mechanism of the interaction between emotional voice and facial expression. The event-related potential (ERP) technique and a cross-modal priming paradigm were used to explore the influence of emotional voice on the recognition of facial expression. The materials consisted of 240 prime-target pairs, with voices as primes and facial expressions as targets. Neutral semantic words were spoken with happy or angry prosody and followed by congruous or incongruous facial expressions. Participants were asked to judge the consistency between the valence of the emotional voice and the facial expression while ERPs were recorded. Each trial began with a central fixation cross presented for 500 ms. Then, the priming stimulus (emotional voice) was presented through headphones. The central fixation cross remained on the screen until the target (facial expression) was presented; the inter-stimulus interval (ISI) was 1000 ms. The facial expression was presented for 500 ms, followed by a black screen for 2000-2200 ms. After the presentation of the facial expression, participants were instructed to indicate the consistency of valence between the emotional voice and the facial expression by pressing a mouse button as quickly and accurately as possible. The results were analyzed by repeated-measures ANOVA. The response time (RT) results showed that participants responded more quickly to congruous trials than to incongruous trials, suggesting a priming effect of emotional voice on the recognition of emotional facial expression.
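The trial procedure described above can be sketched as a simple event timeline. This is an illustrative reconstruction only: the abstract does not specify the duration of the voice prime, so it is left as a parameter here, and all event names are hypothetical labels rather than identifiers from the authors' materials.

```python
import random

def build_trial(voice_duration_ms: int, rng: random.Random) -> list[tuple[str, int]]:
    """Return the ordered (event, duration_ms) sequence for one trial,
    following the timing reported in the abstract."""
    return [
        ("fixation_cross", 500),                        # central fixation, 500 ms
        ("emotional_voice_prime", voice_duration_ms),   # auditory prime via headphones (duration assumed)
        ("isi_fixation", 1000),                         # fixation stays on screen during the 1000 ms ISI
        ("facial_expression_target", 500),              # visual target, 500 ms
        ("blank_screen", rng.randint(2000, 2200)),      # jittered inter-trial blank, 2000-2200 ms
    ]

rng = random.Random(0)
trial = build_trial(voice_duration_ms=800, rng=rng)  # 800 ms prime duration is an assumption
total_ms = sum(duration for _, duration in trial)
```

Representing each trial as data like this makes it easy to verify that the fixed durations and the jitter range match the reported design before running an experiment script.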
Analysis of the ERP waveforms indicated that emotional voice modulated the time course of facial expression processing. In the 70-130 ms and 220-450 ms time windows, facial expressions evoked more negative waveforms in incongruous trials than in congruous trials. In the 450-750 ms time window, facial expressions evoked a larger late positive component (LPC) in incongruous trials than in congruous trials. The ERP results suggested that emotional voice influenced the processing of emotional facial expression at the early perceptual stage, the emotional significance evaluation stage, and the subsequent decision-making stage. This study demonstrates that emotional voice can influence the processing of facial expression in a cross-modal manner and provides converging evidence for the interaction of multisensory inputs.