计算机学报 (Chinese Journal of Computers)
2010, Issue 2, pp. 279-287 (9 pages)
邓万宇 (Deng Wanyu)%郑庆华 (Zheng Qinghua)%陈琳 (Chen Lin)%许学斌 (Xu Xuebin)
极速学习机%正则极速学习机%支持向量机%结构风险%神经网络%最小二乘
extreme learning machine%regularized extreme learning machine%support vector machine%structural risk%neural network%least square
Single-hidden Layer Feedforward Neural networks (SLFNs) have been widely applied in many fields, including pattern recognition, automatic control, and data mining. However, traditional learning methods cannot meet practical speed requirements, which has become the main bottleneck to their development, for two main reasons. First, the traditional error Back Propagation (BP) method is based on gradient descent and requires many iterations. Second, all of the network parameters must be determined iteratively during training. The computational cost and search space of such algorithms are therefore very large. To address these problems, motivated by ELM's one-shot learning idea, this paper proposes a fast learning algorithm called the Regularized Extreme Learning Machine (RELM), based on structural risk minimization and weighted least squares. The algorithm avoids repeated iterations and local minima, and offers better generalization, robustness, and controllability than the original ELM. Experimental results show that RELM's overall performance is also better than that of BP and SVM.
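The core idea the abstract describes — random, fixed hidden-layer weights plus a closed-form, ridge-regularized solve for the output weights — can be sketched as follows. This is a minimal illustration, not the paper's exact method: the function names, the tanh activation, and the parameter `C` (regularization trade-off) are assumptions, and the paper's weighted least-squares variant is not shown.

```python
import numpy as np

def relm_train(X, T, n_hidden=50, C=1.0, rng=None):
    """Sketch of a Regularized Extreme Learning Machine (RELM).

    The hidden-layer input weights W and biases b are drawn at random
    and never updated (ELM's one-shot learning idea); only the output
    weights beta are computed, in closed form, with a ridge term
    (the structural-risk component) controlled by C.
    """
    rng = np.random.default_rng(rng)
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))  # random input weights
    b = rng.standard_normal(n_hidden)                # random biases
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Regularized least squares: beta = (H^T H + I/C)^{-1} H^T T
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ T)
    return W, b, beta

def relm_predict(X, W, b, beta):
    """Forward pass: hidden activations times the learned output weights."""
    return np.tanh(X @ W + b) @ beta
```

Because training reduces to one linear solve, there is no gradient-descent loop and hence no local minima; the ridge term `I/C` trades empirical risk against model complexity, which is what gives RELM its controllability.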