Journal of Software (软件学报)
2013, No. 11, pp. 2584-2596 (13 pages)
Cao Ying, Miao Qiguang, Liu Jiachen, Gao Lin (曹莹, 苗启广, 刘家辰, 高琳)
Keywords: cost-sensitive learning; Bayes decision; Fisher consistency; AdaBoost; binary classification
Abstract: AdaBoost is an important meta-algorithm for ensemble learning. Its core theoretical property, "Boosting", also plays an important role in cost-sensitive learning. However, existing cost-sensitive Boosting algorithms, such as AdaCost, the AdaC series (AdaC1, AdaC2, AdaC3), and the CSB series (CSB0, CSB1, CSB2), are merely heuristic: they insert cost parameters into AdaBoost's voting-weight formula or its sample-weight update rule so as to force the algorithm to focus on samples with higher misclassification costs. These heuristic modifications have no theoretical justification and, worse, they break AdaBoost's most important property, Boosting itself. Whereas AdaBoost converges to the optimal Bayes decision rule, these cost-sensitive variants do not converge to the cost-sensitive Bayes decision rule. This paper studies the design of cost-sensitive Boosting algorithms strictly within the Boosting theoretical framework. First, two new loss functions are constructed by making the exponential loss and the logit loss on the classification margin cost-sensitive; the new loss functions are proved to be Fisher consistent in the cost-sensitive setting, so that, ideally, optimizing them ultimately yields the cost-sensitive Bayes decision rule. Second, performing gradient descent in function space on these two losses within the Boosting framework produces two new cost-sensitive Boosting algorithms, AsyB and AsyBL. Experiments on two-dimensional synthetic Gaussian data show that, compared with existing cost-sensitive Boosting algorithms, AsyB and AsyBL consistently better approximate the cost-sensitive Bayes decision rule. Experiments on UCI datasets further confirm that AsyB and AsyBL produce cost-sensitive classifiers with lower misclassification costs, and that the misclassification cost decreases exponentially with the number of iterations.
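For context, one standard way to make the exponential loss cost-sensitive, as in the asymmetric-boosting literature (the exact loss form used for AsyB may differ), is sketched below; it shows why minimizing such a loss recovers the cost-sensitive Bayes rule:

```latex
% Cost-sensitive exponential loss on the margin y F(x), where
% C_1 = cost of misclassifying a positive, C_2 = cost of misclassifying a negative:
L\bigl(y, F(x)\bigr) = \mathbb{I}(y = +1)\, e^{-C_1 F(x)} + \mathbb{I}(y = -1)\, e^{C_2 F(x)}

% Setting the derivative of the conditional risk to zero gives the
% pointwise minimizer:
F^{*}(x) = \frac{1}{C_1 + C_2}\,
           \ln \frac{C_1\, P(y = +1 \mid x)}{C_2\, P(y = -1 \mid x)}

% Hence sign(F^*(x)) = +1  iff  C_1 P(y=+1|x) > C_2 P(y=-1|x),
% which is exactly the cost-sensitive Bayes decision rule.
```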
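The functional gradient descent procedure described in the abstract can be illustrated with a minimal stagewise sketch that greedily reduces a cost-weighted exponential loss using decision stumps. This is an assumption-laden reconstruction, not the authors' AsyB: the loss form, the stump learner, and the names `asy_boost`, `fit_stump`, and `predict` are all hypothetical.

```python
import numpy as np

def fit_stump(X, y, w):
    """Weighted decision stump: pick (feature, threshold, polarity)
    minimizing the weighted 0/1 error on labels y in {-1, +1}."""
    n, d = X.shape
    best = (0, 0.0, 1, np.inf)  # (feature, threshold, polarity, error)
    for j in range(d):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def asy_boost(X, y, cost_pos, cost_neg, n_rounds=50):
    """Stagewise additive minimization of the cost-weighted exponential
    loss  sum_i c_{y_i} * exp(-y_i F(x_i)),  with c depending on the class.
    Each round reweights samples by the loss gradient and fits a stump."""
    n = len(y)
    c = np.where(y == 1, cost_pos, cost_neg)   # per-sample misclassification cost
    F = np.zeros(n)                            # current ensemble scores
    stumps = []
    for _ in range(n_rounds):
        w = c * np.exp(-y * F)                 # cost-weighted exponential weights
        w = w / w.sum()
        j, thr, pol, err = fit_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)  # clip to keep alpha finite
        alpha = 0.5 * np.log((1 - err) / err)  # AdaBoost-style step size
        pred = np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
        F += alpha * pred
        stumps.append((j, thr, pol, alpha))
    return stumps

def predict(stumps, X):
    """Sign of the weighted vote of all stumps."""
    F = np.zeros(len(X))
    for j, thr, pol, alpha in stumps:
        F += alpha * np.where(pol * (X[:, j] - thr) >= 0, 1, -1)
    return np.where(F >= 0, 1, -1)
```

With `cost_pos > cost_neg`, positives receive larger weights whenever they are misclassified, so the decision boundary shifts to protect the expensive class; setting both costs to 1 recovers plain discrete AdaBoost with stumps.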