软件学报
Journal of Software
2014, No. 9, pp. 2149-2159 (11 pages)
kernel method; support vector learning; model selection; parameter tuning; SUMT (sequential unconstrained minimization technique)
Model selection is a key issue in support vector learning. Existing model selection methods adopt a nested two-layer optimization framework, in which the inner layer performs support vector learning and the outer layer conducts model selection by minimizing an estimate of the generalization error. This framework is procedurally complex and computationally inefficient. Simplifying the traditional two-layer framework, this paper proposes an approach that simultaneously tunes multiple parameters of support vector learning, integrating model selection and learner training into a single optimization process. First, the parameters and hyperparameters involved in support vector learning are combined into one parameter vector, and the constrained optimization problems of support vector classification (SVC) and support vector regression (SVR) are rewritten with the sequential unconstrained minimization technique (SUMT), which yields the multivariate unconstrained formulation of the simultaneous multiple-parameter tuning model. Next, the local Lipschitz continuity of the objective function of the simultaneous tuning model and the boundedness of its level sets are proved. On this basis, a simultaneous tuning algorithm is designed and implemented with the variable metric method (VMM). Further, based on the properties of the simultaneous tuning model, the convergence of the algorithm is proved, and its complexity is analyzed in comparison with related approaches. Finally, experiments verify the convergence of the simultaneous tuning algorithm and compare its effectiveness against existing approaches; the empirical evaluation on benchmark datasets shows that the proposed approach has lower running-time complexity while exhibiting comparable predictive performance. Theoretical proofs and experimental results demonstrate that the simultaneous tuning approach is a sound and efficient model selection approach for support vector learning.
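The SUMT step named in the abstract can be illustrated concretely. The technique replaces a constrained problem min f(theta) s.t. g_i(theta) <= 0 with a sequence of unconstrained problems min f(theta) + r_k * sum_i max(0, g_i(theta))^2, where the penalty weight r_k grows between rounds and each round is solved by an unconstrained optimizer such as a variable metric (quasi-Newton) method. The following Python lines are a minimal sketch of this generic penalty loop on a toy problem; they illustrate the technique, not the paper's actual tuning model, the function names are hypothetical, and scipy's BFGS stands in for the variable metric method.

import numpy as np
from scipy.optimize import minimize

def sumt(f, constraints, theta0, r0=1.0, growth=10.0, n_rounds=6):
    """Sequential unconstrained minimization: repeatedly minimize
    f(theta) + r_k * sum_i max(0, g_i(theta))**2 with growing r_k.
    Each g_i encodes a constraint g_i(theta) <= 0."""
    theta, r = np.asarray(theta0, dtype=float), r0
    for _ in range(n_rounds):
        def penalized(t, r=r):  # bind the current penalty weight
            penalty = sum(max(0.0, g(t)) ** 2 for g in constraints)
            return f(t) + r * penalty
        # BFGS is a classical variable metric (quasi-Newton) method.
        theta = minimize(penalized, theta, method="BFGS").x
        r *= growth  # tighten the penalty before the next round
    return theta

# Toy usage: minimize (x - 2)^2 + (y - 1)^2 subject to x + y <= 2.
f = lambda t: (t[0] - 2.0) ** 2 + (t[1] - 1.0) ** 2
g = [lambda t: t[0] + t[1] - 2.0]
print(sumt(f, g, theta0=[0.0, 0.0]))  # approaches the optimum (1.5, 0.5)

On this toy problem the iterates move from the unconstrained minimizer (2, 1) toward the constrained optimum (1.5, 0.5) as r_k grows, mirroring how SUMT drives the penalized solutions toward feasibility while a single unconstrained optimization runs at each round.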