西北工业大学学报
JOURNAL OF NORTHWESTERN POLYTECHNICAL UNIVERSITY
2014, Issue 3, pp. 362-367 (6 pages)
acoustic noise; algorithms; decision making; feature extraction; support vector machines; underwater acoustics; instance selection; target recognition; weighted immune clone instance selection algorithm; weighted reduced nearest neighbor
Because the training instance set for recognizing underwater acoustic targets contains many noise samples, redundant samples and irrelevant samples, and because the systems for feature extraction, feature selection and decision making are designed separately, underwater acoustic target recognition performance declines. Hence we propose an SVM ensemble based on weighted reduced nearest neighbor instance selection (SVME-WRNN) and an SVM ensemble based on the weighted immune clone instance selection algorithm (SVME-WICISA). Both ensembles use instance selection to build accurate and diverse sub-classifiers and then combine them. We simulate the classification of measured data from four types of underwater acoustic targets. The simulation results, given in Figs. 3, 4 and 5 and Table 3, and their analysis show preliminarily that, compared with the SVME without instance selection, the two ensembles greatly reduce the number of training instances while achieving comparable recognition rates, and that the combined classifier attains satisfactory classification accuracy.
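The following is a minimal sketch of the general idea the abstract describes: train several SVM sub-classifiers, each on a reduced instance subset, and combine them by voting. It is not the paper's SVME-WRNN or SVME-WICISA; the selection step below is a placeholder random per-class subsample standing in for the weighted reduced nearest neighbor or weighted immune clone selection, and the scikit-learn classifier, voting rule, and parameters are illustrative assumptions only.

```python
# Illustrative sketch of instance selection + SVM ensemble (assumed details,
# not the paper's exact algorithms).
import numpy as np
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score


def select_instances(X, y, keep_ratio=0.4, rng=None):
    """Placeholder instance selection: keep a random subset per class.

    The paper replaces this step with WRNN or WICISA; a random per-class
    subsample is used here only to show where selection plugs in.
    """
    if rng is None:
        rng = np.random.default_rng()
    keep_idx = []
    for label in np.unique(y):
        cls_idx = np.flatnonzero(y == label)
        n_keep = max(1, int(keep_ratio * cls_idx.size))
        keep_idx.append(rng.choice(cls_idx, size=n_keep, replace=False))
    keep_idx = np.concatenate(keep_idx)
    return X[keep_idx], y[keep_idx]


def train_svm_ensemble(X, y, n_members=5, seed=0):
    """Train several SVMs, each on its own selected instance subset."""
    members = []
    for m in range(n_members):
        rng = np.random.default_rng(seed + m)
        X_sel, y_sel = select_instances(X, y, keep_ratio=0.4, rng=rng)
        members.append(SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_sel, y_sel))
    return members


def ensemble_predict(members, X):
    """Combine member predictions by simple majority vote."""
    votes = np.stack([clf.predict(X) for clf in members], axis=0)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)


if __name__ == "__main__":
    # Synthetic stand-in for 4-class underwater target feature data.
    X, y = make_classification(n_samples=800, n_features=20, n_informative=10,
                               n_classes=4, n_clusters_per_class=1, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    members = train_svm_ensemble(X_tr, y_tr)
    print("ensemble accuracy:", accuracy_score(y_te, ensemble_predict(members, X_te)))
```

Because each member sees a different reduced training subset, the sub-classifiers differ while still being trained on representative instances, which is the accuracy-plus-diversity trade-off the abstract attributes to instance-selection-based ensembles.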