JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY
2015, Issue 3, pp. 708-714 (7 pages)
Keywords: Wireless Sensor Network (WSN); Support Vector Machine (SVM); Distributed learning; Augmented Lagrange multiplier method; Average consensus
In Wireless Sensor Networks (WSNs), transferring the training samples scattered across nodes to a fusion center for centralized Support Vector Machine (SVM) training incurs high communication overhead and energy consumption. This paper therefore studies in-network distributed training of a linear SVM that relies only on collaboration between neighboring nodes. First, the centralized linear SVM problem is recast as a set of coupled decentralized convex optimization sub-problems with consensus constraints on the classifier parameters. The decomposed problem is then solved with the augmented Lagrange multiplier method, yielding a distributed training algorithm called Average Consensus based Distributed Support Vector Machine (AC-DSVM). To reduce the communication cost of global average consensus, an improved algorithm, named Once Average Consensus based Distributed Support Vector Machine (1-AC-DSVM), is presented; it approximates global average consensus by local average consensus among neighboring nodes and requires only a single global average consensus. Simulation results show that, compared with existing algorithms, AC-DSVM requires slightly more iterations and data traffic but converges exactly to the centralized training result, while 1-AC-DSVM not only converges well but also offers significant advantages in convergence speed and data traffic.
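The consensus-constrained decomposition and augmented-Lagrangian solve described in the abstract follow the general pattern of consensus-based distributed optimization. The sketch below illustrates that pattern, not the paper's exact algorithm: the node count, toy data, penalty parameter `rho`, and the subgradient inner solver are all illustrative assumptions. Each node minimizes its local hinge loss plus a quadratic penalty tying its classifier `w_i` to the network average `z`, and the averaging step plays the role of the consensus round.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: linearly separable toy data split across 4 "nodes".
def make_data(n):
    X = rng.normal(size=(n, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
    return np.hstack([X, np.ones((n, 1))]), y   # append a bias feature

nodes = [make_data(50) for _ in range(4)]

def local_update(w, X, y, z, u, rho=1.0, C=1.0, lr=0.01, steps=50):
    # Subgradient descent on the local augmented-Lagrangian objective:
    #   C * sum_i hinge(y_i, x_i, w) + (rho/2) * ||w - z + u||^2
    for _ in range(steps):
        margins = y * (X @ w)
        viol = margins < 1                       # points violating the margin
        g = -C * (y[viol, None] * X[viol]).sum(axis=0) + rho * (w - z + u)
        w = w - lr * g
    return w

d = 3
ws = [np.zeros(d) for _ in nodes]                # per-node classifiers
us = [np.zeros(d) for _ in nodes]                # per-node dual variables
z = np.zeros(d)                                  # consensus variable
rho = 1.0
for _ in range(30):                              # outer consensus iterations
    ws = [local_update(w, X, y, z, u, rho)
          for (X, y), w, u in zip(nodes, ws, us)]
    z = np.mean([w + u for w, u in zip(ws, us)], axis=0)   # average-consensus step
    us = [u + w - z for w, u in zip(ws, us)]               # dual update

# Evaluate the consensus classifier on all data pooled together.
X_all = np.vstack([X for X, _ in nodes])
y_all = np.concatenate([y for _, y in nodes])
acc = np.mean(np.sign(X_all @ z) == y_all)
print(acc)
```

In a real WSN deployment the averaging step would itself be computed by iterative exchanges between neighboring nodes rather than by a central mean, which is exactly the cost that 1-AC-DSVM reduces by running that consensus only once.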