电子与信息学报
JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY
2014, No. 6, pp. 1355-1361 (7 pages)
Keywords: Compressive sensing; Sparse signal reconstruction; Iterative reweighting; Sparse Bayesian learning; Automatic parameter adjustment
Abstract: The regularization parameter in a sparse representation model is jointly determined by the unknown noise level and the sparsity level, and its setting directly affects reconstruction performance. Existing optimization algorithms for sparse representation, however, set this parameter subjectively, from related prior information, or by experiment, and cannot adjust it adaptively. To address this problem, this paper proposes a sparse Bayesian learning algorithm that adjusts the parameter automatically without prior information. The parameters of the model are first modeled probabilistically; then, within the Bayesian learning framework, parameter setting and sparse recovery are reformulated as a sequence of convex optimization problems, each the sum of a mixed L1 norm and a weighted L2 norm, and both the parameter values and the solution are obtained by iterative optimization. Theoretical derivation and simulations show that, when the ideal parameter is known, the proposed algorithm performs comparably to, and sometimes better than, other iterative reweighted algorithms that do not set the parameter automatically; when the ideal parameter is unknown, its reconstruction performance is significantly better than that of the other algorithms.
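The iterative optimization of a mixed L1 norm with weighted L2 terms described in the abstract can be illustrated by a generic iterative-reweighted-least-squares (IRLS) scheme for L1-regularized sparse recovery. This is a minimal sketch of that standard technique, not the paper's exact algorithm; the regularization weight `lam`, the smoothing constant `eps`, and all function names are illustrative assumptions.

```python
import numpy as np

def irls_sparse_recovery(A, y, lam=1e-3, eps=1e-6, n_iter=50):
    """Sketch of iterative reweighted least squares for sparse recovery.

    Each iteration solves the weighted ridge problem
        x = argmin ||y - A x||^2 + lam * sum_i x_i^2 / (|x_prev_i| + eps),
    whose quadratic penalty majorizes the L1 norm at the previous
    iterate: small entries receive large weights and are shrunk toward
    zero, which promotes sparsity.
    """
    # minimum-norm least-squares initialization
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        w = 1.0 / (np.abs(x) + eps)  # reweighting from current estimate
        # closed-form solution of the weighted ridge subproblem
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
    return x

# toy demo: recover a 3-sparse vector from 25 random measurements
rng = np.random.default_rng(0)
n, m, k = 50, 25, 3
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true
x_hat = irls_sparse_recovery(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

In the paper's method the weight `lam` is itself learned within the Bayesian framework; here it is fixed by hand, which is exactly the manual tuning the proposed algorithm avoids.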