COMPUTER TECHNOLOGY AND DEVELOPMENT
2015, No. 5, pp. 13-16 (4 pages)
image compression; single hidden layer feedforward neural network; extreme learning machine; Matlab simulation
With the advantages of parallel distributed processing, self-learning, self-adaptation, and strong robustness and fault tolerance, neural networks have been widely used in image compression and provide a new approach to it. The extreme learning machine is a single hidden layer feedforward neural network algorithm with faster learning speed and better generalization performance than traditional neural network algorithms. This paper proposes an image compression algorithm based on the extreme learning machine, which exploits its nonlinear mapping capability to encode and decode images. First, a single hidden layer feedforward neural network model for image compression is built by training sample data with the extreme learning machine; then the model is used to compress and reconstruct images. Simulation results show that, at the same compression ratio, the proposed algorithm achieves better reconstruction quality and faster learning speed than the BP neural network.
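As a rough illustration of the approach described in the abstract, the following is a minimal sketch assuming the standard ELM formulation (random, untrained input weights and biases; output weights solved analytically by least squares) applied as a block-based autoencoder. The paper's experiments are Matlab simulations; this NumPy version, along with the function name elm_compress_demo, the 4x4 block size, and the 8-neuron hidden layer, is only an assumed illustration, not the authors' implementation.

import numpy as np

def elm_compress_demo(image, block=4, hidden=8, seed=0):
    """Illustrative sketch of ELM-based block image compression (assumed setup)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # Split the image into non-overlapping block x block patches,
    # each flattened into a vector scaled to [0, 1].
    patches = (image[:h - h % block, :w - w % block]
               .reshape(h // block, block, w // block, block)
               .transpose(0, 2, 1, 3)
               .reshape(-1, block * block) / 255.0)

    # ELM encoder: random input weights and biases, never trained.
    W = rng.standard_normal((block * block, hidden))
    b = rng.standard_normal(hidden)
    H = 1.0 / (1.0 + np.exp(-(patches @ W + b)))   # sigmoid hidden layer

    # ELM decoder: output weights solved analytically (least squares);
    # targets are the original patches (autoencoder-style training).
    beta, *_ = np.linalg.lstsq(H, patches, rcond=None)

    # "Compressed" representation = hidden activations (hidden values per
    # patch instead of block*block pixels); reconstruction = H @ beta.
    rec = np.clip(H @ beta, 0.0, 1.0)
    rec = (rec.reshape(h // block, w // block, block, block)
              .transpose(0, 2, 1, 3)
              .reshape(h - h % block, w - w % block) * 255).astype(np.uint8)
    return H, beta, rec

# Example usage on a random 8-bit grayscale image:
if __name__ == "__main__":
    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    H, beta, rec = elm_compress_demo(img)
    print(H.shape, beta.shape, rec.shape)   # (256, 8) (8, 16) (64, 64)

In a scheme of this kind, compression comes from storing the hidden activations per patch plus the output weight matrix once for the whole image, and the trade-off between compression ratio and reconstruction quality is controlled by the hidden-layer size; the learning-speed advantage over a BP network comes from replacing iterative weight updates with a single least-squares solve.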