Transactions of the Chinese Society of Agricultural Engineering
2015, No. 16, pp. 206-212 (7 pages)
Feng Qingchun%Zhao Chunjiang%Wang Xiaonan%Wang Xiu%Gong Liang%Liu Chengliang
robots%image processing%measurements%cherry tomato%visual servo%toward-target measuring%laser ranging
To meet the practical needs of automated harvesting of fresh-market tomatoes and to realize automatic recognition and localization of cherry tomato fruit bunches, a toward-target measuring vision system was designed based on laser ranging and visual servo technology. By analyzing the color features of mature tomato bunch images, an R-G color-difference model was adopted to highlight the difference between target and background, and the bunch image region was located by column-wise statistics of the color-difference grey values. The CogPMAlignTool template-matching tool of the Cognex VisionPro image-processing class library was used to segment individual fruits within the bunch region. The spatial coordinates of the peripheral fruits were estimated while the picking manipulator was controlled by visual servoing, realizing toward-target localization measurement of the peripheral fruits; the length and width of the bunch were then calculated from their spatial coordinates, providing an operating basis for the picking end-effector. Test results showed that the average recognition rate of fruits within a bunch was 83.5%, the average visual targeting deviation was 8.38 pixels, the average error of bunch length measurement was 8.25 mm, and that of bunch width measurement was 5.25 mm. The results provide a reference for target recognition and localization in the automatic harvesting of bunch-type fruits.
As fresh cherry tomatoes, largely produced and consumed in China, have incurred rising manual-picking costs in recent years, automatic harvesting machines are expected to replace manual labor for this intensive work. Accurately identifying and locating mature fruit bunches is a key technique in harvesting-robot research. Existing research can be classified into active and passive detection methods, of which the active method has become the mainstream. In this paper, a new vision system for automatic cherry tomato harvesting was designed based on laser ranging and visual servoing. The system comprised a camera, a laser sensor, and a manipulator as the servo unit; the camera was fixed coaxially ahead of the laser sensor and could slide up and down, driven by a cylinder. When the camera slid down, the laser sensor was triggered to measure the distance between the fruit and the vision system. By analyzing the color features of the acquired image, the R-G color-difference model was adopted to intensify the difference between the target fruit and the background. Based on column-wise pixel grey statistics, the candidate fruit-bunch area was selected from the R-G image, reducing the image area to be processed and improving recognition accuracy. The CogPMAlignTool from the Cognex VisionPro image-processing class library was then used to identify individual fruits within the bunch, with the single-fruit template's scaling range set to (0.8, 1.2), the rotation angle range to (-π, π), and the acceptance threshold to 0.36. From the image coordinates of the peripheral fruits and the coordinate transformation between the camera and the manipulator, their spatial coordinates were estimated with the camera imaging model and taken as the initial position for visual-servo targeting of the fruit. The transformation matrix between the camera and the manipulator was determined through hand-eye calibration.
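The R-G color-difference model and column-wise grey statistics described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the threshold values and the function name `locate_bunch_region` are assumptions for illustration only.

```python
import numpy as np

def locate_bunch_region(bgr, diff_thresh=40):
    """Locate the candidate fruit-bunch columns in a BGR image.

    Uses the R-G colour difference to highlight ripe (red) fruit against
    the green background, then sums the thresholded difference column by
    column to find the columns most likely to contain the bunch.
    diff_thresh is an assumed tuning value, not taken from the paper.
    Returns (left_col, right_col) of the candidate region, or None.
    """
    g = bgr[:, :, 1].astype(int)
    r = bgr[:, :, 2].astype(int)
    # R-G difference: large for red fruit, near zero or negative for leaves.
    diff = np.clip(r - g, 0, 255).astype(np.uint8)
    mask = diff > diff_thresh
    # Column-wise statistics of the colour-difference grey values.
    col_count = mask.sum(axis=0)
    cols = np.nonzero(col_count > 0.05 * mask.shape[0])[0]
    if cols.size == 0:
        return None  # no candidate bunch region found
    return int(cols[0]), int(cols[-1])
```

Restricting the later template matching to this column band is what reduces the processed image area, as the abstract notes.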
According to the deviation between the fruit's center and the image center, the base joint and the forearm joint were controlled to change the camera's posture under the visual-servo algorithm, so that the two centers approximately coincided. After the fruit was aimed at, the laser sensor was triggered to measure the distance between the vision system and the fruit, and the accurate coordinates of the peripheral fruits were obtained from this distance and the manipulator's posture. The width and length of the bunch were then calculated from the measured coordinates of the four peripheral fruits; these parameters guide the robot's grasper to encase the bunch from the bottom up with the fruit bag and to cut the stem. Test results showed that the average identification rate of single fruits within a bunch was 83.5%; the rate improved when the bunch had a more regular shape or its stem was closer to the view center of the vision unit. Through visual-servo control aiming at the fruit center, the average deviation between the fruit center and the image center was 8.38 pixels. Finally, the measurement error of bunch length was 8.25 mm, and that of bunch width was 5.25 mm.
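The centering and measuring steps above can be illustrated with a simple proportional-control sketch: one servo iteration maps the pixel deviation to joint increments, and the bunch dimensions follow from the four peripheral-fruit coordinates. The gains, tolerance, joint mapping, and function names here are assumptions for illustration, not the paper's controller.

```python
import numpy as np

def servo_step(fruit_px, image_center, gain=0.002, tol_px=2.0):
    """One visual-servo iteration: map the pixel deviation between the
    fruit center and the image center to base/forearm joint increments.
    Returns (d_base, d_forearm, done); gain and tol_px are assumed."""
    du = fruit_px[0] - image_center[0]   # horizontal deviation -> base joint
    dv = fruit_px[1] - image_center[1]   # vertical deviation -> forearm joint
    if np.hypot(du, dv) <= tol_px:
        return 0.0, 0.0, True            # centers approximately coincide
    return -gain * du, -gain * dv, False

def bunch_size(periphery_xyz):
    """Bunch length and width from the 3-D coordinates of the four
    peripheral fruits (top, bottom, left, right), obtained after the
    laser ranging and manipulator-posture computation."""
    top, bottom, left, right = (np.asarray(p, float) for p in periphery_xyz)
    length = np.linalg.norm(top - bottom)
    width = np.linalg.norm(left - right)
    return length, width
```

The loop would repeat `servo_step` until `done` is set, then trigger the laser sensor; once all four peripheral fruits are measured, `bunch_size` yields the parameters passed to the grasper.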