Transactions of the Chinese Society of Agricultural Engineering
2015, No. 7, pp. 173-179 (7 pages)
Ji Changying, Shen Ziyao, Gu Baoxing, Tian Guangzhao, Zhang Jie
Keywords: robots; algorithms; computer vision; obstacle detection; point clouds; density grid
To meet the demand for obstacle detection in the path planning of intelligent agricultural robots, and to address the limitations of traditional obstacle detection algorithms in binocular stereo vision, a method for detecting obstacle distance and size based on point clouds was proposed. The method takes as its object the point cloud obtained by stereo matching in binocular vision. By setting a validity box and computing point cloud density statistics over different regions, a curve describing the decay of point cloud density with distance is obtained. Because of limited camera resolution, the point cloud density of distant obstacles decreases with distance, so it is compensated by a density compensation algorithm; after a second validity box is applied, the obstacle position is locked, and the target point cloud is projected onto a top-view grid map and a front view, respectively, to obtain its distance and size information. Experiments showed that the method can effectively restore obstacle information, with a maximum ranging distance of 28 m and an average error of 2.43%; the maximum range for size detection is 10 m, with average errors in length and height both below 3%. Based on a gridded representation of the point cloud and a density compensation algorithm, this paper obtains obstacle distance and size by projecting the point cloud within a validity box; accuracy and distance tests in different environments verified the method's reliability and robustness.
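The density-grid step described above can be sketched roughly as follows. This is a minimal Python/NumPy illustration, not the authors' Matlab implementation; the function names, grid limits, cell size, and the power-law form of the decay fit are all assumptions made for the example, with the point cloud given as an N×3 array in camera coordinates (x right, y down, z forward).

import numpy as np

def topview_density_grid(points, x_range=(-5.0, 5.0), z_range=(0.5, 30.0), cell=0.25):
    """Count points per cell of a top-view (x-z) grid; the count serves as point cloud density."""
    x, z = points[:, 0], points[:, 2]
    # Validity box: keep only points inside the region of interest (assumed limits).
    keep = (x >= x_range[0]) & (x < x_range[1]) & (z >= z_range[0]) & (z < z_range[1])
    x, z = x[keep], z[keep]
    nx = int((x_range[1] - x_range[0]) / cell)
    nz = int((z_range[1] - z_range[0]) / cell)
    ix = ((x - x_range[0]) / cell).astype(int)
    iz = ((z - z_range[0]) / cell).astype(int)
    grid = np.zeros((nz, nx), dtype=np.int32)
    np.add.at(grid, (iz, ix), 1)
    return grid

def fit_density_decay(grid, z_range=(0.5, 30.0), cell=0.25):
    """Fit mean occupied-cell density vs. distance with a power law d(z) = a * z**b (assumed form)."""
    mean_density = np.array([row[row > 0].mean() if (row > 0).any() else np.nan for row in grid])
    z = z_range[0] + (np.arange(grid.shape[0]) + 0.5) * cell
    ok = np.isfinite(mean_density) & (mean_density > 0)
    b, log_a = np.polyfit(np.log(z[ok]), np.log(mean_density[ok]), 1)
    return np.exp(log_a), b   # density(z) ≈ a * z**b, with b < 0 in practice

Reducing the 3-D cloud to a 2-D count grid in this way is what makes the later compensation a simple per-row scaling of the grid.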
With the development and wide application of intelligent navigation, various sensors are used to ensure safety in the vicinity of an intelligent robot while it is moving. Among them, stereo cameras have become prevalent in recent years because of their capability of distance measurement. To overcome the limitations of traditional detection methods in stereo vision, a method based on point clouds was presented to meet the demand for obstacle detection during robot path planning. The point cloud of an environment containing an obstacle was taken as the object, and a validity box was applied to the space to eliminate points in irrelevant regions. From a top view, the area was divided into grids, and the number of points in each grid cell was taken as the point cloud density. Once the point cloud density at different ranges was calculated, a curve fitting the decrease of density with range was obtained. It indicated that obstacles far from the camera produce fewer points than close ones, which can lead to mismatches and missed recognition of distant obstacles. To compensate for the sparse point clouds of obstacles at long range, the density was compensated according to the fitted density-range curve, so that obstacles could be recognized by setting a threshold and their distance measured. The specific space occupied by an obstacle was confirmed by setting a second validity box, and its shape and size were then measured by projecting the obstacle points onto a front view. Experiments were conducted on the campus of Nanjing Agricultural University, Nanjing, Jiangsu Province, and the algorithm was implemented in Matlab. Experimental results showed that this method could effectively restore the obstacle information in the point cloud; in the distance measurement tests it achieved a maximum detection range of 28 m with an average error of 2.43%. Experiments under various environments and weather conditions indicated robust performance under changing illumination. In the size measurement tests, the maximum range was 10 m, and the average errors in length and height were 2.59% and 2.01%, respectively. In summary, this article measured both the distance and the size of obstacles near the camera based on the density map of a point cloud and a density compensation algorithm. Unlike conventional image processing methods, it converted three-dimensional point cloud data into a two-dimensional density grid map, which significantly reduced the amount of computation. It also functioned well under different weather conditions, both indoors and outdoors, showing better robustness than traditional methods that separate obstacles from the background by image processing. Some deficiencies remain: the current method and programming platform are still too time-consuming to meet the demands of real-time detection, but the approach appears promising for future study in this field.
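As a rough illustration of how the compensation, thresholding, and projection steps described in the abstract could fit together, the sketch below continues the hypothetical functions given earlier: it scales each distance band of the top-view grid by the inverse of the fitted decay, thresholds the compensated grid to locate obstacle cells, applies a second validity box around them, and measures distance, length, and height from the selected points. The threshold value, grid parameters, and coordinate conventions are assumptions for illustration, not values taken from the paper.

import numpy as np

def detect_obstacle(points, grid, a, b, x_range=(-5.0, 5.0), z_range=(0.5, 30.0),
                    cell=0.25, thresh=40.0):
    """Compensate density for distance, threshold to locate the obstacle, then measure it.

    `grid`, `a`, `b` are assumed to come from topview_density_grid / fit_density_decay above.
    """
    z_centers = z_range[0] + (np.arange(grid.shape[0]) + 0.5) * cell
    expected = a * z_centers ** b                       # fitted density at each distance band
    # Density compensation: boost distant rows so far and near obstacles become comparable.
    gain = expected[0] / expected
    compensated = grid * gain[:, None]
    obstacle_cells = np.argwhere(compensated > thresh)  # assumed detection threshold
    if obstacle_cells.size == 0:
        return None
    iz, ix = obstacle_cells[:, 0], obstacle_cells[:, 1]
    # Second validity box: keep only the 3-D points inside the obstacle's x-z footprint.
    z_lo, z_hi = z_range[0] + iz.min() * cell, z_range[0] + (iz.max() + 1) * cell
    x_lo, x_hi = x_range[0] + ix.min() * cell, x_range[0] + (ix.max() + 1) * cell
    m = ((points[:, 2] >= z_lo) & (points[:, 2] < z_hi) &
         (points[:, 0] >= x_lo) & (points[:, 0] < x_hi))
    obj = points[m]
    distance = obj[:, 2].min()                          # range to the nearest obstacle surface
    length = obj[:, 0].max() - obj[:, 0].min()          # horizontal extent in the front view
    height = obj[:, 1].max() - obj[:, 1].min()          # vertical extent in the front view
    return distance, length, height

Because the compensation is only a per-row gain on the count grid, the thresholding and footprint search stay two-dimensional; only the final size measurement returns to the 3-D points, which mirrors the low computational cost claimed in the abstract.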