TY - JOUR
T1 - Robust 3D model reconstruction based on continuous point cloud for autonomous vehicles
AU - Gao, Hongwei
AU - Yu, Jiahui
AU - Sun, Jian
AU - Yang, Wei
AU - Jiang, Yueqiu
AU - Zhu, Lei
AU - Ju, Zhaojie
N1 - Funding Information:
The authors would like to acknowledge support from the following projects: LiaoNing Province Higher Education Innovative Talents Program Support Project (Grant No. LR2019058); LiaoNing Province Joint Open Fund for Key Scientific and Technological Innovation Bases; LiaoNing Revitalization Talents Program (Grant No. XLYC1902095); Shenyang Institute of Automation, State Key Laboratory of Robotics Foundation (Liaoning Province Key Technology Innovation Base Joint Open Fund); National Natural Science Foundation of China (Grant Nos. 52075530, 51575412, 51575338, U1609218 and 51575407); CAS Inter-disciplinary Innovation Team (Grant No. JCTD-2018-11); AiBle project co-financed by the European Regional Development Fund.
Publisher Copyright:
© 2021 M Y U Scientific Publishing Division. All rights reserved.
PY - 2021/9/16
Y1 - 2021/9/16
N2 - Continuous point cloud stitching can reconstruct a 3D model and therefore plays an essential role in autonomous vehicles. However, most existing methods are based on binocular stereo vision, which increases space and material costs, and these systems also suffer from poor matching accuracy and speed. In this paper, a novel point cloud stitching method based on a monocular vision system is proposed to solve these problems. First, calibration and parameter acquisition based on monocular vision are presented. Next, the region-growing algorithm used in sparse and dense matching is redesigned to improve the matching density. Finally, an Iterative Closest Point (ICP)-based splicing method is proposed for monocular zoom stereo vision, in which the point cloud data are spliced using the rotation matrix and translation vector obtained during matching. In the experiments, the proposed method is evaluated on two datasets: a self-collected dataset and a public dataset. The results show that the proposed method achieves higher matching accuracy than binocular-based systems and also outperforms other recent approaches. In addition, the 3D model generated using this method has a wider viewing angle, a more precise outline, and more distinct layers than those produced by state-of-the-art algorithms.
AB - Continuous point cloud stitching can reconstruct a 3D model and therefore plays an essential role in autonomous vehicles. However, most existing methods are based on binocular stereo vision, which increases space and material costs, and these systems also suffer from poor matching accuracy and speed. In this paper, a novel point cloud stitching method based on a monocular vision system is proposed to solve these problems. First, calibration and parameter acquisition based on monocular vision are presented. Next, the region-growing algorithm used in sparse and dense matching is redesigned to improve the matching density. Finally, an Iterative Closest Point (ICP)-based splicing method is proposed for monocular zoom stereo vision, in which the point cloud data are spliced using the rotation matrix and translation vector obtained during matching. In the experiments, the proposed method is evaluated on two datasets: a self-collected dataset and a public dataset. The results show that the proposed method achieves higher matching accuracy than binocular-based systems and also outperforms other recent approaches. In addition, the 3D model generated using this method has a wider viewing angle, a more precise outline, and more distinct layers than those produced by state-of-the-art algorithms.
KW - dense 3D point cloud
KW - match optimization
KW - monocular zoom stereo vision
KW - region growing
UR - http://www.scopus.com/inward/record.url?scp=85116354359&partnerID=8YFLogxK
U2 - 10.18494/SAM.2021.3231
DO - 10.18494/SAM.2021.3231
M3 - Article
AN - SCOPUS:85116354359
SN - 0914-4935
VL - 33
SP - 3169
EP - 3186
JO - Sensors and Materials
JF - Sensors and Materials
IS - 9
ER -