This video presents the experimental results of the following paper: Yi Li, Chenggang Xie, Huimin Lu, Xieyuanli Chen, Junhao Xiao and Hui Zhang. Scale-aware Monocular SLAM Based on Convolutional Neural Network. Proceedings of the 15th IEEE International Conference on Information and Automation (ICIA 2018), Mount Wuyi, 2018.
Abstract—Remarkable performance has been achieved by state-of-the-art monocular Simultaneous Localization and Mapping (SLAM) algorithms. However, due to the scale ambiguity inherent in monocular vision, existing monocular SLAM systems cannot directly recover the absolute scale in unknown environments. Motivated by the impressive results of Convolutional Neural Networks (CNNs) in depth estimation, we propose a CNN-based monocular SLAM that naturally combines CNN-predicted depth maps with monocular ORB-SLAM, overcoming the scale ambiguity limitation of monocular SLAM. We evaluate our method on the popular KITTI odometry benchmark, and the experimental results show average translational and rotational errors of 2.00% and 0.0051°/m, respectively. In addition, our approach works well under pure rotational motion, demonstrating the robustness and high accuracy of the proposed algorithm.
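The abstract does not give implementation details, but the core idea of recovering absolute scale by aligning up-to-scale monocular SLAM depths to CNN-predicted metric depths can be sketched as follows. This is only an illustrative sketch: the function name, the use of a robust median-ratio estimator, and the toy data are all assumptions, not the paper's actual method.

```python
import numpy as np

def estimate_scale(slam_depths, cnn_depths):
    """Estimate one global scale factor that aligns up-to-scale SLAM
    depths to metric CNN-predicted depths at the same pixels, using
    the median of per-point ratios (robust to depth outliers)."""
    slam_depths = np.asarray(slam_depths, dtype=float)
    cnn_depths = np.asarray(cnn_depths, dtype=float)
    # keep only points with valid (positive) depth in both sources
    valid = (slam_depths > 0) & (cnn_depths > 0)
    return float(np.median(cnn_depths[valid] / slam_depths[valid]))

# Toy example: monocular SLAM depths equal metric depths divided by
# an unknown scale (here 4.0, hypothetical values for illustration).
metric_depths = np.array([2.0, 5.0, 10.0, 3.5])
slam_depths = metric_depths / 4.0

scale = estimate_scale(slam_depths, metric_depths)
rescaled = slam_depths * scale  # map points now in absolute units
```

In a full system one would apply such a factor to the map points and camera trajectory; the median keeps a few badly predicted depths from corrupting the estimate.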