This video is about the experimental results of the following paper: Xieyuanli Chen, Huimin Lu, Junhao Xiao, Hui Zhang, Pan Wang. Robust relocalization based on active loop closure for real-time monocular SLAM. Proceedings of the 11th International Conference on Computer Vision Systems (ICVS), 2017.
Abstract. Remarkable performance has been achieved using state-of-the-art monocular Simultaneous Localization and Mapping (SLAM) algorithms. However, tracking failure remains a challenging problem during the monocular SLAM process, and it seems to be almost inevitable when carrying out long-term SLAM in large-scale environments. In this paper, we propose an active loop closure based relocalization system, which enables monocular SLAM to detect and recover from tracking failures automatically, even in previously unvisited areas where no keyframe exists. We evaluate our system through extensive experiments, including on the popular KITTI dataset and on our own dataset acquired by a hand-held camera in outdoor large-scale and indoor small-scale real-world environments, where artificial shakes and interruptions were added. The experimental results show that our system achieves the shortest recovery time (within 5 ms) and the longest success distance (up to 46 m) compared to other relocalization systems. Furthermore, our system is more robust than others, as it can handle different kinds of situations, i.e., tracking failures caused by blur, sudden motion and occlusion. Besides robots and autonomous vehicles, our system can also be employed in other applications, such as mobile phones and drones.
This video is about the experimental results of the following paper: Xieyuanli Chen, Hui Zhang, Huimin Lu, Junhao Xiao, Qihang Qiu and Yi Li. Robust SLAM system based on monocular vision and LiDAR for robotic urban search and rescue. Proceedings of the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017), Shanghai, 2017.
Abstract. In this paper, we propose a monocular SLAM system for robotic urban search and rescue (USAR). Based on it, most USAR tasks (e.g. localization, mapping, exploration and object recognition) can be fulfilled by rescue robots with only a single camera. The proposed system can be a promising basis for implementing fully autonomous rescue robots. However, the feature-based map built by the monocular SLAM is difficult for the operator to understand and use. We therefore combine the monocular SLAM with a 2D LiDAR SLAM to realize a 2D mapping and 6D localization SLAM system, which can not only obtain the real scale of the environment and make the map more user-friendly, but also solve the problem that the robot pose cannot be tracked by the 2D LiDAR SLAM when the robot climbs stairs and ramps. We test our system using a real rescue robot in simulated disaster environments. The experimental results show that good performance can be achieved using the proposed system in USAR scenarios. The system has also been successfully applied in the RoboCup Rescue Robot League (RRL) competitions, where our rescue robot team entered the top 5 and won the Best in Class Small Robot Mobility award at the 2016 RoboCup RRL in Leipzig, Germany, as well as the championships of the 2016 and 2017 RoboCup China Open RRL.
Real-time Object Segmentation for Soccer Robots Based on Depth Images
Qiu Cheng, Shuijun Yu, Qinghua Yu and Junhao Xiao
Object detection and localization is a critically important and challenging task in the RoboCup Middle Size League (MSL). It has a strong real-time constraint, as both the robot and the obstacles (also robots) move quickly. In this paper, a real-time object segmentation approach is proposed based on an RGB-D camera, in which only the range information is used. The method has four main steps, namely point cloud filtering, background point removal, clustering and object localization. Experimental results show that the proposed algorithm can effectively detect and segment objects in 3D space in real time.
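The four-step pipeline named in the abstract can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's implementation: the thresholds, the height-band background removal (standing in for whatever background model the paper uses), and the O(n^2) flood-fill clustering are all assumptions made for illustration.

```python
import numpy as np

def segment_objects(points, z_min=0.05, z_max=2.0, r_max=8.0,
                    cluster_tol=0.15, min_cluster=10):
    """Sketch of a depth-only segmentation pipeline (hypothetical
    parameters): 1) point cloud filtering, 2) background point
    removal, 3) clustering, 4) object localization by centroid."""
    # 1) Point cloud filtering: keep points within the sensor range.
    pts = points[np.linalg.norm(points, axis=1) < r_max]
    # 2) Background removal: drop floor and overhead points using a
    #    simple height band (a stand-in for plane removal).
    pts = pts[(pts[:, 2] > z_min) & (pts[:, 2] < z_max)]
    # 3) Euclidean clustering via connected-component flood fill:
    #    points closer than cluster_tol belong to the same object.
    n = len(pts)
    labels = -np.ones(n, dtype=int)
    n_clusters = 0
    for i in range(n):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = n_clusters
        while stack:
            j = stack.pop()
            d = np.linalg.norm(pts - pts[j], axis=1)
            for k in np.where((d < cluster_tol) & (labels == -1))[0]:
                labels[k] = n_clusters
                stack.append(k)
        n_clusters += 1
    # 4) Object localization: centroid of each sufficiently large cluster.
    return [pts[labels == c].mean(axis=0)
            for c in range(n_clusters)
            if (labels == c).sum() >= min_cluster]
```

For example, a synthetic cloud containing two compact blobs above a flat floor yields two centroids, one per object; a real system would replace the height band with proper ground-plane fitting and the O(n^2) search with a kd-tree.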
The video can be found here if the link below does not work.