This video is the accompanying video for the following paper: Huimin Lu, Junhao Xiao, Lilian Zhang, Shaowu Yang, Andreas Zell. Biologically Inspired Visual Odometry Based on the Computational Model of Grid Cells for Mobile Robots. Proceedings of the 2016 IEEE International Conference on Robotics and Biomimetics, 2016.
Abstract: Visual odometry is a core component of many visual navigation systems, such as visual simultaneous localization and mapping (SLAM). Grid cells have been found to be part of the path integration system in the rat's entorhinal cortex, and they provide inputs for place cells in the rat's hippocampus. Together with other cells, they constitute a positioning system in the brain. Computational models of grid cells based on continuous attractor networks have been proposed in the computational biology community, and with these models self-motion information can be integrated to realize dead-reckoning. However, so far few researchers have tried to use these computational models of grid cells directly for robot visual navigation. In this paper, we propose to apply a continuous attractor network model of grid cells to integrate the robot's motion information estimated by the vision system, so that a biologically inspired visual odometry can be realized. The experimental results show that good dead-reckoning can be achieved with our algorithm for different mobile robots with very different motion velocities. We also implement a full visual SLAM system by simply combining the proposed visual odometry with a straightforward loop closure detection derived from the well-known RatSLAM, and it achieves results comparable to RatSLAM.
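As an illustration of how a continuous attractor network can integrate velocity, the following is a minimal, hypothetical sketch (not the paper's implementation): a single activity bump on a periodic 2D neural sheet is shifted by the visual velocity estimate at each step, and the bump location is read out as the integrated position. The grid size, gain, and FFT-based shift are illustrative choices.

```python
import numpy as np

# Minimal sketch of path integration on a periodic 2D neural sheet.
# An activity bump is shifted according to the robot's velocity; the bump
# position (modulo the sheet size) serves as the integrated position.
N = 64                       # neurons per side of the sheet (assumed)
sheet = np.zeros((N, N))
sheet[N // 2, N // 2] = 1.0  # initialize the bump at the center

def gaussian_kernel(sigma=2.0):
    ax = np.arange(N) - N // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

KERNEL = np.fft.fft2(np.fft.ifftshift(gaussian_kernel()))  # smoothing kernel

def integrate_velocity(sheet, vx, vy, gain=1.0):
    """Shift the activity bump by the scaled velocity, then re-smooth."""
    # Sub-cell shift via the Fourier shift theorem (periodic boundaries).
    f = np.fft.fftfreq(N)
    phase = np.exp(-2j * np.pi * (np.outer(f * vy * gain, np.ones(N)) +
                                  np.outer(np.ones(N), f * vx * gain)))
    moved = np.real(np.fft.ifft2(np.fft.fft2(sheet) * phase * KERNEL))
    moved = np.clip(moved, 0, None)
    return moved / moved.sum()       # simple global normalization

# Dead-reckoning loop: feed visual velocity estimates and read the bump peak.
for vx, vy in [(0.5, 0.0)] * 40:     # hypothetical motion estimates
    sheet = integrate_velocity(sheet, vx, vy)
peak = np.unravel_index(sheet.argmax(), sheet.shape)
print("bump at", peak)               # integrated position modulo the sheet
```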
Real-time Terrain Classification for Rescue Robot Based on Extreme Learning Machine
Yuhua Zhong, Junhao Xiao, Huimin Lu and Hui Zhang
Fully autonomous robots in urban search and rescue (USAR) have to deal with complex terrains. Real-time recognition of the terrain ahead can effectively improve the traversal ability of rescue robots. This paper presents a real-time terrain classification system using a 3D LiDAR on a custom-designed rescue robot. Firstly, LiDAR state estimation and point cloud registration run in parallel to extract the test lane region. Secondly, the normal aligned radial feature (NARF) is extracted and downscaled by a distance-based weighting method. Finally, an extreme learning machine (ELM) classifier is designed to recognize the terrain type. Experimental results demonstrate the effectiveness of the proposed system.
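For readers unfamiliar with ELM, below is a minimal sketch of a generic ELM classifier (not the paper's exact configuration): the hidden-layer weights are random and fixed, and only the output weights are solved in closed form, which is what makes training fast. The feature dimension and class count are placeholders standing in for the weighted NARF descriptors.

```python
import numpy as np

rng = np.random.default_rng(0)

class ELM:
    """Generic extreme learning machine classifier (illustrative sketch)."""
    def __init__(self, n_features, n_hidden, n_classes):
        self.W = rng.normal(size=(n_features, n_hidden))  # fixed random weights
        self.b = rng.normal(size=n_hidden)                # fixed random biases
        self.n_classes = n_classes
        self.beta = None

    def _hidden(self, X):
        return np.tanh(X @ self.W + self.b)               # hidden activations

    def fit(self, X, y):
        H = self._hidden(X)
        T = np.eye(self.n_classes)[y]                     # one-hot targets
        self.beta = np.linalg.pinv(H) @ T                 # closed-form solve
        return self

    def predict(self, X):
        return (self._hidden(X) @ self.beta).argmax(axis=1)

# Toy usage: random "features" standing in for the weighted NARF descriptors.
X_train, y_train = rng.normal(size=(200, 36)), rng.integers(0, 4, 200)
clf = ELM(n_features=36, n_hidden=100, n_classes=4).fit(X_train, y_train)
print(clf.predict(rng.normal(size=(5, 36))))
```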
Video
If the link below does not work, the video can also be found here.
This video is the accompanying video for the paper: Yuxi Huang, Ming Lv, Dan Xiong, Shaowu Yang, Huimin Lu, An Object Following Method Based on Computational Geometry and PTAM for UAV in Unknown Environments. Proceedings of the 2016 IEEE International Conference on Information and Automation, 2016.
Abstract: This paper introduces an object following method based on computational geometry and PTAM for Unmanned Aerial Vehicles (UAVs) in unknown environments. Since the object can easily move out of the camera's field of view (FOV), and it is difficult to bring it back into view by relative attitude control alone, we propose a novel solution to re-find the object based on the visual simultaneous localization and mapping (SLAM) results of PTAM. The object is a pad bearing the letter H surrounded by a circle. Using computational geometry, we obtain the 3D position of the circle's center in the camera coordinate system. When the object moves out of the camera's FOV, a Kalman filter is used to predict the object's velocity, so that the pad can be searched for effectively. We demonstrate through experiments that the ambiguity of the pad's localization has little impact on object following. The experimental results also validate the effectiveness and efficiency of the proposed method.
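To illustrate the prediction step, here is a minimal constant-velocity Kalman filter sketch (the matrices and noise levels are assumptions, not the paper's values): while the pad is visible, its measured position updates the state; once it leaves the FOV, the filter keeps predicting, and the velocity estimate gives a search direction.

```python
import numpy as np

dt = 0.1                                   # hypothetical frame interval
F = np.block([[np.eye(2), dt * np.eye(2)], # state: [x, y, vx, vy]
              [np.zeros((2, 2)), np.eye(2)]])
H = np.hstack([np.eye(2), np.zeros((2, 2))])   # only position is measured
Q = 1e-3 * np.eye(4)                       # process noise (assumed)
R = 1e-2 * np.eye(2)                       # measurement noise (assumed)

x = np.zeros(4)                            # state estimate
P = np.eye(4)                              # state covariance

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z):
    global x, P
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P

# While the pad is visible: predict + update with each measurement.
for z in [np.array([0.0, 0.0]), np.array([0.05, 0.01]), np.array([0.1, 0.02])]:
    predict(); update(z)

# Pad lost: keep predicting and use the velocity estimate to steer the search.
predict()
print("predicted position:", x[:2], "search direction:", x[2:])
```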
This video is the accompanying video for the following paper: Weijia Yao, Zhiwen Zeng, Xiangke Wang, Huimin Lu, Zhiqiang Zheng. Distributed Encirclement Control with Arbitrary Spacing for Multiple Anonymous Mobile Robots. Proceedings of the 36th Chinese Control Conference, 2017.
Abstract: Encirclement control enables a multi-robot system to rotate around a target while preserving a circular formation, which is useful in real-world applications such as entrapping a hostile target. In this paper, a distributed control law is proposed that enables any number of anonymous and oblivious robots starting from random three-dimensional positions to form a specified circular formation with arbitrary desired inter-robot angular distances (i.e., spacing) and to encircle the target. Arbitrary spacing is useful for a system composed of heterogeneous robots which, for example, possess different kinematic capabilities, since the spacing can be designed manually for any specific purpose. The robots are modelled as single integrators, and each robot can only sense the angular positions of its two neighboring robots, so the control law is distributed. Theoretical analysis and simulation results are provided to prove the stability and effectiveness of the proposed control strategy.
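The paper's control law is not reproduced here, but the planar toy simulation below conveys the idea under simplifying assumptions: each single-integrator robot senses only the angular gap to its forward neighbor around the target and sets its angular speed to omega_i = omega0 + k * (gap_i - d_i), which drives the gaps toward the desired spacing d_i (with the d_i summing to 2*pi) while the formation keeps rotating; a radial term pulls each robot onto the circle. All gains, the radius, and the spacing values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
d = np.array([0.4, 0.1, 0.3, 0.2]) * 2 * np.pi  # desired gaps, sum to 2*pi
theta = np.sort(rng.uniform(0, 2 * np.pi, n))   # angular positions
r = rng.uniform(0.5, 2.0, n)                    # radial distances to target
R, omega0, k, k_r, dt = 1.0, 0.3, 0.8, 1.0, 0.01  # assumed radius/rate/gains

for _ in range(5000):
    # Angular gap to the forward neighbor (the only sensed quantity).
    gap = (np.roll(theta, -1) - theta) % (2 * np.pi)
    # Spacing control on top of the nominal rotation rate omega0.
    theta = (theta + (omega0 + k * (gap - d)) * dt) % (2 * np.pi)
    r += k_r * (R - r) * dt                     # converge to the circle

print("final gaps / 2*pi:", np.round(gap / (2 * np.pi), 3))  # ~ d / (2*pi)
print("final radii:", np.round(r, 3))                        # ~ R
```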
This video is about the experimental results of the following paper: Xieyuanli Chen, Huimin Lu, Junhao Xiao, Hui Zhang, Pan Wang. Robust relocalization based on active loop closure for real-time monocular SLAM. Proceedings of the 11th International Conference on Computer Vision Systems (ICVS), 2017.
Abstract: Remarkable performance has been achieved by state-of-the-art monocular Simultaneous Localization and Mapping (SLAM) algorithms. However, tracking failure is still a challenging problem during the monocular SLAM process, and it seems all but inevitable when carrying out long-term SLAM in large-scale environments. In this paper, we propose an active loop closure based relocalization system, which enables monocular SLAM to detect and recover from tracking failures automatically, even in previously unvisited areas where no keyframe exists. We test our system in extensive experiments, including on the popular KITTI dataset and on our own dataset acquired by a hand-held camera in outdoor large-scale and indoor small-scale real-world environments, where artificial shakes and interruptions were added. The experimental results show that the shortest recovery time (within 5 ms) and the longest success distance (up to 46 m) were achieved compared to other relocalization systems. Furthermore, our system is more robust than others, as it can handle different kinds of situations, i.e., tracking failures caused by blur, sudden motion and occlusion. Besides robots and autonomous vehicles, our system can also be employed in other applications, such as mobile phones and drones.
This video is about the experimental results of the following paper: Xieyuanli Chen, Hui Zhang, Huimin Lu, Junhao Xiao, Qihang Qiu and Yi Li. Robust SLAM system based on monocular vision and LiDAR for robotic urban search and rescue. Proceedings of the 15th IEEE International Symposium on Safety, Security, and Rescue Robotics 2017 (SSRR 2017), Shanghai, 2017.
Abstract: In this paper, we propose a monocular SLAM system for robotic urban search and rescue (USAR), based on which most USAR tasks (e.g. localization, mapping, exploration and object recognition) can be fulfilled by rescue robots with only a single camera. The proposed system can be a promising basis for implementing fully autonomous rescue robots. However, the feature-based map built by the monocular SLAM is difficult for the operator to understand and use. We therefore combine the monocular SLAM with a 2D LiDAR SLAM to realize a 2D mapping and 6D localization SLAM system, which can not only obtain the real scale of the environment and make the map friendlier to users, but also solve the problem that the robot pose cannot be tracked by the 2D LiDAR SLAM when the robot climbs stairs and ramps. We test our system using a real rescue robot in simulated disaster environments. The experimental results show that good performance can be achieved using the proposed system in USAR. The system has also been successfully applied in the RoboCup Rescue Robot League (RRL) competitions, where our rescue robot team entered the top 5 and won the Best in Class Small Robot Mobility award at the 2016 RoboCup RRL in Leipzig, Germany, as well as the championships of the 2016 and 2017 RoboCup China Open RRL.
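The paper's exact fusion is not reproduced here, but one way such a combination can work is sketched below under stated assumptions: the metric 2D LiDAR trajectory resolves the unknown scale of the monocular trajectory, after which the scaled 6D monocular pose can keep localizing when the robot leaves the ground plane and the 2D LiDAR SLAM alone would fail. The trajectories, noise level, and least-squares scale estimation are hypothetical.

```python
import numpy as np

def estimate_scale(mono_xy, lidar_xy):
    """Least-squares scale s minimizing ||s * d_mono - d_lidar||^2
    over synchronized path increments."""
    dm = np.diff(mono_xy, axis=0).ravel()
    dl = np.diff(lidar_xy, axis=0).ravel()
    return float(dm @ dl / (dm @ dm))

# Hypothetical synchronized trajectories on flat ground (true scale = 2.5).
t = np.linspace(0, 1, 50)
lidar_xy = np.stack([t, 0.2 * t], axis=1)              # metric LiDAR path
mono_xy = lidar_xy / 2.5 + 1e-4 * np.random.default_rng(0).normal(size=(50, 2))

s = estimate_scale(mono_xy, lidar_xy)
print("estimated scale:", round(s, 3))

# On stairs/ramps: fall back to the scaled monocular 6D pose for localization.
mono_pose_6d = np.array([0.4, 0.1, 0.12, 0.0, 0.1, 0.0])  # [x,y,z,roll,pitch,yaw]
metric_position = s * mono_pose_6d[:3]     # scale positions, not angles
print("metric position:", np.round(metric_position, 3))
```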
Real-time Object Segmentation for Soccer Robots Based on Depth Images
Qiu Cheng, Shuijun Yu, Qinghua Yu and Junhao Xiao
Object detection and localization is a critically important and challenging task in the RoboCup Middle Size League (MSL). It is subject to a strong real-time constraint, as both the robot and the obstacles (also robots) move quickly. In this paper, a real-time object segmentation approach is proposed based on an RGB-D camera, of which only the range information is used. The method has four main steps, i.e., point cloud filtering, background point removal, clustering, and object localization. Experimental results show that the proposed algorithm can effectively detect and segment objects in 3D space in real time.
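As a rough illustration of the four steps (not the paper's implementation), the sketch below runs the pipeline on a toy point cloud: DBSCAN stands in for the clustering step, the floor is removed by a simple height threshold, and all limits and parameters are assumed values.

```python
import numpy as np
from sklearn.cluster import DBSCAN  # stand-in for the paper's clustering step

def segment_objects(points):
    """Four-step segmentation of an Nx3 point cloud (illustrative sketch)."""
    # 1. Point cloud filtering: drop invalid and far points (pass-through).
    pts = points[np.isfinite(points).all(axis=1)]
    pts = pts[(pts[:, 2] > 0.1) & (pts[:, 2] < 8.0)]

    # 2. Background removal: the dominant background on the field is the
    #    floor, approximated here by a height threshold (y axis points up).
    pts = pts[pts[:, 1] > 0.05]

    # 3. Clustering: group the remaining points into object candidates.
    labels = DBSCAN(eps=0.15, min_samples=20).fit_predict(pts)

    # 4. Object localization: centroid of each cluster (label -1 is noise).
    return [pts[labels == k].mean(axis=0) for k in set(labels) if k != -1]

# Toy cloud: a floor plane plus two clusters standing in for obstacle robots.
rng = np.random.default_rng(0)
floor = np.column_stack([rng.uniform(-3, 3, 2000), np.zeros(2000),
                         rng.uniform(0.5, 7, 2000)])
obj = lambda c: rng.normal(c, 0.05, size=(200, 3))
cloud = np.vstack([floor, obj([1.0, 0.4, 2.0]), obj([-0.5, 0.3, 3.5])])
print(np.round(segment_objects(cloud), 2))   # ~ the two object centroids
```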
If the link below does not work, the video can also be found here.