SLAM (simultaneous localization and mapping) is the computational problem of providing a moving agent with real-time self-localization and a 3D map of its environment. SLAM also plays an important role in navigation, collision avoidance, and related tasks.
Using cameras, visual SLAM systems take advantage of the rich information about the world available from images to recognize the environment. Visual SLAM is becoming popular and is used in daily life, for example in Augmented Reality (AR). Based on the reconstructed map, visual SLAM methods can be categorized as either sparse or dense. From the perspective of how image information is used, they can also be classified as either direct or indirect. Direct SLAM uses pixel intensities directly for pose tracking, and its reconstructed maps are usually quasi-dense. By contrast, indirect SLAM systems extract geometric features (e.g., SIFT, ORB) and then use these features to localize the camera and build a sparse map, which usually achieves higher accuracy but lacks resilience in texture-less environments.
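The direct/indirect distinction above comes down to which error each method minimizes: direct methods minimize a photometric error over raw pixel intensities, while indirect methods minimize a reprojection error over matched features. The following is a minimal toy sketch of these two error terms (not a SLAM pipeline); all data is synthetic and the function names are our own, chosen for illustration.

```python
# Toy illustration of the two error types that distinguish
# direct from indirect visual SLAM. Synthetic data only.

def photometric_error(patch_ref, patch_cur):
    """Direct methods: sum of squared intensity differences
    between a reference patch and the current-frame patch."""
    return sum((a - b) ** 2 for a, b in zip(patch_ref, patch_cur))

def reprojection_error(observed, projected):
    """Indirect methods: squared pixel distance between an observed
    feature location and its projection under the current pose estimate."""
    du = observed[0] - projected[0]
    dv = observed[1] - projected[1]
    return du * du + dv * dv

# Synthetic example: a flattened 3x3 intensity patch and one
# feature correspondence (pixel coordinates).
ref = [10, 12, 11, 40, 42, 41, 90, 92, 91]
cur = [11, 12, 10, 41, 42, 40, 91, 92, 90]
print(photometric_error(ref, cur))                        # → 6
print(reprojection_error((120.0, 85.0), (121.5, 84.0)))   # → 3.25
```

In a real system, both errors are minimized over the camera pose (and map points) by nonlinear least squares; a hybrid quasi-dense system, as proposed here, can combine the two error terms.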
The project aims to implement a quasi-dense visual SLAM system by combining the strengths of direct and indirect SLAM systems, which have complementary properties.
Learn and implement SLAM algorithms.
Deploy the algorithms to a robotics platform.
Students will learn computer vision and SLAM algorithms and packages, e.g., multiview geometry, OpenCV, and open-source visual SLAM systems such as ORB-SLAM.