One of the main tools used in autonomous mapping and navigation is a 3D Lidar, which offers several advantages. It is insensitive to lighting conditions, it can distinguish surfaces such as painted lane markings through its reflectivity channel, it provides a full 360-degree view of the environment, and it requires no "learning" to detect obstacles. The reflectivity channel can be used to detect lanes, while a 2D top-down view of the scan supports obstacle avoidance. The point cloud also readily enables mapping and localization, so the vehicle knows where it is at all times. It is easy to see why so many large-scale autonomous vehicle programs invest in expensive and bulky Lidars. However, Lidar is not accessible to everyone because of its price. A camera (even a depth camera) is far more affordable, but it comes with its own set of disadvantages. It can see color, yet programming against color is difficult under varying lighting conditions. Unless multiple cameras are used, the robot often cannot see all around itself, so it must work with less information. Together, these factors hinder camera-based autonomous mapping and navigation, and this project aims to address them.
In this section of the project, we create a map of an unknown environment so that the robot can determine where it is at any point in time within that environment.
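The core of mapping an unknown environment is accumulating sensed obstacle points into a map the robot can later localize against. The sketch below shows one minimal form of this idea, an occupancy grid updated from 2D obstacle points; the grid size, resolution, and sample points are illustrative assumptions, not values from this project.

```python
import numpy as np

def update_occupancy_grid(grid, points, resolution=0.1, origin=(0.0, 0.0)):
    """Mark grid cells containing sensed obstacle points as occupied.

    grid: 2D int array (0 = free/unknown, 1 = occupied)
    points: (N, 2) array of obstacle positions in metres
    resolution: metres per cell (assumed value, for illustration)
    origin: world position of grid cell (0, 0)
    """
    for x, y in points:
        # Convert a metric position to a grid index.
        i = int((x - origin[0]) / resolution)
        j = int((y - origin[1]) / resolution)
        # Ignore points that fall outside the mapped area.
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid

# Example: two obstacle points on a 1 m x 1 m grid at 0.1 m/cell.
grid = np.zeros((10, 10), dtype=int)
update_occupancy_grid(grid, np.array([[0.25, 0.35], [0.85, 0.15]]))
```

A real system would also ray-trace free space between the sensor and each hit, and fuse many scans over time as the robot moves, but the index math above is the common building block.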
In this section we aim to navigate autonomously. We use the images taken by the camera to find objects that must be avoided, and we use the lanes visible in the image to stay within the track boundaries at all times.
In this section, we tune our camera to address its sensitivity to lighting. Our tuning work involves both finding configurations that offset light intensity and using the best-tuned configuration to validate real-time performance.
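One common way to offset light intensity in software is gamma correction: brighten under-exposed frames (gamma below 1) or darken over-exposed ones (gamma above 1), then keep the setting whose output is closest to mid-grey. The candidate gamma values, target brightness, and test frame below are assumptions sketched for illustration, not the project's actual configuration sweep.

```python
import numpy as np

def gamma_correct(image, gamma):
    """Apply gamma correction to an 8-bit grayscale image.

    gamma < 1 brightens dark frames; gamma > 1 darkens bright ones.
    A lookup table maps each of the 256 input levels once.
    """
    table = ((np.arange(256) / 255.0) ** gamma * 255).astype(np.uint8)
    return table[image]

def pick_gamma(image, target=128, candidates=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Choose the candidate gamma whose corrected mean brightness lands
    closest to the target mid-grey value (both values are illustrative)."""
    return min(candidates,
               key=lambda g: abs(gamma_correct(image, g).mean() - target))

# Example: an under-exposed frame (mean 40) should select a gamma < 1.
dark = np.full((4, 4), 40, dtype=np.uint8)
best = pick_gamma(dark)
```

The same closest-to-target loop generalizes to sweeping hardware settings such as exposure or gain, with the chosen configuration then validated against real-time frame rates as described above.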