Supercharge your 2D LiDAR ROS robot with Kudan Visual SLAM
Do you use 2D LiDAR with ROS on your robot for navigation?
Then you have probably felt one of the painful shortcomings of 2D LiDAR SLAM: relocalization.
For those who may not know the term, relocalization is the ability of a device to determine its location and pose within a known, mapped area, even if it doesn’t know how it got there.
In real life, this happens when a robot loses tracking for a period of time while it is moving, and also when robots are placed and powered on at arbitrary locations within a warehouse or factory. Because a 2D LiDAR sees the world as a single narrow slice, it is difficult for the robot to uniquely identify the features in a scan and determine where it is.
To mitigate this, system managers either manually tow the device back to a well-known point or place markers throughout the operating area to maintain tracking in difficult spots. This causes a significant loss in productivity and raises questions about the reliability and suitability of the product.
There is a trusted, reliable, and cost-efficient way to keep your existing system while giving it superpowers for relocalization: Kudan’s visual SLAM (vSLAM).
We created a demo to bring this to life.
Let’s take a look at a small world example that illustrates the point.
We took a Turtlebot running ROS with 2D LiDAR, and added Kudan SLAM as a ROS navigation module.
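To give a sense of how a vSLAM node can sit alongside an existing ROS navigation stack, here is a minimal launch-file sketch. The `kudan_vslam` package, node name, topic remaps, and parameters below are hypothetical placeholders, not Kudan’s actual API; the included TurtleBot3 navigation launch is the standard one from the `turtlebot3_navigation` package.

```xml
<launch>
  <!-- Existing 2D LiDAR navigation stack stays untouched -->
  <include file="$(find turtlebot3_navigation)/launch/move_base.launch" />

  <!-- Hypothetical vSLAM node; names and topics are illustrative only -->
  <node pkg="kudan_vslam" type="vslam_node" name="vslam" output="screen">
    <!-- Stereo camera input -->
    <remap from="left/image_raw"  to="/camera/left/image_raw" />
    <remap from="right/image_raw" to="/camera/right/image_raw" />
    <!-- Frames used when publishing the estimated pose -->
    <param name="map_frame"  value="map" />
    <param name="base_frame" value="base_link" />
  </node>
</launch>
```

The point is that nothing in the existing LiDAR stack has to change; the vSLAM module is simply added as another node.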
Let’s look at a case without Kudan SLAM enabled.
We set up the robot and manually provide it the initial pose, as required by the AMCL (Adaptive Monte Carlo Localization) module.
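For the curious, AMCL listens for that initial pose on the `/initialpose` topic as a `geometry_msgs/PoseWithCovarianceStamped` message. Below is a sketch of the fields such a message carries, built as plain Python dictionaries so it runs without ROS installed; the covariance values chosen here are illustrative, not AMCL defaults.

```python
import math

def make_initial_pose(x, y, yaw_rad):
    """Build a dict mirroring geometry_msgs/PoseWithCovarianceStamped."""
    # A planar robot rotates only about z, so the quaternion reduces to (z, w).
    qz = math.sin(yaw_rad / 2.0)
    qw = math.cos(yaw_rad / 2.0)
    # 6x6 row-major covariance over (x, y, z, roll, pitch, yaw).
    covariance = [0.0] * 36
    covariance[0] = covariance[7] = 0.25        # x, y variance (0.5 m std dev)
    covariance[35] = (math.pi / 12.0) ** 2      # yaw variance (~15 deg std dev)
    return {
        "header": {"frame_id": "map"},          # pose is expressed in the map frame
        "pose": {
            "pose": {
                "position": {"x": x, "y": y, "z": 0.0},
                "orientation": {"x": 0.0, "y": 0.0, "z": qz, "w": qw},
            },
            "covariance": covariance,
        },
    }

# Robot placed at (1.0, 2.0), facing 90 degrees left of the map x-axis.
pose = make_initial_pose(1.0, 2.0, math.pi / 2.0)
```

In a live ROS system the same fields would be filled into the real message type and published once; the covariance tells AMCL how much to trust the operator’s guess.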
The robot works well, but we’ll force it to lose tracking. In real life, this could happen due to unexpected bumps, inclines in the path that cause the LiDAR scan line to deviate from what was mapped, or sometimes unexpected physical obstacles.
As you can see, even though its position has changed, the robot doesn’t realize this and continues on an incorrect position estimate. The consequences of such a loss in tracking can range from a mild annoyance to a source of safety incidents.
Now, let’s see our Turtlebot with Kudan SLAM enabled.
First off, you will see that we don’t need to provide any initial pose information, as the visual SLAM system supplies it. As in the previous case, we will force the device to lose tracking and require a relocalization.
As you can see, the robot quickly regains its precise position and resumes its task, avoiding unexpected obstacles along its route. The difference in experience is remarkable, especially considering we simply added a stereo camera and Kudan’s vSLAM ROS module. That’s a testament to ROS as well: we could keep everything intact and simply add on a vSLAM navigation module to make all the difference.