Kudan Visual SLAM (KdVisual) in Action: Multiple-camera SLAM
In this blog, we are excited to showcase one of KdVisual’s powerful features: “multiple-camera SLAM”.
The demo video is available here.
Demand has been growing for visual SLAM as a way to improve positioning robustness and cost competitiveness in autonomous mobility applications such as indoor/outdoor AMRs, forklifts, lawn mowers, low-speed vehicles, and automobiles. Thanks to the rich semantic information in camera images, vision-based positioning is a good complement to 2D- or 3D-lidar-based localization, and it can also work independently in some use cases.
While the benefits are clear, visual SLAM also comes with certain challenges: for example, when camera views are obstructed by operators or cardboard boxes, or when a camera faces a plain white wall devoid of visual features.
Fortunately, we have developed multiple ways to mitigate these typical challenges of visual SLAM.
- Set up the cameras in a way that minimizes the risk of obstruction, for example by tilting them slightly upward
- Improve redundancy through sensor fusion in Kudan SLAM, leveraging inputs from an IMU, wheel odometry (and, of course, 2D or 3D lidar if available)
- Leverage the power of multiple cameras mounted on the robot, which significantly widens the combined field of view. This approach is what we refer to as “multiple-camera SLAM”
In our demo video, we proudly present the multiple-camera SLAM feature of KdVisual. This feature takes inputs from multiple stereo cameras (in the video, three Intel RealSense D455 units). Throughout the video, you’ll notice that at times one of the cameras faces a plain white wall, so very few visual feature points can be extracted from its images. Relying solely on that camera, without sensor fusion, the robot’s position would be lost entirely. However, thanks to the other two cameras, which capture a wealth of visual features, the SLAM system maintains accurate tracking throughout the video.
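To illustrate the underlying idea, here is a minimal conceptual sketch, not the KdVisual API or its actual algorithm: each camera’s pose estimate is weighted by how many visual features its frame yields, so a camera staring at a blank wall contributes little or nothing while the well-textured views keep tracking alive. The threshold, pose format, and feature counts below are all hypothetical.

```python
# Conceptual sketch only (not KdVisual's implementation): fuse per-camera
# 2D pose estimates (x, y, theta), skipping cameras whose frames yield too
# few visual features to be reliable.

MIN_FEATURES = 30  # hypothetical threshold for a usable view


def fuse_poses(observations):
    """observations: list of (feature_count, (x, y, theta)) tuples, one per camera.
    Returns a feature-count-weighted average pose over the usable views."""
    usable = [(n, pose) for n, pose in observations if n >= MIN_FEATURES]
    if not usable:
        raise RuntimeError("all camera views degraded; tracking would be lost")
    total = sum(n for n, _ in usable)
    # Weighted average of each pose component (a real system would fuse
    # rotations properly, e.g. on SE(2)/SE(3); this is deliberately simplified).
    return tuple(
        sum(n * pose[i] for n, pose in usable) / total for i in range(3)
    )


# Camera 1 faces a blank wall (8 features); cameras 2 and 3 see rich texture.
obs = [(8, (0.0, 0.0, 0.0)), (220, (1.02, 0.48, 0.10)), (180, (0.98, 0.52, 0.12))]
print(fuse_poses(obs))  # pose driven by cameras 2 and 3 only
```

With only the degraded camera available, the sketch raises an error, mirroring the lost-tracking scenario described above; with the other two cameras present, a consistent pose is still recovered.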
This feature truly shines when you want to give 2D-lidar-based autonomous mobile robots better position-tracking robustness, operational flexibility, and efficiency. Because it improves robustness against scenery changes, the map needs to be refreshed with the latest environmental information less frequently, which also contributes to operational flexibility and efficiency in the commercial deployment phase. Many existing AMRs already carry multiple cameras, used primarily for object detection, so AMR OEMs don’t need to design and build new hardware configurations; they can simply add the Kudan software to their stack to enjoy this powerful visual SLAM capability.
As for the processing requirements of the multiple-camera SLAM feature, KdVisual’s software pipeline is optimized for very low resource consumption and lightweight processing. Our internal benchmark shows that KdVisual can run with 4 stereo cameras concurrently on a single Intel® Processor N100 at 20 fps (frames per second). This lets the system benefit from the robustness of multiple-camera SLAM while still adopting power-efficient, low-cost processing hardware.
Benchmark Test Setup:
- CPU: Intel® Processor N100 (4 cores, 4 threads / Intel 12th-gen E-cores)
- Camera: 4 units of Intel RealSense D455
- Resolution: 848 x 480
- Frame Rate: 20 fps
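To put these numbers in perspective, here is a quick back-of-the-envelope calculation (our own arithmetic derived from the setup above, not a separately published figure) of the aggregate load the processor sustains:

```python
# Aggregate throughput implied by the benchmark setup:
# 4 stereo cameras, each streaming 848x480 frames at 20 fps.
cameras = 4
fps = 20
width, height = 848, 480

frames_per_second = cameras * fps        # stereo pairs processed per second
budget_ms = 1000 / frames_per_second     # average time budget per pair,
                                         # assuming fully serial processing
pixels_per_second = cameras * fps * width * height * 2  # both images per pair

print(frames_per_second)   # 80 stereo pairs per second
print(budget_ms)           # 12.5 ms per pair on average
print(pixels_per_second)   # ~65 million pixels ingested per second
```

In other words, the N100 handles 80 stereo pairs, roughly 65 megapixels of input, every second; with its 4 cores working in parallel, the effective per-pair budget is looser than the serial 12.5 ms figure suggests.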
If you’re interested in trying out our multiple-camera SLAM feature and witnessing its capabilities firsthand, please don’t hesitate to reach out to us. We’re more than happy to assist you in optimizing your autonomous mobile robotics applications with our advanced KdVisual system.
■For more details, please contact us here.