How to Select the Best 3D Lidar for SLAM
What caught your eye first when you looked at the image? If you couldn't look away from the dark cylindrical item at the bottom left, you will certainly enjoy this article! Yes, that is a lidar, and in this article we'll lay down the basics of this sensor, which has been gaining popularity with the growth of autonomous car and robot companies.
Xiang Gao and Tao Zhang’s definition of SLAM is our gold standard:
“Simultaneous Localization and Mapping usually refer to a robot or a moving rigid body, equipped with a specific sensor, that estimates its motion and builds a model of the surrounding environment, without a priori information.”
Based on the sensor used, we can have different kinds of SLAM. When a 3D lidar (short for light detection and ranging) is used as the sensor, it’s known as 3D Lidar SLAM. In a nutshell, 3D Lidar SLAM compares the current point cloud provided by the lidar against the point cloud built from past lidar frames to estimate where the lidar is and how it’s moving.
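The core of that comparison is estimating the rigid transform that aligns one frame with another. Below is a minimal sketch of the idea using the Kabsch algorithm with known point correspondences; real SLAM pipelines use ICP or NDT variants, which must also discover the correspondences themselves. All names and values here are illustrative.

```python
import numpy as np

def align_frames(prev_pts, curr_pts):
    """Estimate the rigid transform (R, t) mapping curr_pts onto prev_pts,
    assuming point correspondences are known (Kabsch algorithm)."""
    mu_p, mu_c = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
    H = (curr_pts - mu_c).T @ (prev_pts - mu_p)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_p - R @ mu_c
    return R, t

# Simulate a lidar that rotated 10 degrees and moved 0.5 m between frames.
rng = np.random.default_rng(0)
prev_pts = rng.uniform(-10, 10, size=(100, 3))         # previous point cloud
theta = np.radians(10)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, 0.0, 0.0])
# What the sensor sees after moving: the inverse motion applied to each point.
curr_pts = (prev_pts - t_true) @ R_true                # row-vector form of R.T @ (p - t)

R_est, t_est = align_frames(prev_pts, curr_pts)        # recovers R_true, t_true
```

With noise-free correspondences the recovered transform matches the simulated motion exactly; with real sensor noise it is only as good as the alignment quality, which is where the next sections (FoV, range, resolution) come in.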
However, one of the most asked questions by people looking to start on 3D Lidar SLAM is: How should we decide on which lidar to choose for our use case?
Having built expertise over the years, we’ll uncover in this article all our knowledge and experiences with Lidar SLAM and on selecting the best-suited lidar for your use-cases. By the end of this article, you will understand the importance of each lidar feature from the SLAM point of view.
Let’s dive in.
Beam emitting/steering mechanism and depth error: Are these important?
Maybe not. Hear us out for the details.
The beam emitting/steering mechanism refers to how a lidar emits and steers its beams. Based on this, lidars can be broadly classified into two types.
1) Spinning lidar: It mechanically rotates a vertical array of laser emitter/receiver pairs, sweeping the beams around to capture a full 360° horizontal view of the scene.
2) Solid-State lidar: It has no large moving parts; a common design uses a laser to illuminate the scene in front of it and a time-of-flight sensor array to capture the 3D data that is returned.
Depth error is the unavoidable error introduced when measuring depth with a 3D lidar. As a rule of thumb, for most 3D lidars, this value ranges between ±1 cm and ±5 cm. An error within that band rarely makes a noticeable difference for localization, provided the FoV is sufficient.
To calculate the sensor position, the SLAM system uses many points to align the current frame with the previous frame(s), so the error of each individual point largely cancels out. However, these errors become crucial when creating a sharp point cloud for applications where sharpness matters, such as surveying, where even a 1 cm error counts.
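To see why per-point depth noise averages out, here is a toy numeric sketch (all values are illustrative): each measurement carries ±3 cm of noise, yet a motion estimate drawn from many points is far more accurate than any single point.

```python
import numpy as np

rng = np.random.default_rng(42)
true_shift = 0.20          # sensor moved 20 cm toward a flat wall
sigma = 0.03               # 3 cm per-point depth noise (1-sigma)

errors = []
for n_points in (10, 1000, 100000):
    # Noisy depth readings to the wall before and after the motion.
    d1 = 5.0 + rng.normal(0, sigma, n_points)
    d2 = (5.0 - true_shift) + rng.normal(0, sigma, n_points)
    est = (d1 - d2).mean()                 # shift estimated from all points
    errors.append(abs(est - true_shift))
    print(f"{n_points:6d} points -> error {errors[-1] * 100:.3f} cm")
```

The estimation error shrinks roughly with the square root of the number of points, which is why centimeter-level per-point noise rarely hurts localization, while it still shows up as fuzz in the raw map.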
Generally, for most robotics applications, the FoV and the range (which we will cover next) are far more important than a few centimeters of depth error.
Field of view and the range: Crucial details you need to look for
Field of View (FoV) is the extent of the observable world seen at any given moment. The broader the lidar’s field of view, the more robust and accurate SLAM performance you can expect, up to a point.
For different use cases, it’s vital to consider the horizontal and vertical FoV that is required. Wider vertical FoV is crucial for indoor use-cases because it’s more likely that many objects with varying heights are present near the sensor. Without sufficient vertical FoV, the objects with differing heights might not be appropriately captured, leading to a poor SLAM performance.
Solid-state lidars (SSLs) overcome some limitations of spinning 3D lidars, most noticeably their high cost, but come with a limited FoV (typically around 180°). To reach a 360° FoV, we can combine 2 or 3 SSLs, although the extrinsic calibration among the combined lidars then needs to be maintained.
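Combining multiple SSLs boils down to applying each unit's extrinsic transform to bring its points into a common body frame. Here is a minimal sketch, assuming a hypothetical rig of two 180° SSLs facing forward and backward, mounted 10 cm apart along the body x-axis (all mounting values are made up for illustration):

```python
import numpy as np

def to_homogeneous_tf(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

def merge_clouds(clouds, extrinsics):
    """Transform each lidar's cloud (Nx3) into the common body frame and stack."""
    merged = []
    for pts, T in zip(clouds, extrinsics):
        pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # Nx4 homogeneous
        merged.append((pts_h @ T.T)[:, :3])
    return np.vstack(merged)

# Hypothetical extrinsics: front SSL at +5 cm, rear SSL at -5 cm rotated 180 deg.
Rz180 = np.array([[-1.0, 0.0, 0.0], [0.0, -1.0, 0.0], [0.0, 0.0, 1.0]])
T_front = to_homogeneous_tf(np.eye(3), np.array([0.05, 0.0, 0.0]))
T_rear  = to_homogeneous_tf(Rz180,     np.array([-0.05, 0.0, 0.0]))

front_cloud = np.array([[2.0, 0.0, 0.0]])   # a point 2 m ahead of the front SSL
rear_cloud  = np.array([[3.0, 0.0, 0.0]])   # a point 3 m ahead of the rear SSL
merged = merge_clouds([front_cloud, rear_cloud], [T_front, T_rear])
```

If the extrinsics drift (e.g., a mount loosens), the two half-clouds stop lining up at their seam, which is why the calibration must be maintained.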
The range of the sensor spans the minimum and maximum distances it can measure. In most cases, the minimum value is not critical for localization, and only the maximum value matters.
In general, the longer the range, the better for SLAM performance. However, for indoor use cases there is not much difference between a lidar that offers a 100 m range and one that offers 200 m. When there are enough objects to detect within a range (say 100 m), the performance uplift from a longer range (say 200 m) is marginal.
So, based on your budget, we recommend choosing a lidar with a wider FoV over one with a longer range if your operating domain is indoors with sufficient objects to detect.
For outdoor use cases, a 50 m range is generally sufficient, except in wide-open areas and very large warehouses, where longer ranges are required.
There is also important fine print to look at when evaluating a lidar’s range: at what reflectivity and probability of detection that range is measured.
Reflectivity indicates the likelihood of beams bouncing back from an object. Many road signs are coated with highly reflective materials, while the road surface is typically black with low reflectivity. Naturally, of two objects at the same distance from a lidar, the more reflective one is easier to detect. Probability of Detection (PoD) is the ratio of detected targets to all detectable targets. The nearer an object is to the lidar, the higher its PoD, even at the same reflectivity. To sum up, a quoted lidar range can be much longer if it was measured with a highly reflective object at a lower PoD.
While some lidars specify the range at 10% reflectivity and 90% PoD, other devices specify it at 80% reflectivity and 50% PoD. In such scenarios, it is difficult to compare the achievable ranges directly. The figure above demonstrates the varying ranges for different values of reflectivity and probability of detection.
Resolution and frame rate: Which specifications are best for your use case?
Resolution, or the number of channels, indicates how many points the lidar can generate in one frame or per second.
For spinning lidars, the number of channels indicates the resolution: a 64-channel lidar has a higher resolution than a 32-channel one. SLAM performance deteriorates significantly when the resolution is insufficient.
However, after a certain level of sufficient resolution, the SLAM performance doesn’t show any significant improvement.
For instance, if we compare a lidar with a 360° horizontal FoV and a wide vertical FoV against a lidar with a significantly higher resolution but a limited FoV, the former is more likely to perform better. High resolution has little impact when the FoV is limited.
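A back-of-the-envelope way to compare resolutions is the approximate point rate of a spinning lidar: one point per channel for each horizontal angular step, every rotation. The helper below and its numbers are illustrative, not tied to any specific model:

```python
def points_per_second(channels, horizontal_resolution_deg, fps, h_fov_deg=360.0):
    """Approximate point rate of a spinning lidar: each rotation yields one
    point per channel at every horizontal angular step."""
    steps_per_rev = h_fov_deg / horizontal_resolution_deg
    return int(channels * steps_per_rev * fps)

# Hypothetical spinning lidars at 10 Hz with 0.2-degree horizontal resolution:
rate_32 = points_per_second(32, 0.2, 10)   # 32-channel -> 576,000 pts/s
rate_64 = points_per_second(64, 0.2, 10)   # 64-channel -> 1,152,000 pts/s
```

Doubling the channel count doubles the point rate, but as noted above, past a sufficient density the extra points stop translating into better SLAM accuracy.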
Frame rate is the frequency at which consecutive frames are captured every second. This specification behaves similarly to that of a camera for Visual SLAM.
The faster the movement you want to track, the higher the frame rate required. A higher frame rate gives the sensor more overlap between consecutive frames, improving SLAM performance.
As a rule of thumb, robotic applications require a frame rate of at least 10 Hz, while vehicle-based and hand-held applications require a higher frame rate due to the higher speeds and more dynamic movements.
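One way to reason about the required frame rate is the distance the platform travels between consecutive frames, which directly governs frame-to-frame overlap. A quick sketch with illustrative speeds:

```python
def displacement_per_frame(speed_mps, frame_rate_hz):
    """Distance the platform travels between two consecutive lidar frames."""
    return speed_mps / frame_rate_hz

# A 1 m/s indoor robot vs. a 15 m/s (~54 km/h) vehicle, both at 10 Hz:
robot   = displacement_per_frame(1.0, 10)    # 0.10 m between frames
vehicle = displacement_per_frame(15.0, 10)   # 1.50 m between frames

# Doubling the frame rate halves the displacement per frame:
vehicle_20hz = displacement_per_frame(15.0, 20)   # 0.75 m between frames
```

This is why 10 Hz is usually fine for slow indoor robots, while fast vehicles and hand-held scanners with jerky motion benefit from higher rates.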
3D lidars have been widely adopted in robots, drones, digital twins, autonomous vehicles, and more. We’ve seen a rapid rise in use cases over the past 10+ years thanks to performance improvements, greater stability and reliability, and more affordable prices. This, in turn, is pushing 3D lidar SLAM adoption in these areas.
We hope this article serves as a good starting point for your 3D Lidar SLAM journey. Should you need advice for your use case and requirements, we’ll be more than happy to help!
For a more comprehensive guide on how 3D lidar SLAM works, check out our ‘3D Lidar SLAM Basics’ article here.
1. Xiang Gao and Tao Zhang, “Introduction to Visual SLAM”. [PDF]
2. A. Davison, I. Reid, N. Molton, and O. Stasse, “MonoSLAM: Real-Time Single Camera SLAM”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007. [PDF]
3. F. Rosique, P. J. Navarro, C. Fernández, and A. Padilla, “A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research”, Sensors, vol. 19, no. 3, p. 648, 2019. [PDF]
4. W. Wei, B. Shirinzadeh, R. Nowell, M. Ghafarian, M. Ammar, and T. Shen, “Enhancing Solid-State Lidar Mapping with a 2D Spinning Lidar in Urban Scenario SLAM on Ground Vehicles”, Sensors, vol. 21, no. 5, p. 1773, 2021. [PDF]