SLAM Accuracy – Things to be aware of

SLAM accuracy isn't constant. It varies depending on:

  • Camera resolution: lower-resolution cameras make coarser 2D observations of points, which reduces accuracy.

  • Camera field of view: a wider field of view captures more of the environment to track, but it also spreads that environment across the same number of pixels, so each point is observed more coarsely, a similar effect to lowering the resolution (see the first sketch after this list).

  • Accuracy of camera calibration: errors in the intrinsics and extrinsics will reduce the overall accuracy of SLAM. Cameras are modelled as pinhole cameras, but real cameras only approximate that model.

  • Stereo baseline: depth is measured as pixel disparity. Large baselines produce large disparities, which give accurate depth estimates, whereas small baselines can easily produce only a few pixels of disparity, which is much harder to measure accurately (see the second sketch after this list).

  • Distance to points being tracked: estimating a pose using only observations of far-away points is less accurate, since distant points constrain the camera's translation only weakly.

  • Structure of the environment: planar scenes don’t provide as much information to constrain the camera pose as scenes with more three-dimensional structure, because the lack of depth variation produces little parallax.

  • Movement of the camera relative to the depth of the environment: SLAM systems become more confident about depth estimates as they observe far-away points from wider baselines.

  • Motion blur: the loss of high-frequency image detail during fast camera movements can reduce the accuracy of point observations.

  • Quality of map: tracking a well-explored map, where the bundle adjuster has had time to converge globally on an optimal state, is more accurate than tracking a poorly constructed map that has had little time for optimisation.
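
As a rough illustration of the resolution and field-of-view points above, the sketch below compares how many pixels cover one degree of view for a few camera configurations. The figures are hypothetical and not taken from any particular Kudan-supported camera; the point is only that fewer pixels per degree means coarser 2D point observations.

```python
def pixels_per_degree(image_width_px: float, horizontal_fov_deg: float) -> float:
    """Approximate number of horizontal pixels covering one degree of view.

    Fewer pixels per degree means each 2D point observation is coarser,
    which reduces the accuracy of the pose estimates built on it.
    """
    return image_width_px / horizontal_fov_deg

# Illustrative (hypothetical) camera configurations:
narrow_hd = pixels_per_degree(1280, 60)    # ~21.3 px/deg
wide_hd = pixels_per_degree(1280, 120)     # ~10.7 px/deg
narrow_vga = pixels_per_degree(640, 60)    # ~10.7 px/deg

print(f"1280 px @ 60 deg FOV:  {narrow_hd:.1f} px/deg")
print(f"1280 px @ 120 deg FOV: {wide_hd:.1f} px/deg")
print(f"640 px  @ 60 deg FOV:  {narrow_vga:.1f} px/deg")
# Doubling the field of view at the same resolution halves the angular
# resolution, which is the same effect as halving the pixel count at the
# same field of view.
```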
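
The relationship between baseline, disparity, distance and depth accuracy can also be sketched numerically. The Python below uses the standard rectified-stereo relation Z = f·B/d and first-order error propagation; the focal length, baselines and disparity error are illustrative assumptions, not figures from Kudan's SLAM.

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth (metres) from stereo disparity for a rectified pair: Z = f*B/d."""
    return focal_px * baseline_m / disparity_px

def depth_error(focal_px: float, baseline_m: float, depth_m: float,
                disparity_error_px: float = 0.5) -> float:
    """First-order depth uncertainty caused by a given disparity measurement error.

    From Z = f*B/d:  dZ/dd = -Z^2 / (f*B), so |sigma_Z| ~= Z^2 * sigma_d / (f*B).
    """
    return depth_m ** 2 * disparity_error_px / (focal_px * baseline_m)

# Illustrative numbers (hypothetical camera): 700 px focal length,
# comparing a 12 cm baseline against a 30 cm baseline at 5 m depth.
for baseline in (0.12, 0.30):
    disparity = 700 * baseline / 5.0              # disparity observed at 5 m
    err = depth_error(700, baseline, 5.0)
    print(f"baseline {baseline:.2f} m: disparity {disparity:.1f} px, "
          f"+/- {err:.2f} m depth error for a 0.5 px disparity error")
# The wider baseline yields a larger disparity and a proportionally smaller
# depth error; the error also grows with the square of the depth, which is
# why far-away points constrain the pose less accurately.
```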