Updated: Jul 9, 2019
This article outlines camera calibration for SLAM (Simultaneous Localisation and Mapping) technology, as exemplified by Kudan's KudanSLAM.
As we have covered before, SLAM systems use one or more cameras embedded on a device to simultaneously localise the device’s position and orientation whilst also mapping the environment. Before we deploy a SLAM system, it is crucial that we calibrate the camera to account for its internal properties and other external factors that can affect the images generated. In order to understand our camera calibration process at Kudan, we must first go over the properties of a camera.
Properties of a camera
Two different cameras at the same position and orientation can generate two different images. This is because the images generated depend on various properties of the camera, which we refer to as 'intrinsic' parameters. These include properties inherent to the camera, such as the size of its aperture, the focal length of its lens, the distortion caused by its lens, and so on. For single-camera setups, we must therefore know and account for the camera's intrinsic parameters in order for our SLAM algorithms to work.
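To make this concrete, here is a minimal sketch of how intrinsic parameters determine the image under the standard pinhole camera model. The focal lengths and principal point below are illustrative values, not those of any particular camera:

```python
import numpy as np

# Illustrative intrinsic parameters (not from a real camera):
fx, fy = 800.0, 800.0   # focal lengths, in pixels
cx, cy = 320.0, 240.0   # principal point (roughly the image centre)

# The intrinsic (camera) matrix collects these parameters:
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def project(point_3d):
    """Project a 3D point, given in camera coordinates, to pixel coordinates."""
    uvw = K @ point_3d        # homogeneous image coordinates
    return uvw[:2] / uvw[2]   # perspective divide by depth

# A point 2 m in front of the camera, slightly off its optical axis:
pixel = project(np.array([0.1, -0.05, 2.0]))
print(pixel)
```

Two cameras with different values of `K` would map the same 3D point to different pixels, which is exactly why a SLAM system must know the intrinsics of the camera it is running on.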
For setups that include two cameras, we not only need to know each camera's intrinsic properties but also their 'extrinsic' parameters. Extrinsic parameters describe the relative position and orientation of the two cameras (how far apart they are and the angle at which each faces relative to the other).
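Extrinsic parameters are commonly represented as a rotation and a translation between the two camera frames. The sketch below uses illustrative values (a 10 cm baseline and a 5 degree yaw) to show how a point seen by camera 1 is expressed in camera 2's coordinate frame:

```python
import numpy as np

# Illustrative extrinsics for a two-camera rig (assumed values):
theta = np.deg2rad(5.0)                 # camera 2 yawed 5 degrees
R = np.array([[ np.cos(theta), 0.0, np.sin(theta)],
              [ 0.0,           1.0, 0.0          ],
              [-np.sin(theta), 0.0, np.cos(theta)]])  # rotation about y
t = np.array([0.10, 0.0, 0.0])          # 10 cm baseline along x

def cam1_to_cam2(point_cam1):
    """Express a point given in camera 1's frame in camera 2's frame."""
    return R @ (point_cam1 - t)

p2 = cam1_to_cam2(np.array([0.10, 0.0, 1.0]))
print(p2)
```

Stereo SLAM needs `R` and `t` to be accurate, because the depth of a point is recovered from where it appears in both images at once.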
The calibration process
Not all of these parameters can be measured by physically inspecting the camera, especially intrinsic parameters such as lens distortion. For this reason, software libraries are often used to calculate them by analysing the camera's video feed. Many software libraries can calibrate cameras; some are listed at the end of the article.
We begin the calibration process by showing the camera a known object, such as a chessboard whose physical properties we know (its dimensions and the number of black and white squares). These properties are also stored in the software. We then move the object in front of the camera. Because the exact physical properties of the object are stored in the software, it can automatically calculate the intrinsic and extrinsic parameters of the cameras by observing how the object's appearance in the image distorts as it is moved.
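The core idea can be sketched with synthetic data: because the object's geometry is known, each observed corner gives an equation relating the unknown intrinsics to the image, and the software solves these equations. The toy example below recovers focal lengths and the principal point by least squares from distortion-free correspondences; real calibration tools (such as OpenCV's `calibrateCamera`) additionally estimate the object's pose in each frame and the lens distortion coefficients. All values here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
true_fx, true_fy, true_cx, true_cy = 750.0, 760.0, 310.0, 245.0

# Known 3D corner positions (akin to chessboard corners), in camera coordinates:
pts = rng.uniform([-0.2, -0.2, 1.0], [0.2, 0.2, 2.0], size=(40, 3))
x = pts[:, 0] / pts[:, 2]   # normalised image coordinates
y = pts[:, 1] / pts[:, 2]

# "Observed" pixel coordinates generated by the true (unknown) intrinsics:
u = true_fx * x + true_cx
v = true_fy * y + true_cy

# u = fx*x + cx and v = fy*y + cy are linear in the intrinsics,
# so least squares recovers them from the correspondences:
fx, cx = np.linalg.lstsq(np.column_stack([x, np.ones_like(x)]), u, rcond=None)[0]
fy, cy = np.linalg.lstsq(np.column_stack([y, np.ones_like(y)]), v, rcond=None)[0]
print(fx, fy, cx, cy)
```

In practice the observations are noisy and distorted, so calibration libraries solve a nonlinear optimisation over many frames rather than a single linear fit, but the principle is the same: known geometry plus observed projections pin down the camera's parameters.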
[Figure: example frames from Camera 1 and Camera 2]
As we can see, images can be severely distorted by the camera, and straight lines can appear curved. It is therefore very important to calibrate cameras before deployment. By having a robust calibration process, we can ensure that our algorithms are always performing at their best across different camera setups.
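The curving of straight lines can be reproduced with a simple one-parameter radial distortion model (the leading term of the widely used Brown-Conrady model). The coefficient below is illustrative:

```python
import numpy as np

k1 = -0.3  # illustrative radial distortion coefficient (barrel distortion)

def distort(x, y):
    """Apply one-term radial distortion to normalised image coordinates."""
    r2 = x**2 + y**2
    factor = 1.0 + k1 * r2
    return x * factor, y * factor

# Points on a straight horizontal line at y = 0.4 in normalised coordinates:
xs = np.linspace(-0.5, 0.5, 5)
ys = np.full_like(xs, 0.4)
xd, yd = distort(xs, ys)

# After distortion the y-coordinates are no longer constant: the straight
# line bows in the image, exactly the effect calibration lets us undo.
print(yd)
```

Once the distortion coefficients are known from calibration, the mapping can be inverted to straighten the image before the SLAM algorithms consume it.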