Calibration-related Concepts
- Robot communication mode
-
It refers to how the robot communicates with the vision side. The Mech-Mind Vision System provides three communication modes, namely Standard Interface, Adapter, and Master-control. For details, refer to the section Communication Overview.
- Camera mounting mode
-
It refers to the way that the camera is mounted in the working unit. Commonly seen mounting modes are Eye to Hand (ETH) and Eye in Hand (EIH). In addition, to expand the camera’s field of view and improve the point cloud quality, a project may have two cameras installed for one station, which is called Eye to Eye (ETE).
- Calibration mode
-
It refers to whether the calibration board images and robot poses are collected automatically. Calibration can be divided into automatic calibration and manual calibration. Manual calibration is relatively complicated to perform, so automatic calibration is recommended whenever possible.
- Automatic calibration (recommended)
-
During calibration, the robot is connected, and Mech-Vision automatically plans the calibration path and controls the robot to move along it, collecting a calibration board image and the robot flange pose at each waypoint.
- Manual calibration
-
During calibration, the robot is not connected. You need to manually move the robot along the path you planned or to touch the calibration circles, enter the robot flange pose at each position, and trigger the software to capture the calibration board images.
- Calibration data collection method
-
It refers to how the calibration data is collected. Mech-Vision supports two calibration data collection methods, namely multiple random calibration board poses and TCP touch.
- Multiple random calibration board poses (recommended)
-
With this method, the robot moves along the waypoints of a calibration path that is either generated automatically or planned by you. At each waypoint, the software captures an image of the calibration board, detects the calibration circles, and records the robot flange pose, and then uses these data to calculate the spatial relationship between the calibration board, camera, and robot. This method is easy to perform and highly accurate. It is recommended for 6-axis or 4-axis robots.
- TCP touch
-
This method first determines the pose of the calibration board through three-point touching, and then establishes the spatial relationship among the calibration board, camera, and robot. It is suitable for situations where the robot is installed in a limited space or the calibration board cannot be installed. It is recommended for 5-axis or other robots.
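The three-point idea behind TCP touch can be sketched as follows: three non-collinear touched points (TCP positions in the robot base frame) are enough to define the calibration board's pose. This is a minimal illustration, not Mech-Vision's actual implementation; all point values and the frame convention (board X axis toward the second point) are assumptions.

```python
import numpy as np

# Three touched points in the robot base frame (illustrative values, meters)
p0 = np.array([0.50, 0.10, 0.02])   # board origin circle
p1 = np.array([0.70, 0.10, 0.02])   # point along the board's X axis
p2 = np.array([0.50, 0.30, 0.02])   # point in the board's XY plane

# Build an orthonormal frame from the three points
x_axis = (p1 - p0) / np.linalg.norm(p1 - p0)
z_axis = np.cross(x_axis, p2 - p0)
z_axis /= np.linalg.norm(z_axis)
y_axis = np.cross(z_axis, x_axis)

# 4x4 pose of the calibration board in the robot base frame
T_base_board = np.eye(4)
T_base_board[:3, 0] = x_axis
T_base_board[:3, 1] = y_axis
T_base_board[:3, 2] = z_axis
T_base_board[:3, 3] = p0
```

Because the three points must define a plane, the method fails if they are collinear, which is why the touched calibration circles must not lie on one line.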
- Camera intrinsic parameters
-
Intrinsic parameters are internal to a camera, including the focal length, the lens distortion, etc. These parameters are usually calibrated and stored in the camera before the camera leaves the factory.
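To make the role of intrinsic parameters concrete, the sketch below projects a 3D point in the camera frame to pixel coordinates with a simple pinhole model plus radial distortion. All parameter values are illustrative placeholders, not factory calibration data.

```python
# Illustrative intrinsic parameters (assumed values, not from a real camera)
fx, fy = 1400.0, 1400.0   # focal lengths in pixels
cx, cy = 960.0, 600.0     # principal point in pixels
k1, k2 = -0.1, 0.01       # radial distortion coefficients

def project(point_cam):
    """Project a 3D point (x, y, z) in the camera frame to pixel (u, v)."""
    x, y, z = point_cam
    xn, yn = x / z, y / z               # normalized image coordinates
    r2 = xn**2 + yn**2
    d = 1 + k1 * r2 + k2 * r2**2        # radial distortion factor
    u = fx * xn * d + cx
    v = fy * yn * d + cy
    return u, v

u, v = project((0.1, 0.05, 1.0))
```

Lens distortion bends the projected position away from the ideal pinhole result, which is why the distortion coefficients are part of the intrinsic parameters.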
- Camera extrinsic parameters
-
Extrinsic parameters describe the pose transformation between the robot reference frame and the camera reference frame. The calibration of extrinsic parameters is also called the hand-eye calibration, where the camera is considered as the eye, and the robot the hand. As the spatial relationship between the robot and the camera changes from application to application, the hand-eye calibration needs to be conducted on site to guarantee its accuracy.
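As a minimal sketch of what extrinsic parameters are used for: in an ETH setup, the hand-eye calibration result can be expressed as a 4x4 homogeneous transform that maps points from the camera frame into the robot base frame. The matrix below is a placeholder, not a real calibration result.

```python
import numpy as np

# Placeholder extrinsic transform (camera frame -> robot base frame):
# the rotation models a camera looking down at the workspace.
T_base_cam = np.array([
    [ 0.0, -1.0,  0.0, 0.5],
    [-1.0,  0.0,  0.0, 0.2],
    [ 0.0,  0.0, -1.0, 1.5],
    [ 0.0,  0.0,  0.0, 1.0],
])

def cam_to_base(p_cam):
    """Transform a 3D point from the camera frame to the robot base frame."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coords
    return (T_base_cam @ p)[:3]

# A point seen by the camera, expressed in the robot base frame
p_base = cam_to_base([0.1, -0.2, 1.2])
```

This is exactly why on-site hand-eye calibration matters: if `T_base_cam` does not match the real mounting geometry, every object pose the vision side sends to the robot is off by the same error.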
- Calibration point
-
It refers to the robot pose at which the camera captures an image of the calibration board during calibration. When the multiple random calibration board poses method is used to collect calibration data, the calibration points are the waypoints on the calibration path. When the TCP touch method is used, the image of the calibration board only needs to be captured once after you touch three calibration circles that are not in a line.
- Calibration circle
-
It refers to the circular feature points on the calibration board. During calibration, the software calculates the pixel coordinates of the calibration circle centers from the 2D image of the calibration board, and the coordinates of the centers in the camera frame from the depth image. It then calculates the camera extrinsic parameters based on the collected calibration circle data.
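The step from a detected circle center to a camera-frame coordinate can be sketched as a standard pinhole back-projection: the pixel position plus the depth value at that pixel give the 3D point. The intrinsic values and the depth below are illustrative assumptions, not data from an actual camera.

```python
# Illustrative intrinsic parameters (assumed, not from a real camera)
fx, fy = 1400.0, 1400.0   # focal lengths in pixels
cx, cy = 960.0, 600.0     # principal point in pixels

def deproject(u, v, depth):
    """Back-project pixel (u, v) with depth (meters) into the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return x, y, depth

# Circle center detected at pixel (1100, 650) with 1.2 m depth (assumed)
center_cam = deproject(1100.0, 650.0, 1.2)
```

Collecting many such circle centers in both the camera frame and, via the robot poses, the robot frame is what gives the solver enough correspondences to estimate the extrinsic parameters.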