Calibration-related Concepts

Robot communication mode

It refers to how the robot communicates with the vision system. The Mech-Mind Vision System provides three communication modes, namely, Standard Interface, Adapter and Master-Control. For details, refer to the section Communication Overview.

Camera mounting mode

It refers to the way the camera is mounted in the working unit. Common mounting modes are eye to hand (ETH), where the camera is mounted on a stand that is fixed relative to the robot base, and eye in hand (EIH), where the camera is mounted on the robot end. In addition, to expand the camera’s field of view and improve the point cloud quality, a project may have two cameras installed for one station, which is called eye to eye (ETE).
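
The mounting mode determines which camera-to-robot transform stays constant. The following is a minimal sketch (plain NumPy, not Mech-Vision code) of the frame chains involved; the identity matrices and the point are placeholders for illustration only.

    import numpy as np

    # T_a_b denotes the 4x4 homogeneous pose of frame b expressed in frame a.
    # Identity matrices are placeholders, not real calibration results.
    T_base_camera = np.eye(4)    # ETH: camera pose relative to the robot base stays constant
    T_flange_camera = np.eye(4)  # EIH: camera pose relative to the robot flange stays constant
    T_base_flange = np.eye(4)    # current flange pose reported by the robot

    p_camera = np.array([0.1, 0.0, 0.5, 1.0])  # a point seen by the camera (homogeneous)

    # ETH: one constant transform maps the camera frame to the robot base frame.
    p_base_eth = T_base_camera @ p_camera

    # EIH: the camera moves with the flange, so the current flange pose joins the chain.
    p_base_eih = T_base_flange @ T_flange_camera @ p_camera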

Calibration mode

It refers to whether the calibration board images and robot poses are collected automatically. Calibration can be performed in either automatic or manual mode. The operations of manual calibration are relatively complicated, so automatic calibration is recommended whenever possible.

Automatic calibration (recommended)

During calibration, the robot is connected, and Mech-Vision automatically plans the calibration path and controls the robot to move along it, collecting calibration board images and robot flange poses at each waypoint.

Manual calibration

During calibration, the robot is not connected. You need to control the robot manually, moving it along the path you planned or to touch the calibration circles, enter the robot flange poses, and trigger the software to capture calibration board images.

Calibration data collection method

It refers to how the calibration data is collected. Mech-Vision supports two calibration data collection methods, namely multiple random calibration board poses and TCP touch.

Multiple random calibration board poses (recommended)

With this method, the robot moves along the waypoints on a calibration path that is either automatically generated or planned by you. At each waypoint, the method captures an image of the calibration board, detects the calibration circles, and collects the robot flange pose, and then uses the collected data to calculate the spatial relationship among the calibration board, camera, and robot. It is easy to perform, offers high accuracy, and is recommended for 6-axis or 4-axis robots.
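
The data gathered at each waypoint is a pair: the flange pose read from the robot and the calibration board pose observed by the camera. The following is a minimal collection-loop sketch; move_to, get_flange_pose, and capture_and_detect_board are hypothetical callbacks standing in for the robot connection and the camera, and Mech-Vision performs these steps automatically.

    def collect_calibration_data(waypoints, move_to, get_flange_pose, capture_and_detect_board):
        """Collect one (flange pose, board-in-camera pose) pair per waypoint.

        Every argument except `waypoints` is a hypothetical callback; each pose
        is a 4x4 homogeneous matrix.
        """
        flange_poses, board_poses = [], []
        for waypoint in waypoints:
            move_to(waypoint)                               # move the robot to the waypoint
            flange_poses.append(get_flange_pose())          # flange pose in the robot base frame
            board_poses.append(capture_and_detect_board())  # board pose in the camera frame
        return flange_poses, board_poses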

TCP touch

This method first determines the pose of the calibration board through the three-point touching method, and then establishes the spatial relationship among the calibration board, camera, and robot. It suits situations where the robot is installed in a limited space and the calibration board cannot be installed. It is recommended for 5-axis or other robots.
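
Three touched points are enough to define the board's pose because they fix an origin and two in-plane directions. Below is a minimal sketch of building a board frame from three non-collinear points expressed in the robot base frame; the axis conventions (origin at the first point, X along the first edge, Z along the board normal) are an assumption for illustration, not Mech-Vision's exact definition.

    import numpy as np

    def board_pose_from_three_points(p0, p1, p2):
        """Build a 4x4 board pose from three non-collinear touched points.

        p0 is taken as the origin, p0->p1 as the X axis, and the board normal
        (via the cross product) as the Z axis. Axis conventions are illustrative.
        """
        p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
        x = p1 - p0
        x = x / np.linalg.norm(x)
        z = np.cross(x, p2 - p0)
        z = z / np.linalg.norm(z)
        y = np.cross(z, x)
        pose = np.eye(4)
        pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x, y, z, p0
        return pose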

Camera intrinsic parameters

Intrinsic parameters are parameters internal to a camera, such as the focal length and lens distortion coefficients. These parameters are usually calibrated and stored in the camera before it leaves the factory.
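
The intrinsic parameters are what map a 3D point in the camera frame to a pixel. A minimal pinhole-projection sketch with assumed example values (lens distortion ignored for brevity):

    # fx, fy: focal lengths in pixels; (cx, cy): principal point. Example values only.
    fx, fy, cx, cy = 2400.0, 2400.0, 960.0, 600.0

    # Project a 3D point (in meters, expressed in the camera frame) to pixel coordinates.
    X, Y, Z = 0.05, -0.02, 0.80
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    print(u, v)  # pixel coordinates of the projected point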

Camera extrinsic parameters

Extrinsic parameters describe the pose transformation between the robot reference frame and the camera reference frame. The calibration of extrinsic parameters is also called hand-eye calibration, in which the camera is regarded as the eye and the robot as the hand. Because the spatial relationship between the robot and the camera differs from application to application, hand-eye calibration must be performed on-site to guarantee its accuracy.
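
Purely for illustration, the pose pairs collected at the calibration points can be fed into a generic hand-eye solver such as OpenCV's cv2.calibrateHandEye; this is not a statement about Mech-Vision's internal solver. The sketch below assumes an eye-in-hand setup and 4x4 pose matrices, and returns the camera pose relative to the flange.

    import cv2
    import numpy as np

    def solve_eih_extrinsics(flange_poses, board_poses):
        """Solve eye-in-hand extrinsics from pose pairs collected at each calibration point.

        flange_poses: flange pose in the robot base frame (4x4) at each point.
        board_poses:  calibration board pose in the camera frame (4x4) at the same points.
        Returns the 4x4 camera pose relative to the flange.
        """
        R_flange2base = [T[:3, :3] for T in flange_poses]
        t_flange2base = [T[:3, 3] for T in flange_poses]
        R_board2cam = [T[:3, :3] for T in board_poses]
        t_board2cam = [T[:3, 3] for T in board_poses]
        R, t = cv2.calibrateHandEye(R_flange2base, t_flange2base, R_board2cam, t_board2cam)
        T_flange_camera = np.eye(4)
        T_flange_camera[:3, :3] = R
        T_flange_camera[:3, 3] = t.ravel()
        return T_flange_camera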

Calibration point

It refers to a robot pose at which the camera captures an image of the calibration board during calibration. When the multiple random calibration board poses method is used to collect calibration data, the calibration points are the waypoints on the calibration path. When the TCP touch method is used, the image of the calibration board only needs to be captured once, after you touch three calibration circles that are not in a line.

Calibration circle

It refers to the circle-shaped feature points on the calibration board. During calibration, the software calculates the pixel coordinates of the calibration circle centers from the 2D image of the calibration board, calculates the centers' coordinates in the camera frame from the depth map, and then calculates the camera extrinsic parameters from the collected calibration circle data.
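
Converting a circle center from pixel coordinates plus depth into camera-frame coordinates only needs the intrinsic parameters. A minimal back-projection sketch, reusing the example intrinsic values from the intrinsics sketch above:

    # fx, fy: focal lengths in pixels; (cx, cy): principal point. Example values only.
    fx, fy, cx, cy = 2400.0, 2400.0, 960.0, 600.0

    def pixel_to_camera(u, v, depth):
        """Back-project a pixel (u, v) with depth (in meters) to camera-frame XYZ."""
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        return (x, y, depth)

    # Example: a circle center detected at pixel (1110, 540) with 0.8 m depth.
    print(pixel_to_camera(1110.0, 540.0, 0.80))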
