Terms and Concepts


This section introduces the key terms and concepts related to machine vision.

Solution and Project

Solution library

A resource library containing numerous practical features and industry cases.

Solution

A solution is the collection of functional configurations and data for the robot, communication, vision processing, and path planning that are required for a vision application.

A solution can contain several Mech-Vision projects, but at most one Mech-Viz project. In the same solution, if a Mech-Vision project contains the Path Planning Step, the Mech-Viz project and the path planning tool in Mech-Vision cannot be opened at the same time, and the Mech-Vision and Mech-Viz projects share the tool and target object configurations.

Project

Projects refer to Mech-Vision projects. A project cannot be used independently and must belong to a solution. A solution may include multiple projects.

Step

Steps are the basic building blocks of a project. A Step is the minimum algorithm unit for data processing. By connecting different Steps in a project, you can accomplish different data processing tasks.

Procedure

A Procedure groups Steps whose functions are closely related and that together achieve a specific purpose in the project.

Parameter recipe

Parameter recipes are sets of parameter settings for the same project that need to be adjusted for different situations. With parameter recipes, you do not need to build multiple projects with the same logic but different parameter settings to meet different on-site requirements. Instead, you only need to switch between parameter recipes in one project to adapt it to various scenarios, which improves productivity.

Hand-eye calibration

Hand-eye calibration

Hand-eye calibration establishes the transformation relationship between the camera and robot reference frames. With this relationship, the object poses determined by the vision system can be transformed into those in the robot reference frame, which guides the robot to perform its tasks.
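For illustration only, the minimal Python sketch below shows how such a transformation is typically applied: an object position reported in the camera reference frame is converted to the robot reference frame with a 4×4 homogeneous matrix. The matrix values and variable names are placeholders, not the output of any specific calibration.

    import numpy as np

    # Illustrative example: transform an object position from the camera frame
    # to the robot base frame using a hand-eye calibration result.
    # T_base_cam is a hypothetical 4x4 homogeneous matrix from calibration.
    T_base_cam = np.array([
        [0.0, -1.0, 0.0, 0.50],
        [1.0,  0.0, 0.0, 0.10],
        [0.0,  0.0, 1.0, 1.20],
        [0.0,  0.0, 0.0, 1.00],
    ])

    # Object position (in meters) detected by the vision system, camera frame,
    # written in homogeneous coordinates.
    p_cam = np.array([0.05, -0.02, 0.80, 1.0])

    # The same point expressed in the robot base frame.
    p_base = T_base_cam @ p_cam
    print(p_base[:3])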

Intrinsic parameters

The camera intrinsic parameters describe the internal properties of the camera, such as the focal length, principal point, and lens distortion coefficients. These parameters are typically fixed for a specific camera model and remain unchanged during the camera's usage.
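As an illustration, the following Python sketch shows how intrinsic parameters are commonly used in a pinhole camera model to project a 3D point in the camera reference frame onto pixel coordinates. The focal lengths and principal point below are made-up values, not those of any specific camera.

    import numpy as np

    # Illustrative pinhole projection with an assumed intrinsic matrix K.
    fx, fy = 1200.0, 1200.0   # focal lengths in pixels (made-up values)
    cx, cy = 640.0, 480.0     # principal point in pixels (made-up values)
    K = np.array([
        [fx, 0.0, cx],
        [0.0, fy, cy],
        [0.0, 0.0, 1.0],
    ])

    point_cam = np.array([0.10, -0.05, 0.80])  # X, Y, Z in meters, camera frame
    uvw = K @ point_cam                        # project onto the image plane
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]    # divide by depth to get pixel coordinates
    print(u, v)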

Extrinsic parameters

The camera extrinsic parameters describe the position and orientation of the camera in the world coordinate system.

Euler angles

Euler angles are used to describe the orientation of an object in the 3D space. The object’s rotation in the 3D space can be denoted by 3 angles, i.e., pitch, yaw, and roll.
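The sketch below illustrates, using the SciPy library, how a set of Euler angles can be converted to a rotation matrix or quaternion. The angle convention (intrinsic Z-Y-X, i.e., yaw-pitch-roll) and the example values are assumptions; the convention actually used depends on the robot brand and software involved.

    from scipy.spatial.transform import Rotation as R

    # Illustrative conversion from Euler angles to other orientation formats.
    # The Z-Y-X (yaw-pitch-roll) convention and the values are assumptions.
    yaw, pitch, roll = 30.0, 10.0, -5.0  # degrees
    rot = R.from_euler("ZYX", [yaw, pitch, roll], degrees=True)

    print(rot.as_matrix())  # the orientation as a 3x3 rotation matrix
    print(rot.as_quat())    # the same orientation as a quaternion (x, y, z, w)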

TCP (Tool Center Point)

The tool center point is the point defined at the end of the robot tool. When we say that the robot should move to a specific point in space to complete tasks such as picking, we actually mean that its TCP should move to that point.

Vision Processing

Point cloud

A point cloud is a collection of points in 3D space, each containing at least three coordinate values (X, Y, Z). It is used to accurately describe the geometric shape of an object's surface.
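As a simple illustration, a point cloud can be represented as an N×3 array with one row of (X, Y, Z) coordinates per point. The values below are random placeholders standing in for real sensor data.

    import numpy as np

    # Illustrative point cloud: each row is the (X, Y, Z) of one point.
    points = np.random.rand(1000, 3)

    centroid = points.mean(axis=0)                     # average position of all points
    extent = points.max(axis=0) - points.min(axis=0)   # axis-aligned bounding-box size
    print(centroid, extent)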

Pose

Pose describes the posture of an object by defining its position and orientation; a simple illustration follows the list below.

  • Position: Represented by the coordinates of the object’s center or reference point in 3D space, typically expressed as three real numbers.

  • Orientation: Describes the object’s direction in 3D space, commonly represented by a rotation matrix, Euler angles, or quaternions.
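The following illustrative sketch stores a pose as a position vector plus a quaternion and converts it to an equivalent 4×4 homogeneous transform; the numbers are placeholders.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    # Illustrative pose: a position vector plus an orientation quaternion (x, y, z, w).
    position = np.array([0.40, -0.15, 0.30])                # meters
    orientation = R.from_quat([0.0, 0.0, 0.3827, 0.9239])   # about 45 degrees around Z

    # Build the equivalent 4x4 homogeneous transform for this pose.
    T = np.eye(4)
    T[:3, :3] = orientation.as_matrix()
    T[:3, 3] = position
    print(T)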

Mask

A specified image, shape, or object is usually used to mask part or all of an image, so as to control which area is processed or how it is processed. The particular image or object used for masking is called a mask.
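As an illustration, the OpenCV sketch below keeps only the pixels inside a circular mask and suppresses everything else; the file name and circle geometry are placeholders.

    import cv2
    import numpy as np

    # Illustrative use of a binary mask: only pixels inside a circle are kept.
    # "scene.png" is a placeholder file name.
    image = cv2.imread("scene.png")
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    cv2.circle(mask, (320, 240), 100, 255, -1)   # filled circle: mask value 255 inside

    # Pixels where the mask is 0 are suppressed; pixels where it is 255 are kept.
    masked = cv2.bitwise_and(image, image, mask=mask)
    cv2.imwrite("scene_masked.png", masked)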

ROI (Region of Interest)

Setting a region of interest excludes the parts of the scene that are unnecessary for vision data processing, such as background objects, pallets, and bin edges.
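For illustration, the sketch below keeps only the points of a point cloud that fall inside an axis-aligned 3D box and discards the rest; the box limits and the point data are made-up values.

    import numpy as np

    # Illustrative 3D ROI: keep only the points inside an axis-aligned box,
    # discarding background such as the pallet or bin edges.
    points = np.random.rand(5000, 3) * 2.0    # placeholder point cloud
    roi_min = np.array([0.2, 0.2, 0.0])        # lower corner of the ROI box
    roi_max = np.array([1.2, 1.2, 0.8])        # upper corner of the ROI box

    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    roi_points = points[inside]
    print(len(roi_points), "of", len(points), "points kept")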

Deep Learning

Model Package

After a model has been trained in Mech-DLK, it can be exported as a model package, which contains one or more models. In Mech-Vision, the model package can be used for inference on image data.

Super Model Package

A universal model provided by Mech-Mind for recognizing cartons or sacks. If the recognition performance is unsatisfactory, Mech-DLK can be used to fine-tune the model.
