Terminology

Tip

Click the name of a term for more detailed information.

Mech-Mind Vision System

The Mech-Mind Vision System is the full vision-based solution provided by Mech-Mind Robotics, including Mech-Eye Industrial 3D Cameras, the Mech-Mind Software Suite, the robot, peripheral devices, accessories, etc.

Vision Processing: Mech-Vision

Software Function Structure

Solution

A solution is a collection of Mech-Vision project(s) for an on-site application. The file directory of a solution stores the data files for project data, interface configuration, robot model, end tools, etc. that are required for vision processing (and for path planning for some applications).

Mech-Vision Project

A function unit that takes 2D and 3D image data from the camera(s), performs vision processing, and outputs the vision processing result. A project includes the vision data and the Steps connected together to process the data flows.

Step

In Mech-Vision, a Step is a minimum functional unit in a project. Vision data flows through the Steps to get processed.

Input port

An input port of a Step takes the input data of specified type and purpose from a preceding Step for the Step to process.

Output port

An output port of a Step outputs data of a specified type and purpose from the Step for the succeeding Step(s) to process or use in other operations.

Procedure

A Procedure contains multiple connected Steps that together implement a particular vision data processing function.

Before Processing

Scene

Everything captured by the camera, including the background, bin, objects, etc.

Target object

Target objects are the things in the scene whose poses need to be calculated and which need to be processed/picked by the robot. Target objects can be workpieces, partitions, bins, etc., depending on the application.

Background

The scene without the objects.

Region of interest (ROI)

The region of the scene that excludes the surrounding parts unnecessary for vision data processing. Such parts may be background objects, pallets, bin edges, etc. An ROI can be set either by selecting a 3D box on the point cloud or a 2D box on the depth map/image.

Highest layer

The part of the scene within a specified height range, usually at the top of the scene, containing the point clouds of the objects most convenient to process/pick. Highest layer point cloud extraction is frequently used in carton palletizing/depalletizing.
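As a sketch of the idea, the pure-Python snippet below keeps only the points within a given height band below the topmost point. The band width (`layer_thickness`) and the point format are assumptions for illustration, not the actual Step's parameters.

```python
# Illustrative sketch: keep only the points within `layer_thickness`
# of the topmost point's Z value (the "highest layer").
def extract_highest_layer(points, layer_thickness):
    z_max = max(p[2] for p in points)
    return [p for p in points if p[2] >= z_max - layer_thickness]

# Cartons at two stacking heights, points as (X, Y, Z) in millimeters:
cloud = [(0, 0, 400), (10, 0, 398), (0, 10, 200), (10, 10, 199)]
print(extract_highest_layer(cloud, layer_thickness=5))
# Only the two points near Z = 400 remain.
```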

Calibration pose

The robot pose for calibration, in the form of TCP, flange pose or joint positions as required.

Calibration circle

The circles on the calibration board.

Eye in Hand (EIH)

EIH is the setup where the camera is mounted on the flange at the end of the robot arm.

Eye to Hand (ETH)

ETH is the setup where the camera is installed on a bracket independent from the robot.

During Processing

3D matching

The process of fitting an object point cloud model onto the point cloud of an object in the scene, to find the poses of the objects in the scene. An object point cloud model reflects object shape and features and carries the defined object pose.

Model

The object point cloud model used in 3D matching. The model can be made with the Matching Model and Pick Point Editor, from either the object point cloud or an STL model.

Surface model

The model used to match objects by their surface features. A surface model includes the object surface feature parts and excludes other unnecessary parts.

Surface matching

Matching by object surface features. When the object has obvious fluctuating features on its face(s), surface matching is recommended. Such objects include crankshafts, rotors, steel bars, etc.

Edge model

The model used to match objects by their edge features. An edge model includes the object edge feature parts, excluding other unnecessary parts.

Edge matching

Matching by object edge features. When the object is flat but presents clear edge features under the camera, edge matching is recommended. Such objects include panels, track shoes, connecting rods, brake discs, etc. The Matching Model and Pick Point Editor can help generate an edge model.

2D matching

The process of fitting a 2D template, which reflects object shape and features and carries the defined object pose, onto the image of the scene, to find the poses of the objects in the scene.

2D template

The 2D shape, reflecting object shape and features, that is used in 2D matching.

Deep learning

In Mech-Vision, deep learning is a technique usually used to recognize and classify objects and find object poses. A trained deep learning model is exported from Mech-DLK and used by deep learning Steps in Mech-Vision.

Inference

Using a trained deep learning model to make predictions on the actual vision data to obtain information such as poses, classification labels, etc.

Intrinsic Parameters

Intrinsic parameters are measures of the properties of a camera itself, including the focal length, the lens distortion, etc. These parameters are usually calibrated and stored in the camera before the camera leaves the factory.

Extrinsic Parameters

Extrinsic parameters define how poses are transformed between the robot reference frame and the camera reference frame.
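As a minimal sketch, assuming the extrinsic parameters are given as a 3×3 rotation matrix R and a translation vector t (the values below are made up for illustration), a point in the camera frame maps into the robot base frame as p_robot = R · p_camera + t:

```python
# Sketch: apply extrinsic parameters (rotation R, translation t) to
# transform a point from the camera frame into the robot base frame.
# Real extrinsics come from hand-eye calibration; these are examples.
def transform_point(R, t, p):
    """p_robot = R * p_camera + t (3x3 rotation, 3-vector translation)."""
    return tuple(
        sum(R[i][j] * p[j] for j in range(3)) + t[i]
        for i in range(3)
    )

# Example: camera rotated 180 degrees about X relative to the base,
# mounted 1000 mm above the base origin (illustrative values).
R = [[1, 0, 0],
     [0, -1, 0],
     [0, 0, -1]]
t = (0, 0, 1000)
print(transform_point(R, t, (100, 50, 300)))  # (100, -50, 700)
```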

Point Cloud

A point cloud is a set of data points in space that represents a 3D shape or object.

After Processing

Vision result

A vision result is the output of one execution of a Mech-Vision project. A vision result may contain multiple vision points and other data.

Vision point

A vision point refers to a calculated pose and its associated data, as follows.

Object pose (a.k.a. vision pose)

The pose of the object calculated by the Mech-Vision project. A pose contains the position information (X, Y, Z coordinates) and the orientation information (either in Euler angles or in quaternions).
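For illustration, the snippet below converts ZYX (yaw-pitch-roll) Euler angles into a quaternion (w, x, y, z). The ZYX convention here is an assumption for the example; Euler angle conventions vary by robot brand.

```python
import math

# Sketch: convert ZYX (yaw-pitch-roll) Euler angles to a quaternion
# (w, x, y, z). The ZYX order is assumed for this illustration only.
def euler_zyx_to_quaternion(yaw, pitch, roll):
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    w = cr * cp * cy + sr * sp * sy
    x = sr * cp * cy - cr * sp * sy
    y = cr * sp * cy + sr * cp * sy
    z = cr * cp * sy - sr * sp * cy
    return (w, x, y, z)

# A 90-degree yaw alone is a rotation about Z: (cos 45, 0, 0, sin 45).
print(euler_zyx_to_quaternion(math.pi / 2, 0, 0))
```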

Label

The string label attached to each pose, usually indicating the object type.

Object dimensions

The dimensions of the object corresponding to the object pose, expressed as (length, width, height), (radius, height), or in other forms.

Pick point

The pose on the object at which the robot can pick it. In some cases, the pick point is equivalent to the object pose. The pick point and the robot’s picking pose (in the form of TCP) usually coincide but have opposite Z axes.

Note

The information contained in a vision point may include other custom types of data associated with the object pose, such as recommended robot velocity, pose offset, etc. You can customize the data types in the “Procedure Out” Step in Mech-Vision.

Static Background

The background parts, i.e., the parts other than the target objects to pick, in the captured images and depth maps. This information can be used to filter out the background or to calculate the height of the objects.

Procedure

A Procedure is a container of multiple Steps in Mech-Vision that together implement a defined function.

Instance Segmentation

Instance segmentation is the process of recognizing objects in an image, marking their contours pixel by pixel, and labeling them based on their categories.

Image Classification

Image classification is an image processing method that classifies object images based on the object categories in them.

Morphological Transformation

Morphological transformations refer to some simple operations on an image, such as erosion, dilation, etc.

Erosion

Erosion is one of the fundamental operations in morphological image processing. It “erodes” spots with high brightness in the input image and outputs an image with reduced bright regions.

Dilation

Dilation is one of the fundamental operations in morphological image processing. Contrary to erosion, it “expands” spots with high brightness in the input image and outputs an image with enhanced bright regions. Please note that erosion and dilation processes are not reversible.

Normal

A normal is a vector perpendicular to a surface at a given point; it describes the orientation of the surface at that point.
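A minimal sketch: for a locally planar patch, a normal can be estimated as the normalized cross product of two vectors lying on the surface.

```python
import math

# Sketch: unit normal of the plane spanned by surface vectors u and v,
# computed as the normalized cross product u x v.
def surface_normal(u, v):
    n = (u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0])
    length = math.sqrt(sum(c * c for c in n))
    return tuple(c / length for c in n)

# Two vectors lying in the XY plane -> the normal points along +Z.
print(surface_normal((1, 0, 0), (0, 1, 0)))  # (0.0, 0.0, 1.0)
```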

Threshold

A threshold specifies an upper or lower limit on a measure. When a measure reaches a threshold, something else changes or happens.

Boolean

Boolean is a data type. A boolean value is either “True” or “False”.

Hash

A hash value is the value returned by a hash function. It can be loosely understood as an ID of a piece of data.
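For example, with Python's standard hashlib, identical data always produces the same digest, so the digest can serve as an ID for the data:

```python
import hashlib

# A hash digest acts as a short "ID" for a piece of data: identical
# inputs always yield the same digest; different inputs almost
# certainly yield different digests.
data = b"point_cloud_frame_001"
digest = hashlib.sha256(data).hexdigest()
print(digest[:16])  # first 16 hex chars of the 64-char SHA-256 digest

assert hashlib.sha256(data).hexdigest() == digest
assert hashlib.sha256(b"point_cloud_frame_002").hexdigest() != digest
```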

Robot Path Planning: Mech-Viz

Software Function Structure

Mech-Viz Project

Projects refer to the robot path planning projects created in Mech-Viz. Once you have completed the necessary setup of the project, you can use the project to plan a path and guide the robot to move. All the configurations of the project are stored in the folder with the same name as the project.

Project Resources

Project resources refer to various fundamental resources used in the project, including the robot, tools, workobjects, and scene objects.

Workflow

The logic flowchart and the related parameter settings for robot path planning.

Step

Steps are function modules for robot programming.

Procedure

A Procedure contains multiple connected Steps.

Robot&Object Settings

Simulation space

The space containing all contents involved in the workflow, including the robot, the workobjects, the bin, and other objects.

Scene object

Any solid bodies aside from the robot and the workobjects, including the bin, the pallet, elements of the working platform, etc.

Visualization model

The solid-body model used to visualize the corresponding object in the space. It is not used for collision detection.

Collision model

The solid-body model used to detect collisions of the corresponding object in the space during path planning.

Workobject

The object that the robot needs to process/pick.

Workobject symmetry

The property that the appearance of a workobject, after rotation around its rotational symmetry axis by a certain angle, is considered to coincide with its appearance before the rotation.

N-fold symmetry

The property that after a rotation by an angle of 360°/N, the workobject shape is considered unchanged.

Number of symmetry folds

The value of N in the definition of “N-fold symmetry”.

Symmetry angle

The value of 360°/N in the definition of “N-fold symmetry”.

Note

number of symmetry folds * symmetry angle = 360°
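A quick numeric illustration of this relation:

```python
# number of symmetry folds * symmetry angle = 360 degrees
def symmetry_angle(n_folds):
    return 360.0 / n_folds

print(symmetry_angle(6))  # 60.0, e.g. a hex nut (6-fold symmetry)
print(symmetry_angle(4))  # 90.0, e.g. a square plate (4-fold symmetry)
assert symmetry_angle(6) * 6 == 360.0
```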

Workobject pick point

The pose on the workobject on which the robot can pick the workobject. In some cases, the pick point is equivalent to the workobject reference frame. In other cases, the pick point is obtained by offsetting the workobject reference frame, especially when one workobject has multiple pick points. The pick point and the robot’s picking pose (in the form of TCP) usually coincide but have opposite Z axes.

Picking relaxation

The allowance for the tool to rotate around the workobject reference frame and attempt picking at different angles, to facilitate picking.

Tool

The device mounted on the robot end that performs processing/picking jobs.

Gripper

A tool used to pick workobjects, by hooking, vacuuming, grabbing, etc.

Array gripper

A gripper that has an array of sub-grippers.

Vacuum gripper

A gripper that picks objects with suction cups.

Edge-corner ID

The numbers used to identify the specific edges or corners of suction cups on a vacuum gripper.

Path Planning

Path

A path is a sequence of waypoints that the robot needs to reach one by one.

Waypoint

A point (presented as a robot pose in JPs or TCP) in the path that the robot needs to reach. A waypoint can contain additional information including label, motion type (linear/joint move), velocity, acceleration, etc.
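One way to picture a waypoint is as a small record of a pose plus its motion attributes. The field names below are illustrative, not the actual Mech-Viz schema:

```python
from dataclasses import dataclass

# Hypothetical waypoint record for illustration only.
@dataclass
class Waypoint:
    pose: tuple                  # TCP (x, y, z, qw, qx, qy, qz) or JPs
    pose_type: str = "tcp"       # "tcp" or "jps"
    motion_type: str = "joint"   # "joint" or "linear"
    velocity: float = 1.0        # fraction of maximum speed
    label: str = ""

# A two-waypoint path: approach from above, then move down linearly to pick.
path = [
    Waypoint(pose=(600, 0, 400, 1, 0, 0, 0), label="approach"),
    Waypoint(pose=(600, 0, 250, 1, 0, 0, 0), motion_type="linear",
             velocity=0.2, label="pick"),
]
print([wp.label for wp in path])  # ['approach', 'pick']
```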

Home position

A default robot pose that the robot should return to before the start of a job or after the completion of a job.

Initial pose

The robot pose before the beginning of a job. Path planning needs to accept the initial pose and take it into consideration.

Trajectory

A trajectory is the record of a sequence of waypoints that the robot has physically reached, with timestamps.

Robot pose

The status of the robot in the 3D space, presented in the form of TCP, JPs, or flange pose.

Workobject waypoint

The waypoint at which the robot processes/picks the workobject.

Picking waypoint

The waypoint at which it is planned that the robot should pick a workobject.

Picking pose

The pose of the robot when it picks the workobject.

Placing waypoint

The waypoint at which it is planned that the robot should place a workobject.

Placing pose

The pose of the robot when it places the workobject.

Point cloud cube

The cube simulated around each point in the point cloud to define point cloud volume for collision detection.

Point cloud cube size

The edge length of a point cloud cube.

Collision volume

During simulation, if one party involved in collision detection is the point cloud, the collision volume is the number of point cloud points overlapping with the other party’s collision model, multiplied by the volume of one point cloud cube.
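The rule above reduces to a simple product; a sketch:

```python
# Collision volume = number of overlapping point cloud points
#                    * volume of one point cloud cube (edge length cubed).
def collision_volume(n_overlapping_points, cube_size):
    """cube_size is the edge length of a point cloud cube."""
    return n_overlapping_points * cube_size ** 3

# 120 overlapping points, cube edge length 2 mm -> 120 * 8 = 960 mm^3.
print(collision_volume(120, 2.0))  # 960.0
```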

Robot

A robot refers to a system composed of rigid bodies connected by joints that moves to achieve a purpose such as picking, gluing, spraying, etc.

Tool Center Point (TCP)

TCP refers to the point at the end of the end effector, expressed as a pose.

Joint Positions (JPs)

Joint positions (JPs) are also known as joint angles. They are the angles that describe the status of the robot joints, i.e., the angles formed between the links of the robot.

Pose

Pose is the values that describe the position (typically in XYZ coordinates) and orientation (typically in quaternions or Euler angles) of an object.

Object Pose

An object pose is usually the pose of the object’s center in the robot base reference frame. When necessary, it can instead be the pose of a feature point of the object other than the center.

Rotational Symmetry of the Workobject

Object symmetry describes in what orientations a target object can be treated in the same way for picking. It helps improve the success rate of picking and trajectory planning, especially when it is hard for the robot to reach a target object from certain angles.

Robot Singularity

Robot singularities are robot configurations in which the robot loses the ability to move its end effector in certain directions. For the end effector to pass through these poses, the robot joint speed would theoretically have to be infinite (not achievable in practice).

Singularity Threshold

A singularity threshold is the maximum joint angular velocity that the robot is allowed to reach. It is used to check singularities in Mech-Viz.

Singularity Vel Decelerate Ratio

Singularity velocity decelerate ratio refers to the lowest acceptable ratio by which the robot’s velocity is reduced when the robot approaches a singularity.

Joint Motion

Joint motion refers to the motion type of the robot in which the robot moves according to a defined change in joint angles.

Linear Motion

Linear motion refers to the motion type of the robot in which the end effector moves in a straight line between two targets.

Euler angles

Euler angles are a set of three angles that define the 3D orientation of an object.

Quaternions

A quaternion is a set of four values that define the 3D orientation of an object.

Mech-Center Terms

Big Endian / Little Endian

Big Endian: The most significant byte is placed at the lower address (a.k.a. network byte order).

Lower address --------> Higher address
0x12 | 0x34 | 0x56 | 0x78

Little Endian: The least significant byte is placed at the lower address.

Lower address --------> Higher address
0x78 | 0x56 | 0x34 | 0x12
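Python's standard struct module can demonstrate both byte orders for the 32-bit value 0x12345678 shown above:

```python
import struct

# ">" = big endian (network byte order), "<" = little endian;
# "I" = unsigned 32-bit integer.
value = 0x12345678
big = struct.pack(">I", value)
little = struct.pack("<I", value)

print(big.hex())     # 12345678  (most significant byte first)
print(little.hex())  # 78563412  (least significant byte first)
```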

Others

Flange

The flange is the plate at the end of the robot arm that connects the robot to the tool; it is mainly used for strengthening or attachment.

Dongle

This is a security device that enables certain software products.

Industrial PC

An industrial PC is a ruggedized computer intended for industrial purposes. It can be used as an industrial controller.

Programmable Logic Controller (PLC)

A PLC is a logic controller that is used for automated controls.

Takt time

This is the overall processing time taken from capturing the image to the robot completing a certain task. Specifically, it includes the time required for the camera to capture the image, for Mech-Vision to process the data, for Mech-Viz to plan the path, and for the robot to complete the motion.