Positioning and Picking (2D Template Matching)

This documentation describes pre-release version 2.2.0.

This section describes the workpiece-recognition configuration workflow using 2D template matching. This method searches for and locates workpiece features that match templates in 2D images and calculates workpiece poses. It is suitable for scenarios where recognition relies on workpiece edges or prominent shape features, and it requires workpieces with clear edge features and stable shapes.

Click Configuration Wizard, select the Positioning and Picking scenario, and then select the 2D Template Matching method to enter this workflow.

Workflow

The complete recognition workflow includes four steps:

  1. Image Preprocessing: Perform preprocessing operations such as color conversion, enhancement, denoising, and morphological transformation on input images to improve image quality, highlight workpiece features, reduce background interference, and provide reliable data for subsequent workpiece recognition.

  2. Workpiece Recognition: Set regions of interest and flexibly configure matching parameters according to workpiece features for accurate recognition.

  3. Workpiece Pose Calculation: Using 2D camera extrinsic calibration data and teaching information of the reference workpiece (the workpiece used for teaching), automatically convert recognized 2D workpiece poses to 3D poses required for robot picking to achieve precise picking.

  4. General Settings: Configure pose-filtering rules and output ports to ensure output results meet downstream picking requirements.

Image Preprocessing

Before recognizing workpieces, you can enable Convert Image Color Space or Image Preprocessing based on input-image quality and adjust the related parameters to make image features clearer, thereby improving recognition accuracy and efficiency.

Convert Image Color Space

Converting image color space transforms input images from one color space to another, for example from BGR to grayscale or from BGR to HSV. Through color-space conversion, image features can be highlighted better for subsequent image processing.

For detailed parameter descriptions and tuning examples, refer to Convert Image Color Space.
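To illustrate the idea (this is a conceptual sketch, not the product's implementation), a BGR-to-grayscale conversion weights the three color channels; the standard ITU-R BT.601 luma weights are used below:

```python
import numpy as np

def bgr_to_gray(image: np.ndarray) -> np.ndarray:
    """Convert a BGR image (H x W x 3) to grayscale using the
    standard ITU-R BT.601 luma weights."""
    b, g, r = image[..., 0], image[..., 1], image[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return gray.astype(np.uint8)

# A 1x2 BGR image: one pure-blue pixel and one pure-white pixel
img = np.array([[[255, 0, 0], [255, 255, 255]]], dtype=np.uint8)
print(bgr_to_gray(img))  # blue maps to a dark value (29), white stays 255
```

Conversions such as BGR to HSV follow the same pattern: a per-pixel mapping into a space where the feature of interest (here, brightness; for HSV, hue) is easier to separate.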

Image Preprocessing

In image preprocessing, you can apply operations such as enhancement, denoising, morphological transformation, grayscale inversion, and edge extraction to input images.

For detailed parameter descriptions and tuning examples, refer to Image Preprocessing.
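As an illustration of two of these operations (assumed sketches, not taken from the software), the following shows grayscale inversion and a basic 3×3 morphological erosion, which removes single-pixel noise specks:

```python
import numpy as np

def invert(gray: np.ndarray) -> np.ndarray:
    """Grayscale inversion: dark features become bright and vice versa."""
    return 255 - gray

def erode(binary: np.ndarray) -> np.ndarray:
    """Morphological erosion with a 3x3 structuring element: a pixel keeps
    its foreground value only if its whole 3x3 neighbourhood is foreground."""
    h, w = binary.shape
    out = np.zeros_like(binary)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y, x] = binary[y - 1:y + 2, x - 1:x + 2].min()
    return out

speck = np.zeros((5, 5), dtype=np.uint8)
speck[2, 2] = 255          # a single-pixel "noise" speck
print(erode(speck).max())  # erosion removes it entirely -> 0
```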

Preview Preprocessing Results

After completing the above parameter settings, click Run Step or Run Project to preview preprocessing results.

Then click Next to enter the workpiece-recognition workflow.

Workpiece Recognition

After image preprocessing, configure recognition settings, including setting regions of interest and adjusting template-matching parameters, to achieve accurate workpiece recognition.

Add Recognition Parameter Group

After entering the workpiece-recognition workflow, the system creates one recognition parameter group by default to manage current regions of interest and related parameters.

  • Management Operations: Right-click the parameter-group name, or directly click the function buttons on the right side of the parameter group, to perform operations such as renaming, deleting, and copying.

  • Create New Parameter Group: If a new parameter group is needed, click Add in the upper-right corner to create one. Each parameter group can independently set recognition regions and parameters without affecting others.


Set Recognition Region

When setting recognition regions, choose Entire Image as Recognition Region or Custom Recognition Region according to actual needs. If Custom Recognition Region is selected, click the Draw button to select the region manually. Ensure that all recognition targets fall within the selected region.

  • Entire Image as Recognition Region: Performs recognition on the whole image. This is typically suitable for scenarios where recognition targets are widely distributed.

  • Custom Recognition Region: Performs recognition only in selected regions. This is typically suitable when only part of an image needs attention or irrelevant areas (such as background and fixtures) should be excluded, helping improve recognition efficiency and accuracy.
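Conceptually, a custom recognition region simply restricts matching to a sub-image; in array terms this is a rectangular crop (a hypothetical sketch, not the software's API):

```python
import numpy as np

def crop_roi(image: np.ndarray, x: int, y: int, w: int, h: int) -> np.ndarray:
    """Restrict recognition to a rectangular region of interest.
    (x, y) is the top-left corner; w and h are the region's size."""
    return image[y:y + h, x:x + w]

img = np.arange(100).reshape(10, 10)       # stand-in for a grayscale image
roi = crop_roi(img, x=2, y=3, w=4, h=5)
print(roi.shape)  # (5, 4): only this region would be searched
```

Because the search runs over fewer pixels and excludes background clutter, both speed and accuracy typically improve.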

Recognize Workpieces

Set Workpiece Template

After setting recognition regions, select or edit workpiece templates for subsequent recognition. Click Edit to enter the 2D Matching Template Editor.

Select representative and stable edge features from the image to generate templates, so that the system can later automatically search for and locate workpieces matching the template features, with unique and accurate results. For details, refer to 2D Matching Template Editor.

After each template edit, click Update to apply the latest configuration.

Adjust Recognition Parameters

After selecting templates, if multiple workpieces need to be recognized, it is recommended to first set the Upper Limit of Matching Result Quantity (default: 1) according to actual on-site needs, to limit the maximum number of matching results output each time.

After configuration, click Run Step to view template-matching results and overall recognition performance.

If recognition performance is not ideal, continue adjusting other parameters based on actual workpiece features and recognition requirements.

For detailed parameter descriptions, refer to 2D Matching.
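For readers curious how template matching works internally, the sketch below implements a toy normalized cross-correlation search in NumPy. The product's actual algorithm (edge-based matching with configurable parameters) is more sophisticated, so treat this purely as an illustration:

```python
import numpy as np

def match_template(image: np.ndarray, template: np.ndarray):
    """Slide `template` over `image` and return the top-left position
    (x, y) with the highest normalized cross-correlation score."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    best_score, best_pos = -1.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            patch = image[y:y + th, x:x + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (x, y)
    return best_pos, best_score

img = np.zeros((8, 8))
img[3:5, 4:6] = 1.0                  # the "workpiece" feature
tmpl = np.zeros((4, 4))
tmpl[1:3, 1:3] = 1.0                 # template: same feature with margin
pos, score = match_template(img, tmpl)
print(pos)  # (3, 2): x (column), y (row) where the template fits best
```

A score close to 1.0 means the window matches the template almost perfectly; the Upper Limit of Matching Result Quantity would correspond to keeping the N best-scoring positions instead of just one.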

Then click Next to enter the workpiece-pose calculation workflow.

Workpiece Pose Calculation

This workflow collects reference data through teaching operations and establishes the mapping between vision recognition results and robot picking poses, automatically converting real-time recognized 2D workpiece poses into the 3D picking poses required by the robot.

The required teaching operations and parameters vary with the camera mounting mode (Eye to hand (ETH) or Eye in hand (EIH)).

Before starting the teaching operations, make sure there is only one workpiece in the camera's field of view (if there are other workpieces, remove them from the carrier first), and click Run Project so that the system recognizes only this reference workpiece.
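The idea behind teaching can be sketched as follows: the offset between the reference workpiece's 2D pose and its taught picking pose is fixed, so for any newly recognized workpiece the picking pose is obtained by composing in-plane rigid transforms. The Python sketch below covers only the planar case (the real conversion also applies the camera extrinsics and, in EIH mode, the flange pose at image capture):

```python
import math

def compose(a, b):
    """Compose two in-plane rigid poses (x, y, theta): result = a ∘ b."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + bx * math.cos(at) - by * math.sin(at),
            ay + bx * math.sin(at) + by * math.cos(at),
            at + bt)

def invert(p):
    """Inverse of an in-plane rigid pose (x, y, theta)."""
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-x * c - y * s, x * s - y * c, -t)

def picking_pose(ref_2d, ref_pick, new_2d):
    """Transfer the taught picking pose to a newly recognized workpiece:
    T_pick_new = T_new ∘ T_ref⁻¹ ∘ T_pick_ref (all in the same plane)."""
    return compose(compose(new_2d, invert(ref_2d)), ref_pick)

# Reference workpiece at the origin; taught pick 10 mm along its X axis
ref_2d, ref_pick = (0.0, 0.0, 0.0), (10.0, 0.0, 0.0)
# New workpiece shifted by (5, 3) and rotated 90 degrees
new_2d = (5.0, 3.0, math.pi / 2)
print(picking_pose(ref_2d, ref_pick, new_2d))  # ≈ (5.0, 13.0, 1.5708)
```

Because the new workpiece is rotated 90°, the 10 mm pick offset that pointed along X for the reference now points along Y, which is exactly the behavior teaching is meant to capture.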

Teaching Instructions for ETH Scenarios

Operation Procedure

  1. Place the reference workpiece within the camera's field of view for image capture and recognition, and keep the workpiece position unchanged during the whole teaching process.

  2. Click Get to obtain the currently recognized 2D pose of the reference workpiece.

  3. Use the teach pendant to move the robot precisely to the expected picking point of the workpiece. Click Edit and enter the robot flange pose when picking the reference workpiece. This pose is the robot flange pose read on the teach pendant.

  4. After completion, keep the workpiece position unchanged and use the teach pendant to move the robot away from the picking point.

Parameter Description

Select Camera Step

Description: Select the 2D camera step for which extrinsic calibration has been completed, to ensure the calibration data is correctly applied to the current step.

Reference Workpiece 2D Pose

Description: The 2D pose of the reference workpiece recognized during image capture.

Reference Picking Pose

Description: The robot flange pose when picking the reference workpiece. This pose is the flange pose in the robot coordinate system read from the teach pendant.

Teaching Instructions for EIH Scenarios

Operation Procedure

  1. Use the teach pendant to move the robot to the image-capture point. Click Edit and enter the robot's flange pose at the image-capture point. This pose is the flange pose in the robot coordinate system read from the teach pendant.

  2. Place the reference workpiece within the camera's field of view for image capture and recognition, and keep the workpiece position unchanged during the whole teaching process.

  3. Click Get to obtain the currently recognized 2D pose of the reference workpiece.

  4. Use the teach pendant to move the robot precisely to the expected picking point of the workpiece. Click Edit and enter the robot flange pose when picking the reference workpiece. This pose is the robot flange pose read on the teach pendant.

  5. After completion, keep the workpiece position unchanged and use the teach pendant to move the robot away from the picking point.

Parameter Description

Select Camera Step

Description: Select the 2D camera step for which extrinsic calibration has been completed, to ensure the calibration data is correctly applied to the current step.

Reference Workpiece 2D Pose

Description: The 2D pose of the reference workpiece recognized during image capture.

Reference Picking Pose

Description: The robot flange pose when picking the reference workpiece. This pose is the flange pose in the robot coordinate system read from the teach pendant.

Flange Pose at Image Capture

Description: The flange pose of the robot at the image-capture point. This pose is the flange pose in the robot coordinate system read from the teach pendant.

Robot Service Name in Communication Component

Description: Used to select the robot model. It must be consistent with the robot model connected in the communication component.

After teaching is completed, place the other workpieces back onto the carrier and click Run Project again so that the system can recognize all workpieces in batches and output their poses.

After completing workpiece-pose calculation, click Next to enter general settings workflow.

General Settings

In this workflow, you can configure auxiliary functions beyond visual recognition, including pose-filtering rules and output ports.

Set Pose Filtering Rules

Based on actual requirements and the pose data in Recognition Results, set upper and lower limits in the X, Y, and Rz directions to filter output workpiece poses and remove results outside the configured ranges.

Click Run Step or Run Project to view filtering status.
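In effect, pose filtering is a per-axis range check; a minimal sketch (the field names `x`, `y`, and `rz` are illustrative, not the software's actual data format):

```python
def filter_poses(poses, x_range, y_range, rz_range):
    """Keep only poses whose X, Y, and Rz values fall within the
    configured [lower, upper] limits; all other poses are discarded."""
    def in_range(value, limits):
        lower, upper = limits
        return lower <= value <= upper
    return [p for p in poses
            if in_range(p["x"], x_range)
            and in_range(p["y"], y_range)
            and in_range(p["rz"], rz_range)]

poses = [{"x": 100, "y": 50, "rz": 10},
         {"x": 400, "y": 50, "rz": 10}]   # second pose exceeds the X limit
kept = filter_poses(poses,
                    x_range=(0, 300), y_range=(0, 200), rz_range=(-180, 180))
print(len(kept))  # 1: only the in-range pose is output downstream
```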

Configure Output Ports

Here, select output ports according to actual workpiece requirements. By default, workpiece names and recognized poses are output.

  • Matching Score: Outputs a matching-score list used to evaluate the quality of matching results.

After this port is selected, a corresponding output port is added to the 2D Target Object Recognition step in real time.
