Positioning and Picking (2D Blob Analysis)

This section describes the workpiece-recognition configuration workflow using 2D Blob Analysis. This method detects bright or dark regions in images (Blobs), filters target Blobs by geometric features such as area and circularity, and uses them to locate workpieces. It is suitable for scenarios where recognition relies on brightness-contrast features and workpieces appear as clear bright or dark regions.

Click Configuration Wizard, select the Positioning and Picking scenario, and then select the 2D Blob Analysis recognition method to enter this workflow.

Workflow

The complete recognition workflow includes four steps:

(Figure: positioning and picking process)
  1. Image Preprocessing: Perform preprocessing operations such as color conversion, enhancement, denoising, and morphological transformation on input images to improve image quality, highlight workpiece features, reduce background interference, and provide reliable data for subsequent workpiece recognition.

  2. Workpiece Recognition: Set regions of interest and flexibly configure Blob-analysis parameters according to workpiece features for accurate recognition.

  3. Workpiece Pose Calculation: Using the 2D camera's extrinsic calibration data and the teaching information of the reference workpiece (the workpiece used for teaching), the system automatically converts recognized 2D workpiece poses to the 3D poses required for robot picking, enabling precise picking.

  4. General Settings: Configure pose-filtering rules and output ports to ensure output results meet downstream picking requirements.

Image Preprocessing

Before recognizing workpieces, you can, depending on input-image quality, enable Convert Image Color Space or Image Preprocessing and adjust the related parameters to make image features clearer, thereby improving recognition accuracy and efficiency.

Convert Image Color Space

Converting image color space transforms input images from one color space to another, for example from BGR to grayscale or from BGR to HSV. Through color-space conversion, image features can be highlighted better for subsequent image processing.
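As a minimal illustration of what a BGR-to-grayscale conversion does (a Python sketch, not the product's implementation), each pixel's three channels are combined with the standard ITU-R BT.601 luminance weights:

```python
def bgr_to_gray(img):
    """Convert a BGR image (nested lists, H x W of (b, g, r) tuples)
    to grayscale using the standard ITU-R BT.601 luminance weights."""
    return [[round(0.114 * b + 0.587 * g + 0.299 * r)
             for b, g, r in row] for row in img]

img = [[(255, 0, 0), (255, 255, 255)]]  # pure blue, pure white (BGR order)
print(bgr_to_gray(img))  # -> [[29, 255]]: blue maps to a dark gray
```

Because blue contributes the least to perceived brightness, a saturated blue pixel becomes a dark gray value, which is exactly the kind of contrast shift that can make Blob features stand out after conversion.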

For detailed parameter descriptions and tuning examples, refer to Convert Image Color Space.

Image Preprocessing

In image preprocessing, you can apply operations such as enhancement, denoising, morphological transformation, grayscale inversion, and edge extraction to input images.

For detailed parameter descriptions and tuning examples, refer to Image Preprocessing.

Preview Preprocessing Results

After completing the above parameter settings, click Run Step or Run Project to preview preprocessing results.

Then click Next to enter the workpiece-recognition workflow.

Workpiece Recognition

After image preprocessing, configure recognition, including setting recognition regions of interest and adjusting 2D Blob-analysis parameters, to achieve accurate workpiece recognition.

Add Recognition Parameter Group

After entering the workpiece-recognition workflow, the system creates one recognition parameter group by default to manage current regions of interest and related parameters.

  • Management Operations: Right-click the parameter-group name, or directly click function buttons on the right side of the parameter group to perform operations such as rename, delete, and create copy.

(Figure: parameter-group management operations)
  • Create New Parameter Group: If a new parameter group is needed, click Add in the upper-right corner to create one. Each parameter group can independently set recognition regions and parameters without affecting others.

(Figure: add parameter group)

Set Recognition Region

When setting recognition regions, customize them according to actual needs. The system supports both rectangular and circular selection modes and allows mixed addition of multiple regions. That is, multiple rectangular and circular recognition regions can coexist on the same image to meet recognition requirements in complex scenarios.
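Conceptually, a pixel belongs to the recognition area if it falls inside any of the configured regions, whether rectangular or circular. The following is a simplified Python sketch of that membership test (the function and region formats are illustrative, not the product's API):

```python
def in_regions(x, y, rects, circles):
    """True if point (x, y) falls inside any rectangular or circular
    recognition region (both kinds may coexist on the same image).
    rects: (x0, y0, x1, y1) corners; circles: (cx, cy, radius)."""
    for x0, y0, x1, y1 in rects:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return True
    for cx, cy, r in circles:
        if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
            return True
    return False

rects = [(0, 0, 100, 50)]       # one rectangular region
circles = [(200, 200, 30)]      # one circular region
print(in_regions(10, 10, rects, circles))    # True: inside the rectangle
print(in_regions(150, 150, rects, circles))  # False: outside both regions
```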

Recognize Workpieces

After setting recognition regions, adjust other parameters based on actual workpiece features and recognition requirements to optimize recognition performance.

Parameter Description

Blob Polarity

Description: Defines which pixel regions, relative to the background, are recognized as target connected regions (Blobs).

Value list:

  • Darker than Background: Select connected pixels darker than background.

  • Brighter than Background: Select connected pixels brighter than background.

Threshold Type

Description: Specifies the threshold-calculation method for image binarization. Pixels with grayscale values greater than the threshold are classified as foreground, and the remaining pixels as background.

Value list:

  • Manual: Manually set a fixed global threshold.

  • Auto: System automatically calculates the optimal threshold.
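To make the Manual/Auto distinction concrete, here is a simplified Python sketch: a fixed manual threshold versus Otsu's method, a classic automatic-threshold algorithm (the product does not document which algorithm its Auto mode uses, so Otsu is an assumption for illustration):

```python
def otsu_threshold(gray):
    """Otsu's method: pick the threshold that maximizes the
    between-class variance of foreground and background."""
    flat = [v for row in gray for v in row]
    n, total = len(flat), float(sum(v for row in gray for v in row))
    hist = [0] * 256
    for v in flat:
        hist[v] += 1
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0  # background pixel count and intensity sum
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = n - w0
        if w0 == 0 or w1 == 0:
            continue
        mu0, mu1 = sum0 / w0, (total - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(gray, threshold=None):
    """Manual threshold when given; automatic (Otsu) when None.
    Pixels above the threshold become foreground (True)."""
    if threshold is None:
        threshold = otsu_threshold(gray)
    return [[v > threshold for v in row] for row in gray]

gray = [[10, 10, 200, 200]]
print(binarize(gray, threshold=100))  # -> [[False, False, True, True]]
print(binarize(gray))                 # Otsu finds a threshold between the two modes
```

For the "Darker than Background" polarity described above, the same binarization would simply be inverted, so darker pixels become foreground.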

Neighborhood Type

Description: Specifies the connectivity rule between pixels, determining which pixels are grouped into one Blob.

Value list:

  • Four-neighborhood: Pixels connected in up, down, left, and right directions are grouped into one Blob.

  • Eight-neighborhood: Pixels connected in up, down, left, right, and diagonal directions are grouped into one Blob.
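The practical effect of the connectivity rule can be seen with two pixels that touch only diagonally: they form one Blob under eight-neighborhood but two Blobs under four-neighborhood. A minimal Python labeling sketch (an iterative flood fill, purely illustrative):

```python
def label_blobs(mask, connectivity=4):
    """Label connected foreground regions in a binary mask
    (list of lists of 0/1); connectivity is 4 or 8.
    Returns (label image, number of blobs)."""
    if connectivity == 4:
        offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-neighborhood adds the four diagonals
        offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                   if (dy, dx) != (0, 0)]
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if not mask[y][x] or labels[y][x]:
                continue
            count += 1
            stack = [(y, x)]
            labels[y][x] = count
            while stack:  # iterative flood fill from the seed pixel
                cy, cx = stack.pop()
                for dy, dx in offsets:
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and mask[ny][nx] and not labels[ny][nx]):
                        labels[ny][nx] = count
                        stack.append((ny, nx))
    return labels, count

mask = [[1, 0],
        [0, 1]]  # two pixels touching only diagonally
print(label_blobs(mask, 4)[1])  # 2 blobs under four-neighborhood
print(label_blobs(mask, 8)[1])  # 1 blob under eight-neighborhood
```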

Contour Retrieval Mode

Description: Sets retrieval mode for extracting Blob contours.

Value list:

  • External Contours: Detect and extract only outermost contours, ignoring all inner holes and nested contours.

  • All Contours: Extract all contours and build a complete hierarchy.

Filter Settings

Description: Used to set filter criteria to select Blobs that meet specific geometric features. Click Open Editor and configure related parameters in the Filter Settings window.

Logic Between Conditions

Description: Sets the unified logic (AND/OR) applied between different types of filtering conditions (such as area, bounding-rectangle aspect ratio, and circularity). Repeated additions of the same condition type are always combined with OR, regardless of this setting.

Value list: AND, OR

Adjustment instruction: Click Add Condition, select filtering conditions from the drop-down list, and set logic between conditions. For definitions and descriptions of conditions, refer to Description of Filtering Conditions. You can set Filter Value Range according to Reference Value Range. Each condition can also be individually enabled/disabled or deleted.
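The combination rule described above, OR within a condition type and the configured AND/OR logic across types, can be sketched in Python as follows (the condition format and feature names are hypothetical, chosen only for illustration):

```python
def blob_passes(features, conditions, logic="AND"):
    """Apply filter conditions to one blob's feature values.
    Ranges of the SAME condition type are combined with OR;
    DIFFERENT condition types are combined per `logic` (AND/OR).
    conditions: list of (type, low, high); features: dict by type."""
    by_type = {}
    for ctype, low, high in conditions:
        by_type.setdefault(ctype, []).append((low, high))
    results = []
    for ctype, ranges in by_type.items():
        value = features[ctype]
        # Same-type ranges: pass if the value falls in ANY of them.
        results.append(any(low <= value <= high for low, high in ranges))
    return all(results) if logic == "AND" else any(results)

blob = {"area": 120.0, "circularity": 0.95}
conds = [("area", 50, 100), ("area", 110, 200),  # same type: OR'ed
         ("circularity", 0.9, 1.0)]
print(blob_passes(blob, conds, "AND"))  # True: area fits 110-200, circularity fits
```

This is why two area ranges can be used to accept, say, both small and large workpieces at once, while still requiring circularity to hold.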

Filter Parameter Group

Description: Used to select a filter-parameter group created in the editor so corresponding filter criteria can be applied during recognition.

Sorting Basis

Description: Specifies the basis used to sort detected Blobs.

Value list: Area, Total Area, Bounding-rectangle Width, Bounding-rectangle Height, Bounding-rectangle Aspect Ratio, Major-axis Angle, Circularity, Bounding-rectangle Center X, Bounding-rectangle Center Y, Inscribed-circle Radius, Circumscribed-circle Radius, Inscribed-rectangle Width, Inscribed-rectangle Height, Centroid X, Centroid Y, Bounding-rectangle Top-left X, Bounding-rectangle Top-left Y, Bounding-rectangle Bottom-right X, Bounding-rectangle Bottom-right Y, Rotated Bounding-rectangle Width, Rotated Bounding-rectangle Height, Z-shape.

Adjustment instruction: When Z-shape is selected, configure Sorting Start Direction, Cross-row/Cross-column Direction, Layer Interval, and Layering Reference.

Sorting Direction

Description: Specifies sorting direction.

Value list: Ascending, Descending

Sorting Start Direction

Description: Specifies start direction for Z-shape sorting.

Value list:

  • Row first, from left to right

  • Row first, from right to left

  • Column first, from top to bottom

  • Column first, from bottom to top

Cross-row/Cross-column Direction

Description: Specifies cross-row or cross-column direction for Z-shape sorting.

Value list:

  • Top to bottom

  • Bottom to top

  • Left to right

  • Right to left

Layer Interval

Description: Blobs are layered according to this interval. When sorting is row-first, this parameter indicates the row interval of Blobs; when sorting is column-first, it indicates the column interval.

Layering Reference

Description: Specifies the start position for layering. For example, when sorting is row-first, the system arranges the first row based on this position, then continues arranging other rows upward/downward according to configured Layer Interval.
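Putting the Z-shape parameters together, one plausible reading (a simplified Python sketch under the assumption of row-first sorting; the function and its arguments are illustrative, not the product's API) is: bin each Blob centroid into a row using the Layering Reference and Layer Interval, order the rows by the cross-row direction, then read each row in the start direction:

```python
def z_sort(centroids, layer_interval, layer_ref_y,
           left_to_right=True, top_to_bottom=True):
    """Z-shape (row-first) sort of blob centroids (x, y).
    Centroids are binned into rows of height `layer_interval`
    starting at `layer_ref_y` (the Layering Reference); rows are
    ordered per the cross-row direction, and each row is read in
    the sorting start direction."""
    def row_index(pt):
        return int((pt[1] - layer_ref_y) // layer_interval)
    key = lambda pt: (row_index(pt) * (1 if top_to_bottom else -1),
                      pt[0] * (1 if left_to_right else -1))
    return sorted(centroids, key=key)

pts = [(30, 5), (10, 6), (20, 52)]  # two blobs in row 0, one in row 1
print(z_sort(pts, layer_interval=50, layer_ref_y=0))
# -> [(10, 6), (30, 5), (20, 52)]: row by row, left to right
```

Note how (30, 5) and (10, 6) land in the same row despite slightly different Y values; that is the role of the layer interval.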

You can also learn more about parameter usage through Parameter Tuning Example.

View Running Results

After completing the above parameter settings, click Run Step or Run Project to view workpiece recognition results.

Then click Next to enter the workpiece-pose calculation workflow.

Workpiece Pose Calculation

This workflow collects reference data through teaching operations and establishes the mapping between vision recognition and robot picking poses, automatically converting real-time recognized 2D workpiece poses to robot 3D picking poses.
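The core idea of such teaching-based mapping can be sketched in 2D: apply to the taught picking pose the same rigid motion that takes the reference workpiece pose to the currently recognized one. This is a heavily simplified in-plane Python sketch; the product's actual conversion also involves the camera extrinsics and produces full 3D poses, and all names here are illustrative:

```python
import math

def pose_to_mat(x, y, rz):
    """2D rigid transform (translation x, y; rotation rz in radians)
    as a 3x3 homogeneous matrix."""
    c, s = math.cos(rz), math.sin(rz)
    return [[c, -s, x], [s, c, y], [0, 0, 1]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_inv(t):
    """Inverse of a 2D rigid transform: transpose R, negate R^T * t."""
    c, s, x, y = t[0][0], t[1][0], t[0][2], t[1][2]
    return [[c, s, -(c * x + s * y)], [-s, c, s * x - c * y], [0, 0, 1]]

def picking_pose(cur_obj, ref_obj, ref_pick):
    """T_pick_new = T_obj_cur * inv(T_obj_ref) * T_pick_ref:
    move the taught picking pose by the rigid motion that takes
    the reference object pose to the current one."""
    t = mat_mul(pose_to_mat(*cur_obj),
                mat_mul(mat_inv(pose_to_mat(*ref_obj)),
                        pose_to_mat(*ref_pick)))
    return (t[0][2], t[1][2], math.atan2(t[1][0], t[0][0]))

# Object shifted +10 in x with no rotation: picking pose shifts the same way.
print(picking_pose((110, 50, 0.0), (100, 50, 0.0), (105, 55, 0.0)))
# -> (115.0, 55.0, 0.0)
```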

Required teaching operations and parameters vary with camera mounting mode (Eye to hand or Eye in hand).

Before starting the teaching operations, make sure there is only one workpiece in the camera's field of view (if there are other workpieces, remove them from the carrier first), and click Run Project so the system recognizes only this reference workpiece.

Teaching Instructions for ETH Scenarios

Operation Procedure

  1. Place the reference workpiece within the camera's field of view for image capture and recognition, and keep the workpiece position unchanged throughout the teaching process.

  2. Click Get to obtain the currently recognized 2D pose of the reference workpiece.

  3. Use the teach pendant to move the robot precisely to the expected picking point of the workpiece. Click Edit and enter the robot flange pose when picking the reference workpiece. This is the flange pose read on the teach pendant.

  4. After completion, keep the workpiece position unchanged and use the teach pendant to move the robot away from the picking point.

Parameter Description

Select Camera Step

Description: Select the 2D camera step for which extrinsic calibration has been completed, so the calibration data is correctly applied to the current step.

Reference Workpiece 2D Pose

Description: The 2D pose of the reference workpiece recognized during image capture.

Reference Picking Pose

Description: The robot flange pose when picking the reference workpiece. This is the flange pose in the robot coordinate system, read from the teach pendant.

Teaching Instructions for EIH Scenarios

Operation Procedure

  1. Use the teach pendant to move the robot to the image-capture point. Click Edit and enter the flange pose of the robot at the image-capture point. This is the flange pose in the robot coordinate system, read from the teach pendant.

  2. Place the reference workpiece within the camera's field of view for image capture and recognition, and keep the workpiece position unchanged throughout the teaching process.

  3. Click Get to obtain the currently recognized 2D pose of the reference workpiece.

  4. Use the teach pendant to move the robot precisely to the expected picking point of the workpiece. Click Edit and enter the robot flange pose when picking the reference workpiece. This is the flange pose read on the teach pendant.

  5. After completion, keep the workpiece position unchanged and use the teach pendant to move the robot away from the picking point.

Parameter Description

Select Camera Step

Description: Select the 2D camera step for which extrinsic calibration has been completed, so the calibration data is correctly applied to the current step.

Reference Workpiece 2D Pose

Description: The 2D pose of the reference workpiece recognized during image capture.

Reference Picking Pose

Description: The robot flange pose when picking the reference workpiece. This is the flange pose in the robot coordinate system, read from the teach pendant.

Flange Pose at Image Capture

Description: The flange pose of the robot at the image-capture point. This is the flange pose in the robot coordinate system, read from the teach pendant.

Robot Service Name in Communication Component

Description: Used to select the robot model. It must match the robot model connected in the communication component.

After teaching is completed, place the other workpieces back on the carrier and click Run Project again so the system can batch-recognize and output the poses of all workpieces.

After completing workpiece-pose calculation, click Next to enter general settings workflow.

General Settings

In this workflow, you can configure auxiliary functions beyond visual recognition, including pose-filtering rules and output ports.

Set Pose Filtering Rules

Based on actual requirements and the pose data in Recognition Results, set upper and lower limits in the X, Y, and Rz directions to filter output workpiece poses and remove results outside the configured ranges.
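The filtering rule amounts to a per-axis range check: a pose is kept only if its X, Y, and Rz values all fall within the configured limits. A minimal Python sketch (the data format is illustrative):

```python
def filter_poses(poses, limits):
    """Keep poses whose X, Y, and Rz values fall inside the
    configured (low, high) range for every axis in `limits`."""
    def in_range(pose):
        return all(limits[axis][0] <= pose[axis] <= limits[axis][1]
                   for axis in limits)
    return [p for p in poses if in_range(p)]

limits = {"x": (0, 500), "y": (0, 300), "rz": (-90, 90)}
poses = [{"x": 120, "y": 80, "rz": 15},
         {"x": 620, "y": 80, "rz": 15}]  # x outside 0-500: removed
print(filter_poses(poses, limits))  # keeps only the first pose
```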

Click Run Step or Run Project to view filtering status.

Configure Output Ports

Select output ports according to actual workpiece requirements. By default, workpiece names and recognized poses are output.

  • Blob Mask: Outputs Blob mask images.

After selecting this port, the corresponding output port is added to the 2D Target Object Recognition step in real time.
