Positioning and Picking (2D Blob Analysis)
This section describes the workpiece-recognition configuration workflow using 2D Blob Analysis. This method detects bright/dark regions in images (Blobs), filters target Blobs by geometric features such as area and circularity, and achieves workpiece positioning. It is suitable for scenarios where recognition relies on brightness-contrast features and workpieces appear as clear bright or dark regions.
Click Configuration Wizard, select the Positioning and Picking scenario, and then select the 2D Blob Analysis recognition method to enter this workflow.
Workflow
The complete recognition workflow includes four steps:
- Image Preprocessing: Perform preprocessing operations such as color conversion, enhancement, denoising, and morphological transformation on input images to improve image quality, highlight workpiece features, reduce background interference, and provide reliable data for subsequent workpiece recognition.
- Workpiece Recognition: Set regions of interest and flexibly configure Blob-analysis parameters according to workpiece features for accurate recognition.
- Workpiece Pose Calculation: Using 2D camera extrinsic calibration data and the teaching information of the reference workpiece (the workpiece used for teaching), automatically convert recognized 2D workpiece poses into the 3D poses required for robot picking, enabling precise picking.
- General Settings: Configure pose-filtering rules and output ports to ensure the output results meet downstream picking requirements.
Image Preprocessing
Before recognizing workpieces, you can, depending on input-image quality, enable Convert Image Color Space or Image Preprocessing and adjust the related parameters to make image features clearer, thereby improving recognition accuracy and efficiency.
Convert Image Color Space
Converting image color space transforms input images from one color space to another, for example from BGR to grayscale or from BGR to HSV. Through color-space conversion, image features can be highlighted better for subsequent image processing.
For detailed parameter descriptions and tuning examples, refer to Convert Image Color Space.
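For intuition, a BGR-to-grayscale conversion is a weighted sum of the three channels. The sketch below uses the common ITU-R BT.601 luma weights (an illustrative assumption; the software's actual conversion formula is not documented here):

```python
def bgr_to_gray(b, g, r):
    """Convert one BGR pixel to grayscale using the ITU-R BT.601 luma weights."""
    return 0.114 * b + 0.587 * g + 0.299 * r

# A pure-white pixel maps to full brightness; a pure-blue pixel stays dark,
# which is why color-space conversion can make certain features stand out.
print(round(bgr_to_gray(255, 255, 255)))  # 255
print(round(bgr_to_gray(255, 0, 0)))      # 29
```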
Image Preprocessing
In image preprocessing, you can apply operations such as enhancement, denoising, morphological transformation, grayscale inversion, and edge extraction to input images.
For detailed parameter descriptions and tuning examples, refer to Image Preprocessing.
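As an illustration of the morphological transformations mentioned above, the plain-Python sketch below implements binary erosion and dilation on a small image; applying erosion followed by dilation (an "opening") removes isolated noise pixels while preserving solid regions. This is a conceptual sketch, not this software's implementation:

```python
def dilate(img):
    """Binary dilation with a 3x3 structuring element: a pixel becomes 1 if any
    pixel in its 3x3 neighborhood is 1. The neighborhood is clamped at borders."""
    h, w = len(img), len(img[0])
    return [[int(any(img[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))))
             for x in range(w)] for y in range(h)]

def erode(img):
    """Binary erosion: a pixel stays 1 only if its whole 3x3 neighborhood is 1,
    which removes isolated noise pixels."""
    h, w = len(img), len(img[0])
    return [[int(all(img[ny][nx]
                     for ny in range(max(0, y - 1), min(h, y + 2))
                     for nx in range(max(0, x - 1), min(w, x + 2))))
             for x in range(w)] for y in range(h)]

noisy = [
    [1, 0, 0, 0, 0],   # isolated noise pixel at the top-left
    [0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1],   # a solid 3x3 "workpiece" region
    [0, 0, 1, 1, 1],
    [0, 0, 0, 0, 0],
]
opened = dilate(erode(noisy))        # morphological opening
print(opened[0][0], opened[2][2])    # 0 1 : noise removed, region interior kept
```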
Workpiece Recognition
After image preprocessing, configure recognition, including setting recognition regions of interest and adjusting 2D Blob-analysis parameters, to achieve accurate workpiece recognition.
Add Recognition Parameter Group
After entering the workpiece-recognition workflow, the system creates one recognition parameter group by default to manage current regions of interest and related parameters.
- Management Operations: Right-click the parameter-group name, or directly click the function buttons on the right side of the parameter group, to perform operations such as rename, delete, and create copy.
- Create New Parameter Group: If a new parameter group is needed, click Add in the upper-right corner to create one. Each parameter group can independently set recognition regions and parameters without affecting the others.
Set Recognition Region
When setting recognition regions, customize them according to actual needs. The system supports both rectangular and circular selection modes and allows mixed addition of multiple regions. That is, multiple rectangular and circular recognition regions can coexist on the same image to meet recognition requirements in complex scenarios.
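The mixed rectangular and circular regions can be pictured as a simple union test: a pixel is considered for recognition if it falls inside at least one configured region. A minimal sketch (function and parameter names are hypothetical, for illustration only):

```python
def in_any_region(x, y, rects, circles):
    """Return True if pixel (x, y) falls inside at least one recognition region.
    rects: list of (x_min, y_min, x_max, y_max); circles: list of (cx, cy, radius)."""
    for x0, y0, x1, y1 in rects:
        if x0 <= x <= x1 and y0 <= y <= y1:
            return True
    for cx, cy, r in circles:
        if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2:
            return True
    return False

rects = [(0, 0, 100, 50)]        # one rectangular region
circles = [(200, 200, 30)]       # one circular region on the same image
print(in_any_region(50, 25, rects, circles))    # True  (inside the rectangle)
print(in_any_region(210, 210, rects, circles))  # True  (inside the circle)
print(in_any_region(150, 150, rects, circles))  # False (outside both)
```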
Recognize Workpieces
After setting recognition regions, adjust other parameters based on actual workpiece features and recognition requirements to optimize recognition performance.
| Parameter | Description |
|---|---|
| Blob Polarity | Defines which pixel regions, compared with the background, are recognized as target connected regions (Blobs). |
| Threshold Type | Specifies the threshold-calculation method for image binarization. Pixels with grayscale values greater than the threshold are classified as foreground, and pixels with lower values are classified as background. |
| Neighborhood Type | Specifies the connectivity rule between pixels, which determines which pixels are grouped into the same Blob. |
| Contour Retrieval Mode | Sets the retrieval mode for extracting Blob contours. |
| Filter Settings | Sets filter criteria to select Blobs that meet specific geometric features. Click Open Editor and configure the related parameters in the editor. |
| Logic Between Conditions | Sets the unified logic applied between multiple filtering conditions (such as area, bounding-rectangle aspect ratio, and circularity). Value list: AND, OR. Adjustment instruction: Click Add Condition, select filtering conditions from the drop-down list, and set the logic between conditions. For definitions and descriptions of the conditions, refer to Description of Filtering Conditions. |
| Filter Parameter Group | Selects a filter-parameter group created in the editor so that the corresponding filter criteria are applied during recognition. |
| Sorting Basis | Specifies the basis used to sort detected Blobs. Value list: Area, Total Area, Bounding-rectangle Width, Bounding-rectangle Height, Bounding-rectangle Aspect Ratio, Major-axis Angle, Circularity, Bounding-rectangle Center X, Bounding-rectangle Center Y, Inscribed-circle Radius, Circumscribed-circle Radius, Inscribed-rectangle Width, Inscribed-rectangle Height, Centroid X, Centroid Y, Bounding-rectangle Top-left X, Bounding-rectangle Top-left Y, Bounding-rectangle Bottom-right X, Bounding-rectangle Bottom-right Y, Rotated Bounding-rectangle Width, Rotated Bounding-rectangle Height, Z-shape. Adjustment instruction: When Z-shape is selected, also configure Sorting Start Direction, Cross-row/Cross-column Direction, Layer Interval, and Layering Reference. |
| Sorting Direction | Specifies the sorting order. Value list: Ascending, Descending. |
| Sorting Start Direction | Specifies the start direction for Z-shape sorting. |
| Cross-row/Cross-column Direction | Specifies the cross-row or cross-column direction for Z-shape sorting. |
| Layer Interval | Blobs are layered according to this interval. When sorting is row-first, this parameter indicates the row interval between Blobs; when sorting is column-first, it indicates the column interval. |
| Layering Reference | Specifies the start position for layering. For example, when sorting is row-first, the system arranges the first row based on this position, then arranges the remaining rows upward or downward according to the configured Layer Interval. |
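Conceptually, 2D Blob analysis binarizes the image with the configured threshold and then groups foreground pixels into connected regions according to the neighborhood type. The plain-Python sketch below illustrates the effect of 4- versus 8-connectivity; it is an illustrative sketch, not this software's implementation:

```python
def find_blobs(img, threshold, neighborhood=8):
    """Binarize img (a 2D list of grayscale values), then group foreground
    pixels into Blobs using 4- or 8-connectivity. Returns a list of Blobs,
    each Blob being a list of its (y, x) pixel coordinates."""
    h, w = len(img), len(img[0])
    fg = [[img[y][x] > threshold for x in range(w)] for y in range(h)]
    if neighborhood == 4:
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity also links diagonal neighbors
        steps = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0)]
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if fg[y][x] and not seen[y][x]:
                stack, pixels = [(y, x)], []
                seen[y][x] = True
                while stack:                      # flood fill one Blob
                    cy, cx = stack.pop()
                    pixels.append((cy, cx))
                    for dy, dx in steps:
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w and fg[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                blobs.append(pixels)
    return blobs

# Two diagonally touching bright pixels: one Blob under 8-connectivity,
# two separate Blobs under 4-connectivity.
img = [[0, 200, 0],
       [0, 0, 200],
       [0, 0, 0]]
print(len(find_blobs(img, 128, neighborhood=8)))  # 1
print(len(find_blobs(img, 128, neighborhood=4)))  # 2
```

Filter criteria would then be applied on top of this, e.g. keeping only Blobs whose area is in range: `[b for b in find_blobs(img, 128) if area_min <= len(b) <= area_max]`.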
You can also learn more about parameter usage through Parameter Tuning Example.
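The Z-shape sorting parameters can be pictured as follows: Blob centers are grouped into rows (or columns) by how many Layer Interval steps they lie from the Layering Reference, and each row is then sorted along the start direction. The sketch below assumes row-first, left-to-right order; the actual order depends on the configured Sorting Start Direction and Cross-row/Cross-column Direction:

```python
def z_sort(centers, layer_interval, reference_y=0.0):
    """Sort Blob centers in a Z (reading) order: assign each center to a row
    by its distance from reference_y in layer_interval steps, then sort each
    row left to right. centers: list of (x, y) tuples."""
    def key(center):
        x, y = center
        row = round((y - reference_y) / layer_interval)  # layer index
        return (row, x)
    return sorted(centers, key=key)

# Two Blobs near y=10 and one near y=30; with a layer interval of 20 the
# first two form row 0 (sorted by x) and the third forms row 1.
centers = [(30, 11), (10, 9), (20, 31)]
print(z_sort(centers, layer_interval=20, reference_y=10))
# [(10, 9), (30, 11), (20, 31)]
```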
Workpiece Pose Calculation
This workflow collects reference data through teaching operations and establishes the mapping between vision recognition and robot picking poses, automatically converting real-time recognized 2D workpiece poses to robot 3D picking poses.
Required teaching operations and parameters vary with camera mounting mode (Eye to hand or Eye in hand).
| Before starting the teaching operations, make sure there is only one workpiece in the camera field of view (if there are other workpieces, remove them from the carrier first), and click Run Project so that the system recognizes only this reference workpiece. |
Teaching Instructions for ETH Scenarios
Operation Procedure
- Place the reference workpiece within the camera field of view for image capture and recognition, and keep the workpiece position unchanged during the whole teaching process.
- Click Get to obtain the currently recognized 2D pose of the reference workpiece.
- Use the teach pendant to move the robot precisely to the expected picking point of the workpiece. Click Edit and enter the robot flange pose when picking the reference workpiece, that is, the flange pose read on the teach pendant.
- After completion, keep the workpiece position unchanged and use the teach pendant to move the robot away from the picking point.
Parameter Description
| Parameter | Description |
|---|---|
| Select Camera Step | Selects the 2D camera step for which extrinsic calibration has been completed, so that the calibration data is correctly applied to the current step. |
| Reference Workpiece 2D Pose | The 2D pose of the reference workpiece recognized during image capture. |
| Reference Picking Pose | The robot flange pose when picking the reference workpiece, i.e., the flange pose in the robot coordinate system read from the teach pendant. |
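To see why a single taught reference is sufficient, the conversion can be pictured in the working plane: the workpiece's planar motion relative to its taught 2D pose is applied to the taught picking pose. The sketch below is a simplified planar model with hypothetical function names; the software's actual calculation also involves the camera extrinsic parameters and full 3D poses:

```python
import math

def compose(a, b):
    """Compose two planar rigid poses (x, y, theta): returns a applied after b."""
    ax, ay, at = a
    bx, by, bt = b
    c, s = math.cos(at), math.sin(at)
    return (ax + c * bx - s * by, ay + s * bx + c * by, at + bt)

def inverse(p):
    """Invert a planar rigid pose (x, y, theta)."""
    x, y, t = p
    c, s = math.cos(t), math.sin(t)
    return (-c * x - s * y, s * x - c * y, -t)

def target_picking_pose(ref_2d, ref_pick, cur_2d):
    """Apply the workpiece's planar motion relative to its taught reference
    2D pose to the taught reference picking pose."""
    delta = compose(cur_2d, inverse(ref_2d))  # how the workpiece moved
    return compose(delta, ref_pick)

ref_2d   = (100.0, 50.0, 0.0)            # taught workpiece 2D pose
ref_pick = (100.0, 50.0, 0.0)            # taught picking pose (same plane)
cur_2d   = (130.0, 80.0, math.pi / 2)    # workpiece moved and rotated 90 deg
result = target_picking_pose(ref_2d, ref_pick, cur_2d)
print(tuple(round(v, 3) for v in result))  # the picking pose follows the workpiece
```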
Teaching Instructions for EIH Scenarios
Operation Procedure
- Use the teach pendant to move the robot to the image-capture point. Click Edit and enter the robot flange pose at the image-capture point, that is, the flange pose in the robot coordinate system read from the teach pendant.
- Place the reference workpiece within the camera field of view for image capture and recognition, and keep the workpiece position unchanged during the whole teaching process.
- Click Get to obtain the currently recognized 2D pose of the reference workpiece.
- Use the teach pendant to move the robot precisely to the expected picking point of the workpiece. Click Edit and enter the robot flange pose when picking the reference workpiece, that is, the flange pose read on the teach pendant.
- After completion, keep the workpiece position unchanged and use the teach pendant to move the robot away from the picking point.
Parameter Description
| Parameter | Description |
|---|---|
| Select Camera Step | Selects the 2D camera step for which extrinsic calibration has been completed, so that the calibration data is correctly applied to the current step. |
| Reference Workpiece 2D Pose | The 2D pose of the reference workpiece recognized during image capture. |
| Reference Picking Pose | The robot flange pose when picking the reference workpiece, i.e., the flange pose in the robot coordinate system read from the teach pendant. |
| Flange Pose at Image Capture | The robot flange pose at the image-capture point, i.e., the flange pose in the robot coordinate system read from the teach pendant. |
| Robot Service Name in Communication Component | Selects the robot model. It must be consistent with the robot model connected in the communication component. |
| After teaching is completed, place the other workpieces back on the carrier and click Run Project again so that the system can recognize all workpieces in a batch and output their poses. |
After completing workpiece-pose calculation, click Next to enter the general-settings workflow.
General Settings
In this workflow, you can configure auxiliary functions outside visual recognition, including pose-filtering rules and output ports.
Set Pose Filtering Rules
Based on actual requirements and the pose data in Recognition Results, set upper and lower limits in the X, Y, and Rz directions to filter output workpiece poses and remove results outside the configured ranges.
Click Run Step or Run Project to view filtering status.
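The filtering rule amounts to a simple per-axis range check, as sketched below (illustrative only; the pose representation as (x, y, rz) tuples and the function name are assumptions):

```python
def filter_poses(poses, x_range, y_range, rz_range):
    """Keep only poses whose X, Y, and Rz values each fall inside the
    configured [lower, upper] limits. poses: list of (x, y, rz) tuples."""
    def in_limits(pose):
        return all(lo <= v <= hi
                   for v, (lo, hi) in zip(pose, (x_range, y_range, rz_range)))
    return [p for p in poses if in_limits(p)]

# Second pose fails the X limit, third fails the Rz limit.
poses = [(100, 200, 45), (999, 200, 45), (100, 200, 170)]
print(filter_poses(poses, (0, 500), (0, 500), (-90, 90)))
# [(100, 200, 45)]
```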
Configure Output Ports
Select output ports according to actual workpiece requirements. By default, workpiece names and recognized poses are output.
- Blob Mask: Outputs Blob mask images.
  After this port is selected, the corresponding output port is added to the 2D Target Object Recognition step in real time.