Error-proofing Check (2D Template Matching and Counting)
This section describes the workpiece-recognition configuration workflow for the 2D template matching and counting scenario. This method uses 2D template matching to recognize target objects and count them.
Click Configuration Wizard, select the Error-proofing Check scenario, and then select the 2D Template Matching and Counting mode to enter this workflow.
Workflow
The complete recognition workflow includes four steps:
- Image Preprocessing: Perform preprocessing operations such as color conversion, enhancement, denoising, and morphological transformation on input images to improve image quality, highlight target-object features, reduce background interference, and provide reliable data for subsequent object recognition and counting.
- Pose Alignment: Set the recognition area and align recognition targets with templates through alignment operations. You can choose suitable correction methods according to target features and flexibly configure parameters to eliminate positional and angular deviations, improving recognition accuracy and reliability.
- Error-proofing Check: Based on actual requirements, set target areas for inspection in the aligned image, edit target-object templates, and configure recognition parameters and decision rules to realize automatic counting and judgment of target objects.
- General Settings: Configure output ports to output counting judgment results and related status information to meet automated production-line inspection requirements.
Image Preprocessing
Before recognition, you can enable Convert Image Color Space or Image Preprocessing to improve target features.
Convert Image Color Space
Converts the input image from one color space to another (for example, BGR to Gray or BGR to HSV) to highlight features for subsequent processing.
For details, see Convert Image Color Space.
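As an illustration of what BGR-to-Gray conversion computes, here is a minimal numpy sketch using the standard ITU-R BT.601 luminance weights; the function name is ours, and the product's actual implementation is not exposed:

```python
import numpy as np

def bgr_to_gray(img: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 BGR image to single-channel grayscale
    using the standard BT.601 luminance weights."""
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r
    return gray.astype(np.uint8)

# A uniform mid-gray pixel stays mid-gray after conversion.
img = np.full((2, 2, 3), 128, dtype=np.uint8)
print(bgr_to_gray(img)[0, 0])  # 128
```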
Image Preprocessing Parameters
Supports image enhancement, denoising, morphological operations, grayscale inversion, and edge extraction.
For details, see Image Preprocessing.
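To make one of these operations concrete, below is a minimal numpy sketch of morphological opening (erosion followed by dilation) on a binary mask, which removes noise specks smaller than the structuring element. The helper names and the 3 x 3 window are our simplifications, not the product's API:

```python
import numpy as np

def _window_op(mask, k, op, init):
    # Apply min or max over a k x k sliding window (edge-replicated padding).
    pad = k // 2
    p = np.pad(mask, pad, mode="edge")
    out = np.full_like(mask, init)
    for dy in range(k):
        for dx in range(k):
            out = op(out, p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]])
    return out

def opening(mask, k=3):
    """Morphological opening: erosion (window minimum) then dilation
    (window maximum); small isolated specks are removed."""
    eroded = _window_op(mask, k, np.minimum, 1)
    return _window_op(eroded, k, np.maximum, 0)

# A single isolated noise pixel is removed entirely by opening.
noisy = np.zeros((7, 7), dtype=np.uint8)
noisy[3, 3] = 1
print(opening(noisy).sum())  # 0
```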
Pose Alignment
After preprocessing, configure pose alignment so that the target pose in the current image is corrected to match the template pose.
Add Alignment Settings
Create a parameter group for pose alignment. Multiple groups are supported, and each group is independent of the others.
Click Add to create a new group, choose an alignment mode, and configure its parameters.
Supported modes:
- No Alignment: Use the input image directly without pose correction.
- 2D Alignment: Align through translation and rotation with edge-based matching. See 2D Alignment.
- 2D Blob Alignment: Align based on the centroid and principal axis of the selected Blob. See 2D Blob Alignment.
After creating a group, right-click the group name (or use the action button) to rename, delete, or duplicate it.
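Conceptually, each alignment mode produces a rigid transform (rotation plus translation) that maps the detected pose back onto the template pose. A minimal numpy sketch under an assumed (x, y, angle-in-degrees) pose convention; the names and convention are illustrative:

```python
import numpy as np

def correction_transform(template_pose, detected_pose):
    """Rigid transform (rotation R then translation t) that maps points
    near the detected pose back onto the template pose.
    Poses are (x, y, angle_deg); this convention is an assumption."""
    tx, ty, ta = template_pose
    dx, dy, da = detected_pose
    theta = np.deg2rad(ta - da)          # rotation correcting the angular deviation
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    # Rotate about the origin, then shift so the detected center
    # lands on the template center.
    t = np.array([tx, ty]) - R @ np.array([dx, dy])
    return R, t

def apply_transform(R, t, pts):
    return (R @ np.asarray(pts, float).T).T + t

# Object detected 5 px right and 10 px below the template, no rotation:
R, t = correction_transform((100, 100, 0), (105, 110, 0))
corrected = apply_transform(R, t, [[105, 110]])  # detected center -> (100, 100)
```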
2D Alignment
2D Alignment uses translation and rotation to align the target object in the input image to the template.
Set Recognition Region
Set the effective alignment area. The region should fully cover the target object with an appropriate margin.
- Whole Image as Recognition Region: Use the entire image.
- Custom Recognition Region: Manually draw the region to exclude unrelated background.
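A custom recognition region can be thought of as masking out everything outside the drawn area before matching runs. A minimal numpy sketch; the helper name and rectangle-only region are our simplifications (the product also supports other region shapes):

```python
import numpy as np

def restrict_to_region(img, x, y, w, h):
    """Zero out everything outside a rectangular recognition region so
    that later matching only responds inside it (illustrative helper)."""
    out = np.zeros_like(img)
    out[y:y + h, x:x + w] = img[y:y + h, x:x + w]
    return out

img = np.ones((10, 10), dtype=np.uint8)
masked = restrict_to_region(img, 2, 2, 4, 4)
print(int(masked.sum()))  # 16  (only the 4 x 4 region survives)
```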
Recognize Target Object
Configure Target Template
After setting the region, click Edit to select or edit a template in the 2D Matching Template Editor.
Select representative and stable edge features to ensure unique and accurate matching. For details, see 2D Matching Template Editor.
Note: Click Update after each template edit.
Adjust Recognition Parameters
Click Run Step to view matching result and tune parameters if needed.
For details, see 2D Alignment.
Click Next to continue.
2D Blob Alignment
2D Blob Alignment detects blobs, selects the target Blob by its geometric features, and then aligns based on its centroid and principal axis.
Set Recognition Region
Set the effective area with sufficient margin. Rectangular and circular region modes are supported, and multiple regions can be combined.
Recognize Target Object
Tune parameters according to target features.
For details and tuning examples, see 2D Blob Alignment.
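The centroid and principal-axis angle that 2D Blob Alignment aligns on can be computed from image moments. A minimal numpy sketch; the function name is illustrative, and the angle convention (radians, measured from the x-axis) is an assumption:

```python
import numpy as np

def blob_pose(mask):
    """Centroid and principal-axis angle of a binary blob, from the
    first- and second-order central image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # Second-order central moments (normalized by pixel count).
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)  # radians
    return (cx, cy), angle

# A horizontal 2 x 6 bar: centroid at its middle, principal axis at 0 rad.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[3:5, 1:7] = 1
(cx, cy), angle = blob_pose(mask)
print(round(cx, 1), round(cy, 1), round(angle, 2))  # 3.5 3.5 0.0
```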
Error-proofing Check
After image alignment, start 2D template matching and counting. Through 2D template matching, target objects in the image are recognized and counted. The system compares target objects with the template by features, automatically counts matched targets, and determines whether the result is qualified based on the configured quantity range.
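The counting-and-judgment logic described above can be sketched in plain Python, assuming the matcher yields a list of per-match scores. The function name, tuple-based quantity range, and OK/NG strings are illustrative:

```python
def quantity_check(match_scores, min_score=50.0, expected_range=(2, 4)):
    """Count matches whose score clears the Minimum Matching Score,
    then judge OK/NG against the configured quantity range."""
    valid = [s for s in match_scores if s >= min_score]
    n = len(valid)
    lo, hi = expected_range
    return n, "OK" if lo <= n <= hi else "NG"

# Three of four candidates clear the score threshold; 3 is within [2, 4].
print(quantity_check([92.1, 88.4, 75.0, 43.2]))  # (3, 'OK')
# Only one valid match, below the expected minimum of 2.
print(quantity_check([92.1, 43.2]))              # (1, 'NG')
```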
Set Target Region
First, set the valid inspection range. When drawing regions, fully cover the target objects to be inspected and exclude irrelevant background interference. According to actual needs, choose Entire Image as Recognition Region or Custom Recognition Region. If a custom region is selected, click Draw to define it manually.
- Entire Image as Recognition Region: Performs recognition on the whole image. This is typically suitable for scenarios where target objects are widely distributed.
- Custom Recognition Region: Performs recognition only in the selected regions. This is typically suitable when only part of an image needs attention or irrelevant areas (such as background and fixtures) should be excluded, helping improve efficiency and accuracy.
Inspect Workpieces
Set Workpiece Template
After setting the target region, create a matching template. This template is used to automatically search and locate all target objects to be matched in the image for quantity counting. Click Edit in the Select Template section to enter the 2D Matching Template Editor.
Representative and stable edge features should be selected from the image to generate the template so that matching results are unique and accurate. For details, refer to 2D Matching Template Editor.
Note: After each template edit, click Update to apply the latest configuration.
Adjust Recognition Parameters
After selecting a template, adjust other parameters based on target-object features and inspection requirements to optimize detection performance.
| Parameter | Description | Default Value |
|---|---|---|
| Edge Polarity Sensitive | Controls whether edge polarity must match the template during matching. Polarity indicates the grayscale transition direction at edges, such as bright-to-dark or dark-to-bright. If data-acquisition conditions are consistent, enable this option to ensure matching accuracy; if differences are large, disable it to improve matching generalization. | Enabled |
| Minimum Matching Score | Used to determine whether a matching result is valid. Results with matching scores lower than this value are discarded. | 50.0 |
| Valid Matching Threshold | In the target image, points with gradient magnitude greater than or equal to this threshold are considered valid edge points and participate in matching-score statistics. | 10 |
| Lower Limit of Valid Matching Ratio | Minimum ratio of validly matched edge points to total template edge points. | 50% |
| Search Radius | During pose correction, the radius of the circular search region allowed when searching for corresponding matching points in the target image for each template feature point. | 8 |
| Upper Limit of Overlap Ratio | Used to filter duplicate matching results. When the overlap ratio between two matching results exceeds this value, only the one with the higher matching score is retained. | 50% |
| Fill Ratio | When the object to be matched may partially exceed image boundaries, specifies the allowed fill size as a ratio of template size. Filling can improve the edge-matching success rate but usually increases computation. | 0% |
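The Upper Limit of Overlap Ratio behaves like standard non-maximum suppression: overlapping detections are resolved in favor of the higher score. A sketch using intersection-over-union as the overlap measure (the product's exact overlap definition is not documented here, so IoU is an assumption):

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x, y, w, h)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def filter_overlaps(results, max_overlap=0.5):
    """Keep only the higher-scoring result when two matches overlap more
    than the limit (results are (box, score) pairs; names illustrative)."""
    kept = []
    for box, score in sorted(results, key=lambda r: -r[1]):
        if all(iou(box, k) <= max_overlap for k, _ in kept):
            kept.append((box, score))
    return kept

matches = [((0, 0, 10, 10), 90.0),   # best match
           ((1, 1, 10, 10), 80.0),   # heavily overlaps the best match
           ((30, 30, 10, 10), 70.0)] # far away, no overlap
print(len(filter_overlaps(matches)))  # 2
```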
General Settings
In this workflow, auxiliary functions outside visual recognition can be configured. Currently, output-port configuration is supported.
Configure Output Ports
Select output ports as needed here. By default, the quantity judgment result (OK or NG) is output to determine whether the number of matching results is within the expected range.
- Quantity Check: Indicates whether the matching quantity meets expectations. `1` means it meets expectations, and `0` means it does not.
- Matching Score: Outputs a list of matching scores for evaluating matching-result quality.
- Number of Matching Results: Outputs the number of matching results.
After selecting relevant ports, corresponding output ports are added to the 2D Target Object Recognition step in real time.
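Putting the three ports together, the step's output could be modeled as a small record. The keys, helper name, and default thresholds below are illustrative, not the product's actual port names or API:

```python
def build_outputs(match_scores, expected_range=(2, 4), min_score=50.0):
    """Assemble the three optional output ports described above:
    quantity judgment (1/0), the score list, and the result count."""
    valid = [s for s in match_scores if s >= min_score]
    lo, hi = expected_range
    return {
        "quantity_check": 1 if lo <= len(valid) <= hi else 0,
        "matching_scores": valid,
        "num_results": len(valid),
    }

out = build_outputs([95.0, 88.0, 72.0])
print(out["quantity_check"], out["num_results"])  # 1 3
```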