Error-proofing Check (Deformation Classification)

This section describes the workpiece-recognition configuration workflow for deformation classification scenarios. This method is used to detect whether target objects have deformations such as bending, stretching, and indentation, and to identify appearance abnormalities.

Click Configuration Wizard, select the Error-proofing Check scenario, and then select the Deformation Classification mode to enter this workflow.

Workflow

The complete recognition workflow includes four steps:

[Figure: Error-proofing check workflow]
  1. Image Preprocessing: Perform preprocessing operations such as color conversion, enhancement, denoising, and morphological transformation on input images to improve image quality, highlight target-object features, reduce background interference, and provide reliable data for subsequent target-object recognition.

  2. Pose Alignment: Set the recognition area and align recognition targets with templates through alignment operations. You can choose suitable correction methods according to target features and flexibly configure parameters to eliminate positional and angular deviations, improving recognition accuracy and reliability.

  3. Error-proofing Check: Based on actual requirements, set target areas for inspection in the aligned image, edit templates of qualified target objects, and configure recognition parameters and decision rules to realize automatic detection and classification of target-object deformation status.

  4. General Settings: Configure output ports to output judgment results and related status information, meeting automated production-line inspection requirements.
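The four steps above can be sketched as a simple pipeline. This is an illustrative outline only; all function names, signatures, and the placeholder confidence value are assumptions, not the product's API:

```python
# Hypothetical sketch of the four-step workflow as a processing pipeline.
# Function names, data shapes, and the placeholder confidence are
# illustrative, not the product's actual implementation.

def preprocess(image):
    """Step 1: color conversion, enhancement, denoising, morphology."""
    return image  # placeholder: return the preprocessed image

def align_pose(image, template):
    """Step 2: correct positional/angular deviation against the template."""
    return image  # placeholder: return the aligned image

def check_deformation(image, template, threshold=0.50):
    """Step 3: compare with a qualified-product template and judge OK/NG."""
    confidence = 1.0  # placeholder: deformation confidence from matching
    return "OK" if confidence >= threshold else "NG"

def run_workflow(image, template):
    """Steps 1-4: preprocess, align, inspect, then emit the result."""
    aligned = align_pose(preprocess(image), template)
    result = check_deformation(aligned, template)
    return {"Deformation Check": result == "OK", "result": result}
```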

Image Preprocessing

Before recognition, you can enable Convert Image Color Space or Image Preprocessing to enhance target-object features.

Convert Image Color Space

Converts the input image from one color space to another (for example, BGR to grayscale or BGR to HSV) to highlight features for subsequent processing.

For details, see Convert Image Color Space.
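As a concrete illustration of color-space conversion, the sketch below converts a BGR image to grayscale using the standard Rec. 601 luminance weights (the same weights commonly used for BGR-to-gray conversion, e.g. in OpenCV). This is an assumption about the general technique, not the product's implementation:

```python
import numpy as np

# Illustrative BGR-to-grayscale conversion using the Rec. 601 luminance
# weights. A sketch of the general technique, not the product's code.

def bgr_to_gray(image_bgr: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) BGR image to a single-channel grayscale image."""
    b, g, r = image_bgr[..., 0], image_bgr[..., 1], image_bgr[..., 2]
    gray = 0.114 * b + 0.587 * g + 0.299 * r  # weighted channel sum
    return np.rint(gray).astype(np.uint8)     # round back to 8-bit
```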

Image Preprocessing Parameters

Image preprocessing supports enhancement, denoising, morphological transformation, grayscale inversion, and edge extraction.

For details, see Image Preprocessing.

Preview Preprocessing Result

After configuration, click Run Step or Run Project to preview results, then click Next.

Pose Alignment

After preprocessing, configure pose alignment so that the target pose in the current image is corrected to match the template pose.

Add Alignment Settings

Create a parameter group for pose alignment. Multiple groups are supported, and each group is independent of the others.

Click Add to create a new group, choose an alignment mode, and configure the parameters.

[Figure: Add parameter group]

Supported modes:

  • No Alignment: Use the input image directly without pose correction.

  • 2D Alignment: Align through translation/rotation with edge-based matching. See 2D Alignment.

  • 2D Blob Alignment: Align based on selected Blob centroid and principal axis. See 2D Blob Alignment.

After creating a group, right-click the group name (or use the action button) to rename, delete, or duplicate it.

[Figure: Parameter group management operations]

2D Alignment

2D Alignment uses translation and rotation to align the target object in the input image to the template.

Set Recognition Region

Set the effective alignment area. The region should fully cover the target object with an appropriate margin.

  • Whole Image as Recognition Region: Use the entire image.

  • Custom Recognition Region: Manually draw the region to exclude unrelated background.

Recognize Target Object

Configure Target Template

After setting the region, click Edit to choose or edit the template in the 2D Matching Template Editor.

Select representative and stable edge features to ensure unique and accurate matching. For details, see 2D Matching Template Editor.

Click Update after each template edit.

Adjust Recognition Parameters

Click Run Step to view matching result and tune parameters if needed.

For details, see 2D Alignment.

Click Next to continue.

2D Blob Alignment

2D Blob Alignment detects blobs, selects the target blob by its geometric features, and then aligns the blob's centroid and principal axis.
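The geometry behind centroid-and-principal-axis alignment can be sketched with image moments on a binary mask. This is a standard illustration of the underlying idea, assuming nothing about the product's internals:

```python
import numpy as np

# Sketch of the geometry behind 2D Blob alignment: compute a blob's
# centroid and principal-axis angle from a binary mask using image
# moments. Illustrative only; the product's internals are not public.

def blob_centroid_and_axis(mask: np.ndarray):
    """Return ((cx, cy), angle_rad) for the foreground pixels of `mask`."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    # Central second-order moments give the principal-axis orientation.
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    angle = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)
    return (cx, cy), angle
```

Aligning a detected blob to the template then amounts to translating by the centroid difference and rotating by the angle difference.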

Set Recognition Region

Set the effective area with a sufficient margin. Rectangle and circle region modes are supported, and multiple regions can be mixed.

Recognize Target Object

Tune parameters according to target features.

For details and tuning examples, see 2D Blob Alignment.

View Running Result

Click Run Step or Run Project to inspect result, then click Next.

Error-proofing Check

After image alignment, start deformation inspection. Through 2D template matching, the deformation status of target objects in the image is judged. The system compares target objects with qualified-product templates by features and combines the configured deformation threshold to automatically determine whether target objects have appearance deformation abnormalities.

Set Target Region

First, set the valid inspection range. When drawing regions, fully cover the target objects to be inspected and exclude irrelevant background interference. According to actual needs, choose Entire Image as Recognition Region or Custom Recognition Region. If Custom Recognition Region is selected, click the Draw button to manually define the region.

  • Entire Image as Recognition Region: Performs recognition on the whole image. This is typically suitable for scenarios where target objects are widely distributed.

  • Custom Recognition Region: Performs recognition only in selected regions. This is typically suitable when only part of an image needs attention or irrelevant areas (such as background and fixtures) should be excluded, helping improve efficiency and accuracy.

Inspect Workpieces

Set Workpiece Template

After setting the target region, create a template using a qualified sample so that target objects captured in real time can be compared with the template to detect differences. Click Edit in the Select Template section to enter the 2D Matching Template Editor.

Representative and stable edge features should be selected from the image to generate the template, ensuring the system can later automatically search and locate target objects whose features match the template, with unique and accurate matching results. For details, refer to 2D Matching Template Editor.

After each template edit, click Update to apply the latest configuration.

Adjust Recognition Parameters

After selecting a template, adjust other parameters based on target-object features and inspection requirements to optimize detection performance.

Parameter Description

Edge Polarity Sensitive

Description: Controls whether edge polarity must match the template during matching. Polarity indicates grayscale transition direction at edges, such as bright-to-dark or dark-to-bright.

Default value: Enabled.

Adjustment instruction: If data-acquisition conditions are consistent, keep this option enabled to ensure matching accuracy; if conditions vary significantly between acquisitions, disable it to improve matching generalization.
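Edge polarity can be illustrated with a toy comparison of gradient signs across an edge. The function below is a hypothetical sketch of the concept, not the product's matching code:

```python
# Toy illustration of edge polarity: the sign of the grayscale gradient
# across an edge. Polarity-sensitive matching requires the transition
# direction (bright-to-dark vs. dark-to-bright) to agree; with polarity
# disabled, only the presence of an edge matters. Illustrative only.

def edges_match(grad_template: float, grad_target: float,
                polarity_sensitive: bool = True) -> bool:
    """Compare gradient samples taken across corresponding edges."""
    if polarity_sensitive:
        return grad_template * grad_target > 0  # same transition direction
    return grad_template != 0 and grad_target != 0  # edge present on both
```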

Minimum Matching Score

Description: Used to determine whether a matching result is valid. Results with matching scores lower than this value are discarded.

Default value: 50.0.

Valid Matching Threshold

Description: In the target image, points with gradient magnitude greater than or equal to this threshold are considered valid edge points and participate in matching-score statistics.

Default value: 10.

Lower Limit of Valid Matching Ratio

Description: Minimum ratio of validly matched edge points to total template edge points.

Default value: 50%.
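The three score-related parameters above (Minimum Matching Score, Valid Matching Threshold, and Lower Limit of Valid Matching Ratio) could interact roughly as sketched below. The actual scoring formula is not public; this is a plausible reading of the parameter descriptions, with an assumed score scale of 0 to 100:

```python
# Hedged sketch of how the score-related matching parameters could
# interact. The product's real scoring formula is not public; this is
# a plausible reading of the parameter descriptions above.

def evaluate_match(template_points, target_gradient_magnitudes,
                   valid_matching_threshold=10,
                   min_ratio=0.50, min_score=50.0):
    """template_points: template edge feature points.
    target_gradient_magnitudes: gradient magnitude sampled in the
    target image at each feature point's location."""
    valid = [m for m in target_gradient_magnitudes
             if m >= valid_matching_threshold]  # valid edge points
    ratio = len(valid) / len(template_points)
    if ratio < min_ratio:
        return None  # too few validly matched edge points
    score = 100.0 * ratio  # assumed placeholder score in [0, 100]
    return score if score >= min_score else None
```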

Search Radius

Description: During pose correction, this is the radius of the circular search region allowed when searching corresponding matching points in the target image for each template feature point.

Default value: 8.

Adjustment instruction: Increase this value appropriately when matching performance is poor.

Upper Limit of Overlap Ratio

Description: Used to filter duplicate matching results. When the overlap ratio between two matching results exceeds this value, only the one with a higher matching score is retained.

Default value: 50%.
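Duplicate filtering with an overlap cap works like non-maximum suppression: when two results overlap beyond the limit, only the higher-scoring one survives. The sketch below assumes intersection-over-union of axis-aligned boxes as the overlap measure; the product's exact overlap definition may differ:

```python
# Sketch of duplicate-result filtering with an overlap-ratio cap.
# Overlap here is intersection-over-union of axis-aligned boxes; the
# product's exact overlap definition is an assumption.

def iou(a, b):
    """a, b: (x1, y1, x2, y2) boxes; return intersection-over-union."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def filter_overlaps(matches, overlap_limit=0.50):
    """matches: list of (score, box); keep the higher-scoring result
    whenever two results overlap more than overlap_limit."""
    kept = []
    for score, box in sorted(matches, reverse=True):  # best score first
        if all(iou(box, kb) <= overlap_limit for _, kb in kept):
            kept.append((score, box))
    return kept
```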

Fill Ratio

Description: When the object to be matched may partially extend beyond the image boundaries, this parameter specifies the allowed fill size as a ratio of the template size. Filling can improve the edge-matching success rate but usually increases computation.

Adjustment instruction: Set this parameter when part of the object to be matched lies outside the image. If the portion of the template outside the image after matching exceeds this ratio, the match is considered unsuccessful.

Default value: 0%.

Set Judgment Logic

Set judgment criteria used to distinguish qualified products from deformed objects. By configuring judgment parameters, the system can automatically distinguish normal and abnormal objects according to recognition results, enabling automated error-proofing judgment.

Parameter Description

Deformation Threshold

Description: Used to determine whether target-object deformation exceeds the allowable range. If deformation confidence is greater than or equal to this threshold, the result is OK; otherwise, it is NG.

Default value: 0.50.

Adjustment instruction: Raising the threshold makes the check stricter (more target objects are judged NG); lowering it makes the check more tolerant. Configure according to actual on-site conditions.
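The judgment rule described above reduces to a single comparison. A minimal sketch, using the documented default threshold:

```python
# Minimal sketch of the judgment rule: compare the deformation
# confidence against the threshold to produce OK or NG. Higher
# confidence indicates smaller deformation.

def judge_deformation(confidence: float, threshold: float = 0.50) -> str:
    """Return "OK" when confidence meets the threshold, else "NG"."""
    return "OK" if confidence >= threshold else "NG"
```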

View Judgment Results

After completing the above parameter settings, click Run Step or Run Project to view judgment results.

Then click Next to enter the general settings workflow.

General Settings

In this workflow, auxiliary functions outside visual recognition can be configured. Currently, output-port configuration is supported.

Configure Output Ports

Select output ports as needed. By default, the deformation judgment result, OK or NG, is output.

  • Deformation Check: Indicates whether a target object passes deformation inspection. true means pass, and false means fail.

  • Deformation Confidence: Outputs deformation confidence of target objects. Higher confidence indicates smaller deformation.

After selecting relevant ports, corresponding output ports are added to the 2D Target Object Recognition step in real time.
