Error-Proofing Inspection (Misalignment/Tilt Classification)


This section introduces the target object recognition configuration workflow for misalignment/tilt classification scenarios. This method is used to detect whether a target object’s placement position or orientation is abnormal, such as offset, tilt, or incorrect placement.

Click Configuration Wizard, select the Error-Proofing Inspection scenario, and choose Misalignment/Tilt Classification to enter this workflow.

Workflow

The overall recognition workflow includes four steps:

(Figure: error-proofing check workflow)
  1. Image Preprocessing: Performs preprocessing such as color conversion, enhancement, denoising, and morphological transformations on input images to improve image quality, highlight target-object features, and reduce background interference, providing reliable data for subsequent recognition.

  2. Pose Alignment: Sets the recognition region and aligns recognition targets with templates through alignment operations. Appropriate correction methods can be selected based on target features, with flexible parameter settings to eliminate position and angle deviations and improve recognition accuracy and reliability.

  3. Error-Proofing Inspection: Based on actual requirements, sets target regions for inspection in aligned images, edits good-product templates, and configures recognition parameters and judgment rules to automatically detect and classify target-object misalignment/tilt status.

  4. General Settings: Configures output ports to output judgment results and related status information, meeting automated production-line inspection requirements.
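The four steps above can be sketched as a simple pipeline. All function names below are illustrative placeholders, not actual Mech-Mind APIs:

```python
def preprocess(image):
    """Step 1: color conversion, denoising, morphology, etc. (placeholder)."""
    return image

def align_pose(image, template):
    """Step 2: correct position/angle deviations against the template (placeholder)."""
    return image

def inspect(image, template, rules):
    """Step 3: compare against the good-product template; every rule must pass."""
    return all(rule(image, template) for rule in rules)

def run_workflow(image, template, rules):
    """Step 4: output the overall judgment result on the output port."""
    aligned = align_pose(preprocess(image), template)
    return "OK" if inspect(aligned, template, rules) else "NG"

# With no judgment rules configured, every image passes:
print(run_workflow(image=[[0]], template=[[0]], rules=[]))  # prints "OK"
```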

Image Preprocessing

Before recognition, you can enable Convert Image Color Space or Image Preprocessing to improve target features.

Convert Image Color Space

Converts the input image from one color space to another (for example, BGR to grayscale or BGR to HSV) to highlight target features for subsequent processing.

For details, see Convert Image Color Space.
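As an illustration of what a color-space conversion does, here is a hand-rolled BGR-to-grayscale conversion using the common ITU-R BT.601 weights. This is a sketch only; a real project would use the software's built-in conversion step:

```python
import numpy as np

def bgr_to_gray(bgr: np.ndarray) -> np.ndarray:
    """bgr: H x W x 3 array in B, G, R channel order (BT.601 weights)."""
    b, g, r = bgr[..., 0], bgr[..., 1], bgr[..., 2]
    return (0.114 * b + 0.587 * g + 0.299 * r).astype(bgr.dtype)

# A pure-blue pixel maps to a dark gray value:
pixel = np.array([[[255, 0, 0]]], dtype=np.uint8)  # B=255, G=0, R=0
print(bgr_to_gray(pixel)[0, 0])  # 29 (0.114 * 255 ≈ 29.07, truncated)
```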

Image Preprocessing Parameters

Supports enhancement, denoising, morphology, grayscale inversion, and edge extraction.

For details, see Image Preprocessing.

Preview Preprocessing Result

After configuration, click Run Step or Run Project to preview results, then click Next.

Pose Alignment

After preprocessing, configure pose alignment so that the target pose in the current image is corrected to match the template pose.

Add Alignment Settings

Create a parameter group for pose alignment. Multiple groups are supported, and they are independent of each other.

Click Add to create a new group, choose alignment mode, and configure parameters.

(Figure: add parameter group)

Supported modes:

  • No Alignment: Uses the input image directly without pose correction.

  • 2D Alignment: Aligns through translation/rotation with edge-based matching. See 2D Alignment.

  • 2D Blob Alignment: Aligns based on the selected Blob's centroid and principal axis. See 2D Blob Alignment.

After creating a group, right-click the group name (or use the action button) to rename, delete, or duplicate it.

(Figure: parameter group management operations)

2D Alignment

2D Alignment uses translation and rotation to align the target object in the input image to the template.
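As a rough illustration of what this correction does (a hand-rolled sketch, not the software's implementation), a rigid 2D deviation of angle θ and translation (tx, ty) can be undone like this:

```python
import math

def align_points(points, angle_deg, tx, ty):
    """Undo a rotate-then-translate deviation on (x, y) points."""
    c = math.cos(math.radians(-angle_deg))
    s = math.sin(math.radians(-angle_deg))
    aligned = []
    for x, y in points:
        x, y = x - tx, y - ty                           # undo translation
        aligned.append((c * x - s * y, s * x + c * y))  # undo rotation
    return aligned

# Template point (1, 0), rotated 90° and then shifted by (5, 0), lands at
# (5, 1) in the image; alignment maps it back to the template pose:
x, y = align_points([(5, 1)], 90, 5, 0)[0]
print(round(x, 6), round(y, 6))  # 1.0 0.0
```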

Set Recognition Region

Set the effective alignment area. The region should fully cover the target object with a proper margin.

  • Whole Image as Recognition Region: Uses the entire image.

  • Custom Recognition Region: Manually draw a region and ignore unrelated background.

Recognize Target Object

Configure Target Template

After setting the region, click Edit to choose or edit a template in the 2D Matching Template Editor.

Select representative and stable edge features to ensure unique and accurate matching. For details, see 2D Matching Template Editor.

Click Update after each template edit.

Adjust Recognition Parameters

Click Run Step to view the matching result and tune parameters if needed.

For details, see 2D Alignment.

Click Next to continue.

2D Blob Alignment

2D Blob Alignment detects blobs, selects the target Blob by its geometric features, and then aligns its centroid and principal axis.
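A minimal sketch of what centroid/principal-axis alignment relies on, using NumPy (illustrative only, not the product's implementation): the centroid is the mean of foreground pixel coordinates, and the principal axis is the leading eigenvector of their covariance.

```python
import numpy as np

def blob_centroid_axis(mask: np.ndarray):
    """mask: binary image; returns (centroid, principal axis) of the blob."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]  # direction of largest spread
    return centroid, axis

# A horizontal 1 x 5 blob: centroid at its middle, axis along x.
mask = np.zeros((3, 7), dtype=np.uint8)
mask[1, 1:6] = 1
c, a = blob_centroid_axis(mask)
print(c)                       # [3. 1.]
print(abs(round(a[0], 6)))     # 1.0 (axis points along the x direction)
```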

Set Recognition Region

Set the effective area with sufficient margin. Rectangular and circular region modes are supported, and multiple regions can be combined.

Recognize Target Object

Tune the parameters according to the target object's features.

For details and tuning examples, see 2D Blob Alignment.

View Running Result

Click Run Step or Run Project to inspect the result, then click Next.

Error-Proofing Inspection

After image alignment, start misalignment and tilt inspection. Through 2D template matching, pose deviations of target objects in images are detected. By comparing target-object features with templates and combining configured deviation thresholds, the system automatically determines whether abnormalities such as offset, tilt, or incorrect placement exist.

Set Target Region

First, set the valid inspection range. When drawing regions, fully cover the target objects to be inspected and exclude irrelevant background. Choose Entire Image as Recognition Region or Custom Recognition Region according to actual needs. If Custom Recognition Region is selected, click the Draw button to manually define the region.

  • Entire Image as Recognition Region: Performs recognition on the whole image. This is typically suitable for scenarios where target objects are widely distributed.

  • Custom Recognition Region: Performs recognition only in selected regions. This is typically suitable when only part of an image needs attention or irrelevant areas (such as background and fixtures) should be excluded, helping improve efficiency and accuracy.

Inspect Workpieces

Set Workpiece Template

After setting the target region, create a template using a qualified sample so that target objects captured in real time can be compared with the template to detect differences. Click Edit in the Select Template section to enter the 2D Matching Template Editor.

Representative and stable edge features should be selected from the image to generate the template, ensuring the system can later automatically search and locate target objects whose features match the template, with unique and accurate matching results. For details, refer to 2D Matching Template Editor.

After each template edit, click Update to apply the latest configuration.

Adjust Recognition Parameters

After selecting a template, adjust other parameters based on target-object features and inspection requirements to optimize detection performance.

Parameter Description

Edge Polarity Sensitive

Description: Controls whether edge polarity must match the template during matching. Polarity indicates grayscale transition direction at edges, such as bright-to-dark or dark-to-bright.

Default value: Enabled.

Adjustment instruction: If data-acquisition conditions are consistent, enable this option to ensure matching accuracy; if differences are large, disable it to improve matching generalization.

Minimum Matching Score

Description: Used to determine whether a matching result is valid. Results with matching scores lower than this value are discarded.

Default value: 50.0.

Valid Matching Threshold

Description: In the target image, points with gradient magnitude greater than or equal to this threshold are considered valid edge points and participate in matching-score statistics.

Default value: 10.

Lower Limit of Valid Matching Ratio

Description: Minimum ratio of validly matched edge points to total template edge points.

Default value: 50%.
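The two parameters above interact roughly like this (an illustrative sketch, not the actual implementation): gradient magnitudes at template edge points are tested against the Valid Matching Threshold, and the match is kept only if the valid fraction reaches the Lower Limit of Valid Matching Ratio.

```python
import numpy as np

def valid_match_ratio_ok(grad_mags, threshold=10, min_ratio=0.5):
    """grad_mags: gradient magnitude at each template edge point."""
    valid = np.asarray(grad_mags) >= threshold  # valid edge points
    return bool(valid.mean() >= min_ratio)      # enough of them must match

print(valid_match_ratio_ok([12, 15, 3, 20]))  # True  (3/4 = 75% valid)
print(valid_match_ratio_ok([12, 2, 3, 4]))    # False (1/4 = 25% valid)
```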

Search Radius

Description: During pose correction, the radius of the circular region within which the system searches the target image for the point corresponding to each template feature point.

Default value: 8.

Increase this value appropriately when matching performance is poor.

Upper Limit of Overlap Ratio

Description: Used to filter duplicate matching results. When the overlap ratio between two matching results exceeds this value, only the one with a higher matching score is retained.

Default value: 50%.
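This duplicate filtering behaves like a greedy non-maximum-suppression pass. A toy sketch, with a hypothetical `overlap` helper on 1-D intervals (shared length divided by the shorter interval's length):

```python
def overlap(r1, r2):
    """Overlap ratio of two hypothetical 1-D intervals (a, b), in [0, 1]."""
    inter = max(0, min(r1[1], r2[1]) - max(r1[0], r2[0]))
    return inter / min(r1[1] - r1[0], r2[1] - r2[0])

def filter_duplicates(results, max_overlap=0.5):
    """results: list of (score, region); when two regions overlap more than
    max_overlap, only the higher-scoring one is kept."""
    kept = []
    for score, region in sorted(results, key=lambda r: -r[0]):
        if all(overlap(region, k_region) <= max_overlap for _, k_region in kept):
            kept.append((score, region))
    return kept

matches = [(90, (0, 10)), (80, (2, 12)), (70, (20, 30))]
print(filter_duplicates(matches))
# [(90, (0, 10)), (70, (20, 30))] — the 80-score match overlaps 80% with
# the 90-score match and is dropped
```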

Fill Ratio

Description: When the object to be matched may partially extend beyond the image boundaries, this parameter specifies the allowed fill size as a ratio of the template size. Filling can improve the edge-matching success rate but usually increases computation.

Set this parameter when part of the object to be matched lies outside the image. If the proportion of the template outside the image after matching exceeds this value, the match is considered unsuccessful.

Default value: 0%.

Set Judgment Logic

Parameter Description

X-Direction Offset Range

Description: Sets the allowable offset of the target-object center point relative to the template center point in the X direction. Only when the offset is within this range is the result judged as OK; otherwise it is judged as NG due to misalignment.

Default value: -100.00 mm to 100.00 mm

Y-Direction Offset Range

Description: Sets the allowable offset of the target-object center point relative to the template center point in the Y direction. Only when the offset is within this range is the result judged as OK; otherwise it is judged as NG due to misalignment.

Default value: -100.00 mm to 100.00 mm

Angle Offset Range

Description: Sets the allowable rotational-angle offset of the target object relative to the template. Only when the offset is within this range is the result judged as OK; otherwise it is judged as NG due to tilt.

Default value: -180.00° to 180.00°

Among all the above judgment conditions, if any one parameter does not meet requirements (that is, exceeds the configured range), the overall judgment result is NG.
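The judgment logic described above amounts to a per-axis range check; a minimal sketch using the default ranges:

```python
def judge(dx_mm, dy_mm, angle_deg,
          x_range=(-100.0, 100.0),
          y_range=(-100.0, 100.0),
          angle_range=(-180.0, 180.0)):
    """Return "OK" only if every deviation is within its configured range."""
    checks = [
        x_range[0] <= dx_mm <= x_range[1],          # X-direction offset
        y_range[0] <= dy_mm <= y_range[1],          # Y-direction offset
        angle_range[0] <= angle_deg <= angle_range[1],  # angle offset
    ]
    return "OK" if all(checks) else "NG"

print(judge(5.0, -3.0, 2.0))                      # OK
print(judge(150.0, 0.0, 0.0))                     # NG (X offset out of range)
print(judge(0.0, 0.0, 10.0, angle_range=(-5, 5))) # NG (tilt out of range)
```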

View Judgment Results

After completing the above parameter settings, click Run Step or Run Project to view judgment results.

Then click Next to enter the general settings workflow.

General Settings

In this workflow, auxiliary functions outside visual recognition can be configured. Output port configuration is currently supported.

Configure Output Ports

Here, you can select output ports according to actual requirements. Deviation judgment results (OK or NG) are output by default.

  • Deviation Check: Indicates whether the target object passes deviation inspection. 1 means passed, 0 means failed.

  • Angle Deviation: Outputs the rotational-angle deviation of the target object relative to the template.

  • Center Point X Offset: Outputs the X-direction offset of the target-object center point relative to the template center point.

  • Center Point Y Offset: Outputs the Y-direction offset of the target-object center point relative to the template center point.

After selecting relevant ports, the 2D Target Object Recognition step adds corresponding output ports in real time.
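A hypothetical mapping from the judgment results to these ports (port names follow the list above; the dict layout is purely illustrative):

```python
def build_outputs(dx_mm, dy_mm, angle_deg, passed):
    """Package deviation results as the documented output ports."""
    return {
        "Deviation Check": 1 if passed else 0,  # 1 = passed, 0 = failed
        "Angle Deviation": angle_deg,
        "Center Point X Offset": dx_mm,
        "Center Point Y Offset": dy_mm,
    }

print(build_outputs(1.5, -0.8, 2.0, passed=True))
# {'Deviation Check': 1, 'Angle Deviation': 2.0,
#  'Center Point X Offset': 1.5, 'Center Point Y Offset': -0.8}
```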
