Error-Proofing Check (Front/Back or Presence/Absence)

You are viewing the documentation for a pre-release version (2.2.0). To access documentation for other versions, click the Switch Version button in the upper-right corner of the page.

■ If you're unsure about the version of the product you are using, please contact Mech-Mind Technical Support for assistance.

This page describes the configuration workflow for front/back and presence/absence classification. The function determines whether the target object's orientation is correct or whether the target object exists, helping prevent missed or reversed assembly.

Click Configuration Wizard, select Error-Proofing Check, then choose Front/Back or Presence/Absence Classification.

Workflow

The complete workflow includes four stages:

  1. Image Preprocessing: Improve image quality through color conversion, enhancement, denoising, and morphology operations.

  2. Pose Alignment: Align target pose to template to reduce position/angle variation.

  3. Error-Proofing Check: Configure ROI, labeling, and decision rules for automatic OK/NG classification.

  4. General Settings: Configure output ports for production-line integration.

Image Preprocessing

Before recognition, you can enable Convert Image Color Space or Image Preprocessing to improve target features.

Convert Image Color Space

Convert input image from one color space to another (for example, BGR to Gray or BGR to HSV) to highlight features for subsequent processing.

For details, see Convert Image Color Space.
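The software performs the conversion internally; as a rough illustration of what a BGR-to-grayscale conversion does, here is a minimal NumPy sketch. The weights are the standard ITU-R BT.601 luma coefficients; the function name is ours, not part of the product.

```python
import numpy as np

def bgr_to_gray(img):
    """Convert a BGR image (H, W, 3) to grayscale using the standard
    BT.601 luma weights. Illustrative only, not the product's API."""
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    return 0.114 * b + 0.587 * g + 0.299 * r

# A pure-red BGR pixel maps to gray value 0.299 * 255 ≈ 76.245.
red = np.zeros((1, 1, 3), dtype=np.float64)
red[0, 0, 2] = 255.0
print(bgr_to_gray(red)[0, 0])
```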

Image Preprocessing Parameters

Supports enhancement, denoising, morphology, grayscale inversion, and edge extraction.

For details, see Image Preprocessing.
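For intuition about the morphology operations listed above, the sketch below implements 3x3 binary dilation and erosion in plain NumPy. This is a simplified stand-in, assuming a square structuring element and zero padding; the product's implementation and parameters may differ.

```python
import numpy as np

def dilate3x3(mask):
    """Binary dilation with a 3x3 square structuring element,
    implemented by OR-ing the mask with its eight shifted copies."""
    padded = np.pad(mask, 1, mode="constant")
    out = np.zeros_like(mask)
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out |= padded[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode3x3(mask):
    """Binary erosion, expressed as the complement of dilating the complement."""
    return ~dilate3x3(~mask)

m = np.zeros((5, 5), dtype=bool)
m[2, 2] = True
print(dilate3x3(m).sum())  # single pixel grows to a 3x3 block -> 9
```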

Preview Preprocessing Result

After configuration, click Run Step or Run Project to preview results, then click Next.

Pose Alignment

After preprocessing, configure pose alignment so that the target pose in the current image is corrected to match the template pose.

Add Alignment Settings

Create a parameter group for pose alignment. Multiple groups are supported and are independent of each other.

Click Add to create a new group, choose alignment mode, and configure parameters.


Supported modes:

  • No Alignment: Use input image directly without pose correction.

  • 2D Alignment: Align through translation/rotation with edge-based matching. See 2D Alignment.

  • 2D Blob Alignment: Align based on selected Blob centroid and principal axis. See 2D Blob Alignment.

After creating a group, right-click the group name (or use the action button) to rename, delete, or duplicate it.


2D Alignment

2D Alignment uses translation and rotation to align the target object in the input image to the template.

Set Recognition Region

Set the effective alignment area. The region should fully cover the target object and include an appropriate margin.

  • Whole Image as Recognition Region: Use entire image.

  • Custom Recognition Region: Manually draw region and ignore unrelated background.

Recognize Target Object

Configure Target Template

After setting the region, click Edit to choose or edit a template in the 2D template editor.

Select representative and stable edge features to ensure unique and accurate matching. For details, see 2D Matching Template Editor.

Click Update after each template edit.

Adjust Recognition Parameters

Click Run Step to view matching result and tune parameters if needed.

For details, see 2D Alignment.

Click Next to continue.
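To give a feel for the matching behind 2D Alignment, the sketch below estimates the translation of a template within an image by brute-force normalized cross-correlation. This is a simplified illustration only: the product's edge-based matching also handles rotation and is far more efficient, and all names here are ours.

```python
import numpy as np

def match_offset(image, template):
    """Slide `template` over `image` and return the (row, col) offset with
    the highest normalized cross-correlation score. A rough stand-in for
    the translation part of 2D template matching."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best, best_score = (0, 0), -np.inf
    for y in range(image.shape[0] - th + 1):
        for x in range(image.shape[1] - tw + 1):
            w = image[y:y + th, x:x + tw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * tn
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best = score, (y, x)
    return best

rng = np.random.default_rng(0)
img = rng.random((20, 20))
tmpl = img[5:10, 7:12].copy()   # cut the template from a known location
print(match_offset(img, tmpl))  # (5, 7)
```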

2D Blob Alignment

2D Blob Alignment detects blobs, selects the target Blob based on its geometric features, and then aligns using the Blob's centroid and principal axis.

Set Recognition Region

Set the effective area with sufficient margin. Rectangle and circle region modes are supported, and multiple regions can be combined.
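The mixed rectangle/circle regions can be thought of as an OR-combined boolean mask over the image. A minimal NumPy sketch, with coordinate conventions of our own choosing:

```python
import numpy as np

def region_mask(shape, rects=(), circles=()):
    """Build a boolean recognition-region mask from rectangles
    (y0, x0, y1, x1) and circles (cy, cx, r); regions are OR-combined,
    mirroring mixed rectangle/circle regions. Illustrative only."""
    mask = np.zeros(shape, dtype=bool)
    for y0, x0, y1, x1 in rects:
        mask[y0:y1, x0:x1] = True
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    for cy, cx, r in circles:
        mask |= (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2
    return mask

m = region_mask((10, 10), rects=[(0, 0, 3, 3)], circles=[(7, 7, 2)])
print(m[1, 1], m[7, 7], m[5, 5])  # True True False
```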

Recognize Target Object

Tune parameters according to target features.

For details and tuning examples, see 2D Blob Alignment.
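The two quantities a Blob-based alignment relies on, centroid and principal axis, can be computed from the blob's pixel coordinates. The sketch below uses the eigenvector of the coordinate covariance matrix; it is a conceptual illustration, not the product's algorithm.

```python
import numpy as np

def blob_pose(mask):
    """Centroid and principal-axis angle (radians) of a binary blob,
    taken from the dominant eigenvector of the pixel-coordinate
    covariance matrix. Illustrative stand-in for blob alignment."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    cov = np.cov(np.stack([xs, ys]).astype(float))
    evals, evecs = np.linalg.eigh(cov)
    major = evecs[:, np.argmax(evals)]   # (dx, dy) of the long axis
    angle = np.arctan2(major[1], major[0])
    return (cy, cx), angle

mask = np.zeros((9, 9), dtype=bool)
mask[4, 1:8] = True                      # horizontal bar
(cy, cx), angle = blob_pose(mask)
print(cy, cx, int(round(np.degrees(angle))) % 180)  # 4.0 4.0 0
```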

View Running Result

Click Run Step or Run Project to inspect result, then click Next.

Error-Proofing Check

After alignment, configure ROI, labeling, and judgment rules for automatic OK/NG classification.

Capture Images

  1. Ensure image input is connected.

  2. One image is captured automatically when entering the tool. Click Capture Image to capture more images.

Use diverse samples that cover position/angle changes, lighting/background changes, and appearance variations (for example, deformation, stains, scratches, batch color differences).
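Real captures are always preferred, but if you prototype offline, simple transforms can simulate some of the orientation and appearance variation described above. A sketch under that assumption (this augmentation step is our suggestion, not a product feature):

```python
import numpy as np

def augment_views(img, rng):
    """Generate simple variants of one sample: four rotations, a mirror,
    and a brightness shift. Only *simulates* diversity; real captured
    images remain the better training source."""
    views = [np.rot90(img, k) for k in range(4)]
    views.append(np.fliplr(img))
    views.append(np.clip(img + rng.uniform(-0.1, 0.1), 0.0, 1.0))
    return views

rng = np.random.default_rng(1)
img = np.ones((4, 4)) * 0.5
print(len(augment_views(img, rng)))  # 6
```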

Set Target ROI

  1. Click Edit to enter ROI configuration.

  2. Draw ROI (rectangle or circle) to cover target features and avoid unrelated background.

  3. (Optional) Configure masked area to exclude glare, shadow, or fixed interference.

  4. Click Save and Use.
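The steps above amount to cropping to the ROI and excluding a masked area from it. A minimal NumPy sketch with rectangles only (the GUI also supports circles; the coordinate convention here is ours):

```python
import numpy as np

def apply_roi(img, roi, masked=None):
    """Crop `img` to an ROI rectangle (y0, x0, y1, x1) and zero out an
    optional masked rectangle (in full-image coordinates) that covers
    glare, shadow, or fixed interference. Illustrative only."""
    y0, x0, y1, x1 = roi
    out = img[y0:y1, x0:x1].astype(float).copy()
    if masked is not None:
        my0, mx0, my1, mx1 = masked
        out[max(my0 - y0, 0):my1 - y0, max(mx0 - x0, 0):mx1 - x0] = 0.0
    return out

img = np.ones((10, 10))
crop = apply_roi(img, (2, 2, 8, 8), masked=(4, 4, 6, 6))
print(crop.shape, crop.sum())  # (6, 6) 32.0
```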

Label Images

  1. Click Edit to enter labeling.

  2. Select each ROI and label as OK or NG.

  3. Continue capturing and labeling until both classes are sufficiently covered.

  4. Click Save and Use.

Train and Validate Model

  1. Click Train and wait for completion.

  2. Click Validate to verify results and key parameters:

  • Validation Result: The OK or NG classification result.

  • Time Cost: Inference time for a single sample (ms).

  • Confidence Threshold: Minimum confidence required to classify a sample as OK. Samples below the threshold are classified as NG.
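The confidence-threshold rule reduces to a single comparison. A sketch of the decision (the default threshold value here is a placeholder, not a product default):

```python
def judge(confidence, threshold=0.5):
    """OK/NG decision: a sample is OK only when the model's OK-class
    confidence reaches the threshold; everything below is NG."""
    return "OK" if confidence >= threshold else "NG"

print(judge(0.93), judge(0.42))  # OK NG
```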

After validation passes, click Next to continue.

General Settings

Configure auxiliary settings such as output ports.

Configure Output Ports

Select output ports as needed:

  • Classification judgment result (OK/NG)

  • Classification status (true/false)

  • Detected image

When selected, corresponding output ports are added automatically to the step.
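Downstream code can treat the three optional ports as an optional-field record. An illustrative container (field names are ours, not the product's port identifiers):

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class CheckOutputs:
    """Mirrors the three optional output ports of the step.
    Illustrative only; names are not the product's identifiers."""
    judgment: Optional[str] = None       # "OK" / "NG"
    status: Optional[bool] = None        # True for OK, False for NG
    detected_image: Optional[Any] = None

out = CheckOutputs(judgment="NG", status=False)
print(out.judgment, out.status)  # NG False
```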
