Information Reading (OCR)

You are currently viewing the documentation for a pre-release version (2.2.0). To access documentation for other versions, click the "Switch Version" button located in the upper-right corner of the page.

Note: If you're unsure about the version of the product you are using, please contact Mech-Mind Technical Support for assistance.

This page describes the configuration workflow for OCR-based information reading, which recognizes printed, engraved, or inkjet-marked characters on target objects and extracts key information such as the model and batch number.

Click Configuration Wizard, select Information Reading, then choose Optical Character Recognition.

Workflow

The complete workflow includes four stages:

  1. Image Preprocessing

  2. Pose Alignment

  3. Information Reading

  4. General Settings

Image Preprocessing

Before recognition, you can enable Convert Image Color Space or Image Preprocessing to make the target features more distinct.

Convert Image Color Space

Converts the input image from one color space to another (for example, BGR to grayscale or BGR to HSV) to highlight features for subsequent processing.

For details, see Convert Image Color Space.
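To illustrate the idea, converting BGR to grayscale collapses three channels into a single intensity value; a common, generic choice is the ITU-R BT.601 luma weights. This is a sketch of the concept, not the product's implementation:

```python
def bgr_to_gray(b, g, r):
    """Convert one BGR pixel to a grayscale intensity using the
    ITU-R BT.601 luma weights (a common, generic choice)."""
    return round(0.114 * b + 0.587 * g + 0.299 * r)

# Pure red (BGR = 0, 0, 255) maps to a fairly dark gray:
print(bgr_to_gray(0, 0, 255))  # 76
```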

Image Preprocessing Parameters

Supported operations include image enhancement, denoising, morphological operations, grayscale inversion, and edge extraction.

For details, see Image Preprocessing.
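As a minimal example of one such operation, grayscale inversion turns dark characters on a bright background into bright characters on a dark background. This sketch works on a nested-list "image" in plain Python, purely for illustration:

```python
def invert_grayscale(image):
    """Invert an 8-bit grayscale image (nested list of 0-255 values):
    dark engraved characters become bright, and vice versa."""
    return [[255 - px for px in row] for row in image]

image = [[0, 128, 255],
         [30, 200, 60]]
print(invert_grayscale(image))  # [[255, 127, 0], [225, 55, 195]]
```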

Preview Preprocessing Result

After configuration, click Run Step or Run Project to preview results, then click Next.

Pose Alignment

After preprocessing, configure pose alignment so that the target's pose in the current image is corrected to match the template pose.

Add Alignment Settings

Create a parameter group for pose alignment. Multiple groups are supported, and they are independent of each other.

Click Add to create a new group, choose alignment mode, and configure parameters.


Supported modes:

  • No Alignment: Use the input image directly, without pose correction.

  • 2D Alignment: Align through translation and rotation using edge-based matching. See 2D Alignment.

  • 2D Blob Alignment: Align based on the selected blob's centroid and principal axis. See 2D Blob Alignment.

After creating a group, right-click the group name (or use the action button) to rename, delete, or duplicate it.


2D Alignment

2D Alignment uses translation and rotation to align the target object in the input image with the template.
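Conceptually, such an alignment is a rigid 2D transform: a rotation followed by a translation that maps points from the detected pose to the template pose. The sketch below illustrates the math only; the actual matching parameters are configured in the software:

```python
import math

def align_point(x, y, angle_deg, tx, ty):
    """Apply a rigid 2D transform: rotate (x, y) about the origin by
    angle_deg, then translate by (tx, ty)."""
    a = math.radians(angle_deg)
    xr = x * math.cos(a) - y * math.sin(a)
    yr = x * math.sin(a) + y * math.cos(a)
    return xr + tx, yr + ty

# Rotating (1, 0) by 90 degrees, then translating by (10, 0):
x, y = align_point(1.0, 0.0, 90.0, 10.0, 0.0)  # ~(10.0, 1.0)
```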

Set Recognition Region

Set the effective alignment area. The region should fully cover the target object, with an appropriate margin.

  • Whole Image as Recognition Region: Use the entire image.

  • Custom Recognition Region: Manually draw a region to exclude unrelated background.

Recognize Target Object

Configure Target Template

After setting the region, click Edit to choose or edit the template in the 2D matching template editor.

Select representative and stable edge features to ensure unique and accurate matching. For details, see 2D Matching Template Editor.

Click Update after each template edit.

Adjust Recognition Parameters

Click Run Step to view the matching result and tune the parameters if needed.

For details, see 2D Alignment.

Click Next to continue.

2D Blob Alignment

2D Blob Alignment detects blobs, selects the target blob by its geometric features, and then aligns using the blob's centroid and principal axis.
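A blob's centroid and principal axis can be derived from its image moments. The sketch below (generic Python, not the product's code) computes both from a blob's pixel coordinates:

```python
import math

def centroid_and_axis(pixels):
    """Compute a blob's centroid and principal-axis angle (degrees)
    from its pixel coordinates, using central image moments."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    mu20 = sum((x - cx) ** 2 for x, _ in pixels) / n
    mu02 = sum((y - cy) ** 2 for _, y in pixels) / n
    mu11 = sum((x - cx) * (y - cy) for x, y in pixels) / n
    angle = 0.5 * math.atan2(2 * mu11, mu20 - mu02)
    return (cx, cy), math.degrees(angle)

# A thin horizontal strip has its principal axis at 0 degrees:
strip = [(x, y) for x in range(10) for y in (0, 1)]
(cx, cy), angle = centroid_and_axis(strip)  # (4.5, 0.5), 0.0
```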

Set Recognition Region

Set the effective area with sufficient margin. Rectangle and circle region modes are supported, and multiple regions can be combined.

Recognize Target Object

Tune the parameters according to the target's features.

For details and tuning examples, see 2D Blob Alignment.

View Running Result

Click Run Step or Run Project to inspect result, then click Next.

Information Reading

After alignment, this stage recognizes letters, digits, and symbols using deep-learning OCR.

Capture Images

  1. Ensure the image input is connected.

  2. One image is captured automatically when you enter the tool. Click Capture Image to collect additional samples.

Collect diverse samples that cover variations in position and angle, lighting and background, and target appearance.

Set Target Region

  1. Click Edit to enter ROI configuration.

  2. Set the single-character size by adjusting the orange character-size box.

  3. Set target region:

    • Whole image as target region

    • Custom target region (rectangle or annulus)

If an annulus is used, set the reading direction to clockwise or counterclockwise.

  4. (Optional) Configure a masked area using polygon selection.

  5. Click Save and Use.

Model Validation and Optimization

After ROI setup, click Validate.

Set Validation Parameters and Verify Effect

The validation parameters are described below.

Validation Result

Displays the OCR result. If judgment is enabled, the result is shown as OK or NG according to the configured rule.

Time Cost

The inference time per sample, in milliseconds (ms).

Confidence Threshold

The minimum confidence required to accept a recognized character. Characters below this threshold are treated as recognition failures.
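As an illustration of how such a threshold might be applied (treating any low-confidence character as a failed read is an assumption here; the product's exact failure behavior may differ):

```python
def read_with_threshold(chars, threshold):
    """chars: list of (character, confidence) pairs from OCR.
    Return the recognized string, or None if any character's
    confidence falls below the threshold (treated as a failed read)."""
    if any(conf < threshold for _, conf in chars):
        return None
    return "".join(ch for ch, _ in chars)

print(read_with_threshold([("A", 0.98), ("B", 0.91)], 0.5))  # AB
print(read_with_threshold([("A", 0.98), ("7", 0.42)], 0.5))  # None
```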

Character Correction

Constrains the first N characters using wildcards: ? matches any character, $ a letter, % a digit, @ a symbol, ! an uppercase letter, and & a lowercase letter.
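A plain-Python sketch of how such a wildcard pattern could be checked against a recognized string (the product's exact matching semantics may differ; the function name is illustrative):

```python
def matches_pattern(text, pattern):
    """Check the first len(pattern) characters of text against the
    character-correction wildcards: ? any, $ letter, % digit,
    @ symbol, ! uppercase letter, & lowercase letter."""
    if len(text) < len(pattern):
        return False
    checks = {
        "?": lambda c: True,
        "$": lambda c: c.isalpha(),
        "%": lambda c: c.isdigit(),
        "@": lambda c: not c.isalnum(),
        "!": lambda c: c.isalpha() and c.isupper(),
        "&": lambda c: c.isalpha() and c.islower(),
    }
    return all(checks[w](c) for w, c in zip(pattern, text))

print(matches_pattern("AB12-X", "!!%%@"))  # True
print(matches_pattern("Ab12-X", "!!%%@"))  # False ('b' is not uppercase)
```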

Recognition Targets

Select the character types to recognize: uppercase letters, lowercase letters, digits, and symbols.

Concatenation Separator

The separator inserted between lines when multiple lines of text are recognized.

Enable Judgment

Enables pass/fail verification based on character count or a content rule (a manually entered pattern or a global variable).
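A sketch of such a judgment rule; representing the content rule as a regular expression is an assumption made here for illustration:

```python
import re

def judge(text, expected_count=None, content_pattern=None):
    """Return "OK" if the recognized text passes the enabled rules
    (character count and/or a content pattern), otherwise "NG"."""
    if expected_count is not None and len(text) != expected_count:
        return "NG"
    if content_pattern is not None and not re.fullmatch(content_pattern, text):
        return "NG"
    return "OK"

print(judge("LOT2024", expected_count=7))             # OK
print(judge("LOT2024", content_pattern=r"LOT\d{4}"))  # OK
print(judge("LOT24", expected_count=7))               # NG
```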

Incremental Training (Optional)

If the OCR quality is not satisfactory, use incremental training:

  • Add recognition content: Annotate missed or misrecognized characters and provide the correct text.

  • Add exclusion content: Annotate background regions that cause false positives.

Then click Train to retrain the model, return to validation, and verify the improvement.

After validation passes, click Save and Use, then Next.

General Settings

Configure output ports according to production requirements.

Configure Output Ports

By default, the recognized string is output. Optional outputs:

  • Judgment result (OK/NG)

  • Recognition status (true/false)

  • Detected image

When selected, the corresponding output ports are added to the 2D Target Object Recognition step.
