Perform Dataset Testing in Batches under Operation Mode


Introduction

In Operation Mode, you can test a dataset in batches. This feature is available for the Defect Segmentation module and the Classification module; it is unavailable under cascade mode.

After model training, select Tools › Operation Mode in the menu bar to open the Operation Mode window.

  • Defect Segmentation: Once data is imported, you can either have the images checked automatically by presetting them as OK or NG, or check them manually.

  • Classification: Once data is imported, you need to perform a manual check.

After the data check, click the Export report button to get a report covering accuracy, GPU usage, inference time, and more.
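
The check logic can be summarized as follows. This is a minimal sketch in plain Python, not Mech-DLK code or its API, assuming each image receives an OK/NG verdict from the model; the function name check is purely illustrative.

    # Illustrative only, not Mech-DLK code: how an image's verdict is checked.
    # "preset" is the label assigned before inference; None stands for Check manually.
    from typing import Optional

    def check(verdict: str, preset: Optional[str]) -> Optional[bool]:
        """Return True/False for an automatic check, or None when a manual
        right/wrong judgment is still needed."""
        if preset in ("OK", "NG"):
            return verdict == preset   # automatic check against the preset label
        return None                    # Check manually: the user clicks right or wrong

    print(check("OK", "OK"))   # True  -> counted as correct
    print(check("NG", "OK"))   # False -> preset OK but inferred NG
    print(check("OK", None))   # None  -> still needs a manual check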

Use the Operation Mode

Defect Segmentation

Steps

  1. Select data source

    For Data Source, you can select either Mech-DLK or Folder.

    Mech-DLK

    Use all images in the current project to demonstrate the performance of the trained model.

    Folder

    Use a large amount of new data for model testing. After you select the image folder path, all images in the folder are imported as a new dataset, which is independent of the original datasets in the project.

    If new images have been added to the current project, they will also be imported, along with the training set and validation set, when you select Mech-DLK as the data source.
  2. Load the model

    Click Load Model. After the model is loaded, click Next to enter the inference interface.

  3. Preset data

    When Preset as OK or Preset as NG is selected, an automatic check will be performed after inference. When Check manually is selected, a manual check should be performed after inference.

  4. Inference

    After presetting, click Infer to start the inference. Once inference is complete, relevant data such as the inference time and GPU usage will be displayed under Running information.

  5. Perform a manual check (Skip this step if data are preset)

    Check the inference results manually: click right when an inference result is correct and wrong when it is incorrect. After the check, you can view the statistics of false positives (FPs) and false negatives (FNs) under Running information.

  6. Export the report

    After the data check, you can click Export report to view the accuracy, FN rate, FP rate, and other statistics.

Running Information

  • Inference time

    Displays the Image inference time and the Average inference time.

  • Num of FN/FP images

    • When images are preset as OK, any image inferred to be NG is counted as an FP; when images are preset as NG, any image inferred to be OK is counted as an FN.

    • During the manual check, an image inferred to be OK but actually NG (an incorrect inference result) is counted as an FN, and an image inferred to be NG but actually OK is counted as an FP (see the sketch after this list).

  • GPU usage

    Displays the Current GPU usage.
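
The counting rules above translate directly into the statistics in the exported report. Below is a minimal sketch in plain Python, not Mech-DLK code; the FP rate, FN rate, and accuracy formulas at the end are common definitions assumed for illustration, as the report's exact formulas are not documented here.

    # Illustrative sketch of the FN/FP counting rules above; not Mech-DLK code.
    # Each entry is (preset or actual label, model verdict); the data is made up.
    checks = [
        ("OK", "OK"),  # correct
        ("OK", "NG"),  # OK image inferred as NG -> false positive (FP)
        ("NG", "NG"),  # correct
        ("NG", "OK"),  # NG image inferred as OK -> false negative (FN)
        ("NG", "NG"),  # correct
    ]

    fp = sum(1 for actual, verdict in checks if actual == "OK" and verdict == "NG")
    fn = sum(1 for actual, verdict in checks if actual == "NG" and verdict == "OK")
    correct = sum(1 for actual, verdict in checks if actual == verdict)
    ok_total = sum(1 for actual, _ in checks if actual == "OK")
    ng_total = sum(1 for actual, _ in checks if actual == "NG")

    # Assumed (common) definitions; the exported report may define these differently.
    print(f"Num of FP images: {fp}, Num of FN images: {fn}")   # 1 and 1
    print(f"Accuracy: {correct / len(checks):.0%}")            # 3/5 = 60%
    print(f"FP rate:  {fp / ok_total:.0%}")                    # 1/2 = 50%
    print(f"FN rate:  {fn / ng_total:.0%}")                    # 1/3 = 33%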

Classification

Steps

  1. Select data source

    For Data Source, you can select either Mech-DLK or Folder.

    Mech-DLK

    Use all images in the current project to demonstrate the performance of the trained model.

    Folder

    Use a large amount of new data for model testing. After you select the image folder path, all images in the folder are imported as a new dataset, which is independent of the original datasets in the project.

    If new images have been added to the current project, they will also be imported, along with the training set and validation set, when you select Mech-DLK as the data source.
  2. Load the model

    Click Load Model. After the model is loaded successfully, click Next to enter the inference interface.

  3. Inference

    Click Infer to start the inference. Once inference is complete, relevant data such as the inference time and GPU usage will be displayed under Running information.

  4. Perform a manual check

    Check the inference results manually: click right when an inference result is correct and wrong when it is incorrect.

  5. Export the report

    After the manual check, you can click Export report to view the inference time, GPU usage, accuracy, and other statistics.
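
As a worked example of how the manual right/wrong judgments can yield an accuracy figure (made-up counts, assuming accuracy is simply correct results over total images):

    # Illustrative only: turning manual right/wrong judgments into an accuracy figure.
    # Made-up example: 200 test images, 188 judged right and 12 judged wrong.
    right, wrong = 188, 12
    total = right + wrong
    print(f"Accuracy = {right}/{total} = {right / total:.1%}")  # 188/200 = 94.0%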

Running Information

  • Inference time

    Displays the Image inference time and the Average inference time.

  • GPU usage

    Displays the Current GPU usage.
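
If you want to cross-check the GPU usage shown under Running information, you can query the GPU directly with NVIDIA's nvidia-smi tool, independently of Mech-DLK (assuming an NVIDIA GPU with the driver installed):

    # Query current GPU utilization and memory with nvidia-smi (NVIDIA driver required).
    # Independent of Mech-DLK; useful only as a rough cross-check during inference.
    import subprocess

    output = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()

    for i, line in enumerate(output.splitlines()):
        util, mem_used, mem_total = (x.strip() for x in line.split(","))
        print(f"GPU {i}: {util}% utilization, {mem_used}/{mem_total} MiB memory in use")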
