FAQ

How can I troubleshoot an ineffective defect segmentation model?
  1. Check the labels for errors.

  2. Check that all kinds of defects are included in the training set.

  3. Check that the input image size is reasonable. If a defect occupies only a tiny part of the image, it may not provide effective features for training the model; the sketch after this list shows one way to check.
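
For check 3, one quick way to gauge whether labeled defects may be too small is to measure the defect pixel area in each label mask. Below is a minimal sketch, assuming the labels are exported as binary mask images; the folder layout and the 0.05% area threshold are illustrative assumptions, not Mech-DLK behavior.

    # Flag training images whose labeled defect area is tiny relative to the image.
    # Assumes binary defect masks (non-zero = defect) stored as PNG files.
    from pathlib import Path

    import cv2  # OpenCV: pip install opencv-python

    MASK_DIR = Path("dataset/masks")  # hypothetical folder layout
    MIN_AREA_RATIO = 0.0005           # 0.05% of the image; illustrative threshold

    for mask_path in sorted(MASK_DIR.glob("*.png")):
        mask = cv2.imread(str(mask_path), cv2.IMREAD_GRAYSCALE)
        if mask is None:
            continue
        defect_ratio = (mask > 0).sum() / mask.size
        if 0 < defect_ratio < MIN_AREA_RATIO:
            print(f"{mask_path.name}: defect covers only {defect_ratio:.4%} of the image")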

When should I use the Defect Segmentation module, and when the Unsupervised Segmentation module?

Generally speaking, both modules can recognize defect areas in images, but they differ considerably. Consider the following when choosing between them:

  • The Defect Segmentation module aims to segment defects; in other words, it must accurately determine the position, shape, and size of each defect. The Unsupervised Segmentation module, by contrast, is designed to judge whether an image contains any defect and, for NG images, to display the areas where defects may be present.

  • For the former, all types of defects must be labeled during the labeling process. The latter requires no defect labeling, and NG images are unnecessary for model training; only OK images are used as the training set.

  • The latter can only show a rough defect area and cannot finely segment a defect. If you need to segment defects in an image, use the Defect Segmentation module.

Does it work if I simulate changes in lighting conditions during data collection by manually adjusting the camera exposure or adding supplemental light?

No. Simulated lighting conditions may not accurately reflect the actual conditions, so image data collected under them cannot provide accurate object features for training the model. Therefore, if the on-site lighting conditions change over the course of the day, collect image data under each of those conditions.

The camera is fixed, and the incoming objects’ positions vary slightly. Does it work if I simulate the position changes of the objects by moving the camera during data collection?

No. The camera should be fixed in position before any data collection. Moving the camera during data collection will affect the extrinsic parameters of the camera and the training effect. Setting a larger ROI can help fully capture the changes in object position.

If the previously used camera has unsatisfactory imaging quality and is replaced by a new camera, is it necessary to add the images taken by the old camera to the dataset?

No. After camera replacement, all data used for model training should come from the new camera. Please conduct data collection again using the new camera and use the data for training.

Will changing the background affect model performance?

Yes. Changing the background will lead to recognition errors, such as false recognition or failure to recognize a target object. Therefore, once the background is set in the early stage of data collection, it is best not to change the background afterward.

Does it work if I use the image data collected with different camera models at different heights together to train one model?

Yes, but pay attention to the ROI settings. Select a different ROI for images taken at each height to reduce the differences among the images; the sketch below illustrates the idea.
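
As an illustration, the following sketch crops a per-height ROI and rescales the crops to a common size so that objects photographed at different heights appear at a similar scale. The ROI values, target size, and helper name are assumptions made up for the example.

    # Crop a per-height ROI and rescale it to a common size so that objects
    # photographed at different heights appear at a similar scale.
    import cv2  # pip install opencv-python

    # Hypothetical ROIs (x, y, width, height), one per camera height.
    ROI_BY_HEIGHT = {
        "height_1500mm": (400, 300, 1200, 900),
        "height_2000mm": (600, 450, 800, 600),
    }
    TARGET_SIZE = (800, 600)  # common output size (width, height); illustrative

    def normalize(image_path: str, height_key: str):
        """Return the ROI crop of the image, rescaled to TARGET_SIZE."""
        x, y, w, h = ROI_BY_HEIGHT[height_key]
        image = cv2.imread(image_path)
        if image is None:
            raise FileNotFoundError(image_path)
        crop = image[y:y + h, x:x + w]
        return cv2.resize(crop, TARGET_SIZE)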

For highly reflective metal parts, what factors should I consider during data collection?

Please avoid overexposure and underexposure. If overexposure in parts of the image is inevitable, make sure the contour of the object is clear.

If the model performs poorly, how can I identify the possible reasons?

Factors to consider: quantity and quality of the training data, data diversity, on-site ROI parameters, and on-site lighting conditions.

  1. Quantity: whether there is enough training data for the model to achieve good performance.

  2. Quality: whether the data quality is up to standard, i.e., whether images are clear and neither over-exposed nor under-exposed (a simple check is sketched after this list).

  3. Data diversity: whether the data cover all the situations that may occur on-site.

  4. ROI parameters: whether the ROI parameters for data collection are consistent with those for the actual application.

  5. Lighting conditions: whether the lighting conditions change during the actual application, and whether they are consistent with those during data collection.
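
For points 2 and 5, a simple histogram-tail check can flag images that are likely over- or under-exposed before they enter the training set. This is a minimal sketch; the pixel-value cutoffs and the 2% tail threshold are illustrative assumptions.

    # Flag images that are likely over- or under-exposed using histogram tails.
    from pathlib import Path

    import cv2  # pip install opencv-python

    IMAGE_DIR = Path("dataset/images")  # hypothetical folder layout
    TAIL_RATIO = 0.02                   # illustrative: >2% saturated pixels is suspect

    for path in sorted(IMAGE_DIR.glob("*.png")):
        gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
        if gray is None:
            continue
        over = (gray >= 250).mean()   # fraction of near-white pixels
        under = (gray <= 5).mean()    # fraction of near-black pixels
        if over > TAIL_RATIO:
            print(f"{path.name}: possibly over-exposed ({over:.1%} near-white)")
        if under > TAIL_RATIO:
            print(f"{path.name}: possibly under-exposed ({under:.1%} near-black)")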

How can I improve unstable model performance caused by complicated on-site lighting conditions, e.g., objects covered by shadows?

Please add shading or supplemental light as needed.

Why does the inconsistency between the ROI settings of on-site data and training data affect the confidence values of instance segmentation?

The inconsistency will result in objects being out of the optimal recognition range of the model, thus affecting the confidence. Therefore, please keep the ROI settings of the on-site data and training data consistent.

What are super models for cartons?

Super models for carton palletizing and depalletizing are available from the Mech-Mind Download Center. They can be used directly at most project sites to correctly segment most cartons, without collecting additional image data or training.

What scenarios can the super models for cartons be applied to?

The super models are suitable for palletizing/depalletizing boxes of single or multiple colors and surface patterns. Note, however, that they are only applicable to boxes placed in horizontal layers, not at an angle to the ground.

How should I collect data for the super model for boxes?

Test the super model first. If it sometimes fails to segment correctly, collect about 20 images of the situations where it performs poorly and use them to further train the super model.

ROI position deviations may occur when old projects are opened in a newer version of Mech-DLK.

The ROI will be corrected after you click Validate.

When training a model in Mech-DLK, what should I do if the error message “ModuleNotFoundError: No module named ‘onnxruntime’” appears?

Go to the “Users” folder on the system drive (C:) and open the folder of the current computer user. Check whether the folder “AppData/Roaming/Python/Python36/site-packages” is empty. If it is not, delete all of its contents.
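
If you prefer to script this check, the sketch below inspects that folder and deletes its contents. It follows the path given above (Python36, as in the error environment) and assumes the current user's home folder is under C:\Users; close Mech-DLK before running it.

    # Inspect the user-level site-packages folder named above and clear it.
    # Caution: this deletes everything inside that folder.
    import shutil
    from pathlib import Path

    site_packages = Path.home() / "AppData/Roaming/Python/Python36/site-packages"

    if site_packages.exists() and any(site_packages.iterdir()):
        print("Deleting contents of", site_packages)
        for entry in site_packages.iterdir():
            if entry.is_dir():
                shutil.rmtree(entry)   # remove sub-folder and its contents
            else:
                entry.unlink()         # remove file
    else:
        print("Folder is missing or already empty:", site_packages)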

Can AMD CPUs run CPU models?

AMD CPUs do not support running CPU models.

What are the differences between the Classification module and the Unsupervised Segmentation module?

Both modules can divide images into several classes, but they differ in usage and function.

  • Data labeling

    • The main function of the Classification module is to classify images. Therefore, labeled data of each class is required to train a model.

    • The Unsupervised Segmentation module requires only labeled OK images. You do not need to label NG images or specific defect types.

  • Implementation method

    • The Classification module classifies images according to the specified labels. When detecting NG images, it can only recognize NG images containing one or more specific defect types; NG images with the same type of defect form a class.

    • The Unsupervised Segmentation module determines whether an image is OK, NG, or Unknown by using the specified threshold. When detecting NG images, it can recognize NG images with multiple defect types (the sketch after this answer illustrates the threshold logic).

  • Result display

    • The Classification module generates results based on the labels that you specify. Images can only be divided into the specified classes.

    • The Unsupervised Segmentation module can divide images into the OK, NG, and Unknown classes. It can also indicate the general area of defects.

Overall, the Classification module is used when the number of defect classes is limited, while the Unsupervised Segmentation module is used to detect NG images and indicate the general area of defects without determining specific defect classes in advance.
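
To make the threshold behavior concrete, the sketch below shows how an anomaly score and two thresholds could map an image to OK, NG, or Unknown. The score source and the threshold values are illustrative assumptions, not Mech-DLK's actual implementation.

    # Illustrative OK/NG/Unknown decision based on an anomaly score.
    OK_THRESHOLD = 0.3  # scores below this count as OK (illustrative)
    NG_THRESHOLD = 0.7  # scores above this count as NG (illustrative)

    def classify(anomaly_score: float) -> str:
        """Map a model's anomaly score in [0, 1] to a class label."""
        if anomaly_score < OK_THRESHOLD:
            return "OK"
        if anomaly_score > NG_THRESHOLD:
            return "NG"
        return "Unknown"  # ambiguous scores between the two thresholds

    print(classify(0.1), classify(0.5), classify(0.9))  # -> OK Unknown NG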
