Train a High-Quality Model

This section introduces the factors that most affect the model quality and how to train a high-quality image classification model.

Ensure Image Quality

  1. Avoid overexposure, dim lighting, color distortion, blur, occlusion, etc. These conditions can cause the loss of features that the deep learning model relies on and therefore degrade the training results. A minimal scripted check for such problems is sketched after this list.

    (Example images: overexposed, dim lighting, color distortion, blurred, occluded)
  2. Ensure that the background, perspective, and camera height during image capture are consistent with those of the actual application. Any inconsistency can reduce the effectiveness of deep learning in practical applications; in severe cases, the data must be re-collected. Please confirm the conditions of the actual application in advance.

    (Example images: inconsistent background, mismatched field of view, mismatched height)
The result of image classification is sensitive to lighting, so lighting conditions need to be kept consistent during data collection. If the lighting differs between morning and evening, collect data separately for each condition.
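
The following is a minimal sketch (not a Mech-DLK feature) of how such image problems can be screened automatically before labeling, using OpenCV. The thresholds and file path are illustrative assumptions and should be tuned per project.

    # Minimal screening sketch: flag frames that are too dark, possibly
    # overexposed, or possibly blurred. Thresholds are illustrative only.
    import cv2

    def check_image(path, dark_thresh=40, bright_thresh=215, blur_thresh=100.0):
        """Return a list of potential quality problems for one image."""
        problems = []
        img = cv2.imread(path)
        if img is None:
            return ["unreadable file"]
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

        mean_brightness = gray.mean()
        if mean_brightness < dark_thresh:
            problems.append("too dark")
        elif mean_brightness > bright_thresh:
            problems.append("possibly overexposed")

        # Variance of the Laplacian is a common sharpness proxy:
        # low values suggest a blurred image.
        if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_thresh:
            problems.append("possibly blurred")
        return problems

    print(check_image("sample.jpg"))  # hypothetical image path

Images flagged by such a check should be reviewed and, if necessary, re-captured rather than added to the training set.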

Ensure Data Quality

The Classification module obtains a model by learning the features of existing images and applies what is learned to the actual application. Therefore, to train a high-quality model, the conditions of the collected and selected data must be consistent with those of the actual applications.

Collect Data

Data should cover the various placement conditions in proper proportion. For example, if materials arrive both horizontally and vertically in actual production but only images of horizontally placed materials are collected for training, the classification performance on vertically placed materials cannot be guaranteed. Therefore, during data collection, it is necessary to consider the various conditions of the actual application, including the following:

  • The features presented by objects placed in different orientations.

  • The features presented by objects placed in different positions.

    1. Different orientations

      (Example images: collection with different object orientations)
    2. Different positions

      (Example images: collection with different object positions)

Data Collection Examples

  1. A valve tube project: single object class. The front and back sides of the valve tubes need to be distinguished. Positions are generally fixed with only small deviations. Fifteen images each were collected for the front side and the back side.

    (Example image: valve tube project)
  2. An engine valve assembly project: single object class. Whether the object is correctly placed in the slot needs to be determined. Outside the slot, the object may appear in various positions and orientations, so both factors were considered and 20 images were collected of objects outside the slot. Inside the slot, only different positions need to be considered, so 10 images were collected of objects inside the slot.

    (Example image: engine valve assembly project)
  3. A sheet metal project: two object classes. Objects of different sizes need to be recognized, and objects may come in different positions and orientations. Twenty images each were collected for the front side and the back side.

    (Example images: sheet metal project)

Select the Right Dataset

  1. Control dataset image quantities

    For first-time model building with the Classification module, capturing 30 images is recommended. More images are not necessarily better: adding a large number of low-quality images in the early stage does not help improve the model later and only lengthens the training time.

  2. Collect representative data

    Image capturing should cover all conditions of the objects to be recognized, in terms of illumination, color, size, etc.

    • Lighting: Project sites usually have environmental lighting changes, and the data should contain images with different lighting conditions.

    • Color: Objects may come in different colors, and the data should contain images of objects of all the colors.

    • Size: Objects may come in different sizes, and the data should contain images of objects of all existing sizes.

      If on-site objects may appear rotated, scaled, etc. in the images and the corresponding image data cannot be collected, the data can be supplemented by adjusting the data augmentation training parameters so that all on-site conditions are covered by the datasets (see the sketch after this list).
  3. Balance data proportion

    The numbers of images of the different object classes in the dataset should be proportioned according to the actual project; otherwise, the training effect will be affected. Avoid cases where, for example, 20 images are of one object class and only 3 are of another.

  4. Images should be consistent with the application site

    The factors that need to be consistent include lighting conditions, object features, background, and field of view.
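
As a rough illustration of the checks above, the sketch below counts the images per class (assuming one folder per class, e.g. dataset/front and dataset/back) so that imbalances such as 20 versus 3 images are caught before training, and it defines a simple torchvision augmentation pipeline for rotation and scale variations that could not be captured on site. In Mech-DLK itself, the equivalent effect is achieved by adjusting the data augmentation training parameters; the folder layout and parameter values here are only assumptions.

    # Count images per class folder and define an illustrative augmentation
    # pipeline. Folder layout, file extensions, and parameters are assumptions.
    from pathlib import Path
    from torchvision import transforms

    dataset_dir = Path("dataset")  # hypothetical location of the collected images
    extensions = {".png", ".jpg", ".jpeg", ".bmp"}
    counts = {
        class_dir.name: sum(1 for f in class_dir.iterdir() if f.suffix.lower() in extensions)
        for class_dir in dataset_dir.iterdir() if class_dir.is_dir()
    }
    print(counts)  # e.g. {'front': 15, 'back': 15}; large imbalances should be fixed before training

    # Illustrative augmentation pipeline for orientation/scale/lighting
    # variations that could not be collected on site.
    augment = transforms.Compose([
        transforms.RandomRotation(degrees=15),
        transforms.RandomResizedCrop(size=224, scale=(0.8, 1.0)),
        transforms.ColorJitter(brightness=0.2),
        transforms.ToTensor(),
    ])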

Ensure Labeling Quality

Please ensure labeling consistency, i.e., that no labels are missed or incorrect.

(Example image: labeling quality)
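
A simple scripted cross-check can catch missed or unexpected labels before training. The sketch below assumes a hypothetical labels.csv with "filename,label" rows and a front/back class set; adapt the file layout and class names to the actual project.

    # Flag images with no label entry and labels outside the expected class set.
    # File locations, format, and class names are assumptions for illustration.
    import csv
    from pathlib import Path

    image_dir = Path("dataset/images")      # hypothetical image folder
    expected_classes = {"front", "back"}    # hypothetical class names

    labels = {}
    with open("dataset/labels.csv", newline="") as f:
        for row in csv.DictReader(f):
            labels[row["filename"]] = row["label"]

    for img in sorted(image_dir.glob("*.png")):
        label = labels.get(img.name)
        if label is None:
            print(f"missing label: {img.name}")
        elif label not in expected_classes:
            print(f"unexpected label '{label}': {img.name}")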

Class Activation Maps

After the training of the image classification model is completed, click Generate CAM to generate the class activation maps, and then click Show class activation maps (CAM) to view them. The class activation maps highlight the feature regions in the images that the model pays attention to during training; they help check the classification performance and thus provide references for optimizing the model.

(Example image: class activation maps)
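
Mech-DLK generates and displays the class activation maps for you; the sketch below only illustrates the underlying idea with a Grad-CAM-style computation on a generic torchvision model. The model, target layer, and image path are assumptions, not the implementation used by the software.

    # Grad-CAM-style illustration: weight the last convolutional feature maps by
    # their averaged gradients to see which regions drive the predicted class.
    import torch
    import torch.nn.functional as F
    from torchvision import models, transforms
    from PIL import Image

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()

    activations, gradients = {}, {}
    model.layer4.register_forward_hook(lambda m, i, o: activations.update(value=o.detach()))
    model.layer4.register_full_backward_hook(lambda m, gi, go: gradients.update(value=go[0].detach()))

    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(Image.open("sample.jpg").convert("RGB")).unsqueeze(0)  # hypothetical image path

    logits = model(x)
    class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Average the gradients per feature map, weight the activations, and upsample.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]

A bright region in such a heatmap means the model relied on that area when assigning the class; if the highlighted regions fall on the background rather than the object, the dataset or labels likely need attention.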
