Train a High-Quality Model

This section introduces the factors that most affect model quality and explains how to train high-quality instance segmentation models.

Ensure Image Quality

  1. Avoid overexposure, dim lighting, color distortion, blur, occlusion, etc. These conditions cause the loss of features that the deep learning model relies on and therefore degrade the training result (a minimal image-check script is sketched after this list).

    (Example images: overexposed, dim lighting, color distortion, blur, occlusion)
  2. Ensure that the background, perspective, and camera height during image capturing are consistent with those of the actual application. Any inconsistency reduces the model's effectiveness in practice, and in severe cases the data must be re-collected, so please confirm the conditions of the actual application in advance.

    (Example images: inconsistent background, mismatched field of view, mismatched camera height)
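
As a rough pre-check (not a product feature), the following minimal sketch flags images that may be overexposed, too dim, or blurred before they are added to a dataset. It assumes OpenCV and a local folder of captured images; the folder path and thresholds are illustrative only and should be tuned to the actual camera and scene.

    # Sketch: flag potentially overexposed, dim, or blurred images before
    # adding them to a dataset. Assumes OpenCV (cv2); the folder path and
    # thresholds are illustrative, not values prescribed by the product.
    import glob
    import cv2

    def check_image(path, dark=40, bright=215, blur_var=100.0):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        issues = []
        mean_brightness = gray.mean()
        if mean_brightness < dark:
            issues.append("too dim")
        elif mean_brightness > bright:
            issues.append("possibly overexposed")
        # Variance of the Laplacian is a common sharpness proxy: low values suggest blur.
        if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_var:
            issues.append("possibly blurred")
        return issues

    for path in glob.glob("captured_images/*.png"):  # hypothetical folder
        problems = check_image(path)
        if problems:
            print(path, "->", ", ".join(problems))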

Ensure Data Quality

The Instance Segmentation module obtains a model by learning the features of existing images and applies what is learned to the actual application. Therefore, to train a high-quality model, the conditions of the collected and selected dataset must be consistent with those of the actual applications.

Collect Data

The dataset should cover the various placement conditions in proper proportion. For example, if materials arrive both horizontally and vertically in actual production but only data of horizontally placed materials are collected for training, the classification of vertically placed materials cannot be guaranteed. Therefore, when collecting data, consider the various conditions of the actual application, including the following:

  • Ensure that the collected dataset includes all possible object placement orientations in actual applications.

  • Ensure that the collected dataset includes all possible object positions in actual applications.

  • Ensure that the collected dataset includes all possible positional relationships between objects in actual applications.

If any of these three aspects is missing from the dataset, the deep learning model cannot learn the corresponding features and therefore cannot recognize the objects correctly. A dataset with sufficient, varied samples reduces such errors (a rough coverage-check sketch follows the example images below).

Object placement orientations

(Example image: objects placed in different orientations)

Object positions

(Example images: objects at different positions and in different layers)

Positional relationships between objects

(Example images: different positional relationships between objects)
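
As a rough aid for checking the three points above (not a product feature), the sketch below assumes each collected image has a row in a hypothetical metadata CSV recording its placement orientation, position, and positional relationship, and tallies how often each condition occurs so that missing conditions stand out.

    # Sketch: tally dataset coverage of placement conditions from a
    # hypothetical metadata CSV with columns:
    # filename, orientation, position, relationship.
    import csv
    from collections import Counter

    counters = {"orientation": Counter(), "position": Counter(), "relationship": Counter()}

    with open("dataset_metadata.csv", newline="") as f:  # hypothetical file
        for row in csv.DictReader(f):
            for key in counters:
                counters[key][row[key]] += 1

    for key, counter in counters.items():
        print(key)
        for value, count in counter.most_common():
            print(f"  {value}: {count} images")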

Data Collection Examples

  1. A metal piece project involves objects of a single class, and thus 50 images were collected. Object placement conditions of lying down and standing on the side need to be considered. Object positions at the bin center, edges, corners, and at different heights need to be considered. Object positional relationships of overlapping and parallel arrangement need to be considered. Samples of the collected images are as follows:

    (Example images: metal part placement conditions and poses)
  2. A grocery project involves objects of seven classes that arrive mixed and therefore require classification. Both single-class objects placed in different orientations and mixed objects of multiple classes need to be captured to cover the object features fully. Number of images of single-class objects = 5 × number of object classes; number of images of mixed multi-class objects = 20 × number of object classes (see the worked count example after this list). The objects may come lying flat, standing on their sides, or reclining, so images covering all faces of the objects are needed. The objects may be in the center, on the edges, or in the corners of the bins, and they may be placed in parallel or fitted closely together. Samples of the collected images are as follows:

    • Placed alone

      (Example images: single-class objects in different positions)
    • Mixedly placed

      (Example images: mixed multi-class objects in different positions)
  3. A track shoe project involves track shoes of many models, so the number of images captured was 30 multiplied by the number of models. The track shoes only face up, so only the facing-up condition needs to be considered. They may lie at different heights under the camera. In addition, they are arranged regularly and close together, so the closely fitted condition needs to be considered. Samples of the collected images are as follows:

    (Example image: track shoes in different layers)
  4. A metal piece project involves metal pieces presented in one layer only, and thus, only 50 images were captured. The metal pieces only face up. They are in the center, edges, and corners of the bin. In addition, they may be fitted closely together. Samples of the collected images are as follows:

    (Example image: metal pieces in different positions)
  5. A metal piece project involves metal pieces neatly placed in multiple layers, and thus 30 images were collected. The metal pieces only face up. They are in the center, on the edges, and in the corners of the bin, and they lie at different heights under the camera. In addition, they may be fitted closely together. Samples of the collected images are as follows:

    (Example image: neatly stacked metal pieces in different layers and positions)
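
To make the image-count rule of thumb in example 2 concrete, the short calculation below plugs in that project's seven object classes; it simply restates the formulas given above.

    # Worked example of the image-count rule of thumb from example 2:
    # single-class images = 5 x number of classes,
    # mixed-class images  = 20 x number of classes.
    num_classes = 7
    single_class_images = 5 * num_classes   # 35 images of single-class objects
    mixed_class_images = 20 * num_classes   # 140 images of mixed objects
    print(single_class_images, mixed_class_images)  # 35 140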

Select the Appropriate Data

  1. Control dataset image quantities

    For the first-time model building of the Instance Segmentation module, capturing 30–50 images is recommended. More images are not always better: adding a large number of low-quality images in the early stage does not help later model improvement and only lengthens the training time.

  2. Collect representative data

    Image capturing should consider all the conditions in terms of illumination, color, size, etc. of the objects to be recognized.

    • Lighting: Project sites usually have environmental lighting changes, and the data should contain images with different lighting conditions.

    • Color: Objects may come in different colors, and the data should contain images of objects of all the colors.

    • Size: Objects may come in different sizes, and the data should contain images of objects of all existing sizes.

      If the on-site objects may appear rotated, scaled, etc. in the images and the corresponding images cannot be collected, the dataset can be supplemented by adjusting the data augmentation training parameters so that all on-site conditions are covered.
  3. Balance data proportion

    The numbers of images of different object classes in the dataset should be proportioned according to the actual project; otherwise, the training result will be affected. Avoid cases where, for example, 20 images are of one object class but only 3 are of another (a simple per-class count script is sketched after this list).

  4. Images should be consistent with the application site

    The factors that need to be consistent include lighting conditions, object features, background, and field of view.
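
To check the class proportion mentioned in step 3, the sketch below counts labeled instances per class in a COCO-style annotation file. The file name and format are assumptions for illustration; Mech-DLK manages labels in its own project format.

    # Sketch: count labeled instances per class in a COCO-style annotation
    # file to spot imbalanced classes. The file name is hypothetical.
    import json
    from collections import Counter

    with open("annotations.json") as f:
        coco = json.load(f)

    id_to_name = {c["id"]: c["name"] for c in coco["categories"]}
    counts = Counter(id_to_name[a["category_id"]] for a in coco["annotations"])

    total = sum(counts.values())
    for name, count in counts.most_common():
        print(f"{name}: {count} instances ({count / total:.0%})")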

Ensure Labeling Quality

Determine the Labeling Method

  1. Label the upper surfaces' contours: This method is suitable for regular objects that are laid flat, such as cartons, medicine boxes, rectangular workpieces, etc. For these objects, the pick points are calculated from the upper-surface contours, and the user only needs to make rectangular selections on the images (a rough centroid illustration is sketched after this list).

    (Example image: labeling the upper surface)
  2. Label the entire objects' contours: This method is suitable for sacks, various types of workpieces, etc., for which labeling the full object contour is the general approach.

    (Example image: labeling the outer contour)
  3. Special cases: for example, when the recognition result needs to conform to how the grippers work.

    • When the suction cup must fit the bottle mouth exactly (high precision is required), only the bottle mouth contours need to be labeled.

      (Example image: labeling the bottle mouth)
    • The task of rotor picking involves recognizing rotor orientations. Only the middle parts, whose orientations are clear, should be labeled; the thin rods at both ends should not be labeled.

      (Example image: labeling the middle part of the rotor)
    • When the suction position must be on the middle part of the metal piece, only the middle parts are labeled; the ends do not need to be labeled.

      (Example image: labeling the middle part of the metal piece)
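
Pick-point calculation is performed by the vision software, not by the user, but as a rough illustration of why the labeled contour matters, the sketch below computes the centroid of a labeled polygon (for example, an upper-surface contour) with OpenCV. The polygon coordinates are made up.

    # Sketch: centroid of a labeled polygon (e.g., an upper-surface contour).
    # The coordinates are made up; the actual pick-point calculation is done
    # by the vision software, not by this snippet.
    import numpy as np
    import cv2

    polygon = np.array([[120, 80], [300, 85], [305, 210], [118, 205]], dtype=np.int32)
    m = cv2.moments(polygon)
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    print(f"centroid: ({cx:.1f}, {cy:.1f})")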

Check Labeling Quality

The labeling quality should be ensured in terms of completeness, correctness, consistency, and accuracy (a minimal spot-check script is sketched after this list):

  1. Completeness: Label all objects that meet the rules, and avoid missing any objects or object parts.

    (Example image: missed contours)
  2. Correctness: Make sure that each object corresponds correctly to the label it belongs to, and avoid situations where the object does not match the label.

    (Example image: label names not corresponding to objects)
  3. Consistency: All data should follow the same labeling rules. For example, if a labeling rule stipulates that only objects that are over 85% exposed in the images be labeled, then all objects that meet the rule should be labeled. Please avoid situations where one object is labeled but another similar object is not.

    (Example image: inconsistent contour labeling)
  4. Accuracy: Make the region selections as fine as possible so that the selected contours fit the actual object contours; avoid roughly covering the objects with coarse, oversized selections or omitting object parts.

    (Example image: incomplete or oversized contours)
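
For a quick automated spot check of completeness and correctness (consistency and accuracy still require visual review), the sketch below validates a COCO-style annotation export: every image should have at least one annotation, and every annotation should reference a known category. The file name and format are assumptions for illustration.

    # Sketch: spot-check labeling completeness and correctness in a
    # COCO-style annotation file. The file name is hypothetical; contour
    # consistency and accuracy still need to be reviewed visually.
    import json

    with open("annotations.json") as f:
        coco = json.load(f)

    category_ids = {c["id"] for c in coco["categories"]}
    annotated_images = {a["image_id"] for a in coco["annotations"]}

    # Completeness: images without any labeled object.
    for img in coco["images"]:
        if img["id"] not in annotated_images:
            print("No labels in image:", img["file_name"])

    # Correctness: annotations that reference an unknown category.
    for ann in coco["annotations"]:
        if ann["category_id"] not in category_ids:
            print("Unknown category in annotation:", ann["id"])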
