Train a High-Quality Model


This section introduces the factors that most affect the model quality and how to train a high-quality object detection model.

Ensure Image Quality

  1. Avoid overexposure, dim lighting, color distortion, blur, occlusion, etc. Such conditions cause the loss of features that the deep learning model relies on, which will degrade the training result.

    (Example images: overexposed, dim lighting, blurred, occluded)
  2. Ensure that the background, perspective, and height of the image-capturing process are consistent with the actual application. Any inconsistency can reduce the effect of deep learning in practical applications. In severe cases, data must be re-collected. Please confirm the conditions of the actual application in advance.

    (Example images: inconsistent background, mismatched field of view, mismatched capturing height)
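As a rough illustration of the capture problems above, images can be screened automatically before training. The sketch below uses plain Python with illustrative thresholds (these are not Mech-DLK settings) to flag overexposed, underexposed, and possibly blurred grayscale images:

```python
from statistics import mean, pvariance

def quality_flags(gray, bright_hi=230.0, bright_lo=25.0, sharp_min=10.0):
    """Flag common capture problems in a grayscale image (2D list, 0-255).

    The thresholds are illustrative placeholders; tune them on real images.
    """
    pixels = [p for row in gray for p in row]
    brightness = mean(pixels)

    # Variance of a 4-neighbour Laplacian over interior pixels:
    # low values suggest a blurred (low-detail) image.
    h, w = len(gray), len(gray[0])
    lap = [
        4 * gray[y][x] - gray[y - 1][x] - gray[y + 1][x]
        - gray[y][x - 1] - gray[y][x + 1]
        for y in range(1, h - 1)
        for x in range(1, w - 1)
    ]
    sharpness = pvariance(lap) if lap else 0.0

    flags = []
    if brightness > bright_hi:
        flags.append("overexposed")
    if brightness < bright_lo:
        flags.append("underexposed")
    if sharpness < sharp_min:
        flags.append("possibly blurred")
    return flags
```

A uniform gray image, for example, is flagged as possibly blurred, while a high-contrast checkerboard passes all checks.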

Ensure Data Quality

The Object Detection module obtains a model by learning the features of existing images and applies what is learned to the actual application. Therefore, to train a high-quality model, the conditions of the collected and selected data must be consistent with those of the actual applications.

Collect Data

Various placement conditions need to be properly allocated. For example, if there are horizontal and vertical incoming materials in actual production, but only the data of horizontal incoming materials are collected for training, the recognition effect of vertical incoming materials cannot be guaranteed. Therefore, when collecting data, it is necessary to consider various conditions of the actual application, including the following:

  • The features presented given different object placement orientations.

  • The features presented given different object placement positions.

  • The features presented given different positional relationships between objects.

If some situations are missing from the datasets, the deep learning model cannot adequately learn the corresponding features, and it will therefore be unable to make reliable recognitions under such conditions. In this case, data on such conditions must be collected and added to reduce the errors.

Orientations

(Example images: objects in different orientations)

Object positions

(Example images: objects in different positions and in different layers)

Positional relationships between objects

(Example images: different positional relationships between objects)
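One way to verify this kind of coverage is to tag each collected image with the conditions it shows and check that no required condition is missing from the dataset. A minimal sketch, assuming hypothetical condition tags recorded during collection:

```python
from collections import Counter

# Hypothetical condition tags for the horizontal/vertical incoming
# materials example; adapt the set to the actual project.
REQUIRED_CONDITIONS = {"horizontal", "vertical", "scattered", "overlapped"}

def coverage_report(image_tags):
    """Count how often each condition appears and list missing conditions.

    `image_tags` maps an image name to the set of conditions it covers.
    """
    counts = Counter(tag for tags in image_tags.values() for tag in tags)
    missing = sorted(REQUIRED_CONDITIONS - counts.keys())
    return counts, missing

counts, missing = coverage_report({
    "img_001.png": {"horizontal", "scattered"},
    "img_002.png": {"horizontal", "overlapped"},
})
# "vertical" never appears, so recognition of vertical incoming
# materials cannot be guaranteed -- collect data for that condition.
```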

Data Collection Examples

  1. An object inspection project: The incoming objects are rotors scattered randomly. The project requires accurate detection of all rotor positions. Thirty images were collected.

    • Positions: In the actual application, the rotors may be in any position in the bin, and the quantity will decrease after picking each time.

    • Positional relationships: The rotor may come scattered, neatly placed, or overlapped.

      (Example images: rotors in varied positions and positional relationships)
  2. A steel bar counting project: The incoming objects are steel bars in bundles. The project requires accurate counting of steel bars. Twenty images were collected.

    • Steel bars have relatively simple features, so only the variations of object positions need to be considered. Images in which steel bars are in any position in the camera’s field of view were captured.

      (Example images: steel bars in varied positions)

Select the Appropriate Data

  1. Control the image quantity of datasets

    For the first-time model building of the Object Detection module, capturing 20 images is recommended. More images are not necessarily better: adding a large number of unsuitable images in the early stage does not help later model improvement and makes training take longer.

  2. Collect representative data

    Image capturing should consider all the conditions in terms of illumination, color, size, etc. of the objects to be recognized.

    • Lighting: Project sites usually have environmental lighting changes, and the data should contain images with different lighting conditions.

    • Color: Objects may come in different colors, and the data should contain images of objects of all the colors.

    • Size: Objects may come in different sizes, and the data should contain images of objects of all existing sizes.

    If the on-site objects may appear rotated, scaled, etc. in the images, and the corresponding images cannot be collected, the data can be supplemented by adjusting the data augmentation training parameters, so that all on-site conditions are covered by the datasets.
  3. Balance data proportion

    The numbers of images of different conditions/object classes in the datasets should be proportioned according to the actual project; otherwise, the training result will be affected. Avoid cases where, for example, 20 images are of one object but only 3 are of another, or 40 images show neatly arranged objects but only 5 show scattered ones.

  4. Images should be consistent with the application site

    The factors that need to be consistent include lighting conditions, object features, background, and field of view.
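To illustrate the data-augmentation idea mentioned above: each training image can be paired with a randomly sampled rotation and scale, so that on-site conditions absent from the collected images are still seen during training. The parameter names and ranges below are illustrative placeholders, not actual Mech-DLK settings:

```python
import random

# Illustrative augmentation ranges; Mech-DLK exposes comparable
# settings as training parameters rather than as code.
AUG_RANGES = {
    "rotation_deg": (-30.0, 30.0),  # objects may appear rotated on site
    "scale": (0.8, 1.2),            # objects may appear larger or smaller
}

def sample_augmentation(rng):
    """Draw one random augmentation setting for a training image."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in AUG_RANGES.items()}

rng = random.Random(0)  # seeded for reproducibility
params = sample_augmentation(rng)
```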
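The balance guideline above can also be checked automatically. A minimal sketch that warns when one class has far more images than another, with an illustrative ratio threshold:

```python
from collections import Counter

def imbalance_warnings(labels, max_ratio=3.0):
    """Warn when one class outweighs another by more than `max_ratio`.

    `labels` holds one class name per image; `max_ratio` is an
    illustrative threshold, not a Mech-DLK setting.
    """
    counts = Counter(labels)
    most = counts.most_common(1)[0]
    least = min(counts.items(), key=lambda kv: kv[1])
    if most[1] > max_ratio * least[1]:
        return [f"class '{most[0]}' ({most[1]} images) outweighs "
                f"'{least[0]}' ({least[1]} images)"]
    return []

# 20 images of one object versus 3 of another triggers a warning:
warnings = imbalance_warnings(["rotor"] * 20 + ["bolt"] * 3)
```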

Ensure Labeling Quality

Labeling quality should be ensured in terms of completeness and accuracy.

  1. Completeness: Label all objects that meet the rules, and avoid missing any objects or object parts.

    (Example image: labels with missed objects)
  2. Accuracy: Each rectangular selection should contain the entire object. Please avoid missing any object parts, or including excess regions outside the object contours.

    (Example image: incomplete selection and selection with excess regions)
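Some labeling errors can be caught programmatically before training. The sketch below runs basic sanity checks on one rectangular selection; it catches empty or out-of-bounds boxes, while incomplete or over-inclusive selections still require visual review. All names are illustrative:

```python
def label_issues(box, image_size):
    """Basic sanity checks for one rectangular label.

    `box` is (x_min, y_min, x_max, y_max) in pixels;
    `image_size` is (width, height).
    """
    x_min, y_min, x_max, y_max = box
    w, h = image_size
    issues = []
    # A selection with zero or negative extent labels nothing.
    if x_max <= x_min or y_max <= y_min:
        issues.append("empty selection")
    # A selection outside the image cannot match any object region.
    if x_min < 0 or y_min < 0 or x_max > w or y_max > h:
        issues.append("selection exceeds image bounds")
    return issues
```

For example, a valid box inside a 640x480 image yields no issues, while a box with swapped x coordinates is reported as an empty selection.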
