Use the Unsupervised Segmentation Module


Taking images of bottle bottoms (download) as an example, this topic introduces how to use the Unsupervised Segmentation module to determine whether an image contains defects.

You can also use your own data. The overall workflow is the same; only the labeling part differs.
  1. Create a new project and add the Unsupervised Segmentation module: After opening the software, click New Project, name the project, and select a directory to save it. Then, click the add-module icon in the upper-right corner and add the Unsupervised Segmentation module.

    If the image background may introduce interference, you can add an Object Detection module before the Unsupervised Segmentation module; if the objects in the images have different orientations, add a Fast Positioning module before it. See the Use Cascaded Modules section for detailed instructions.
  2. Import the image data of bottles: Unzip the downloaded data file. Click the Import/Export button in the upper-left corner, select Import Folder, and import the image data.


    If you use your own data, make sure the image quality is sufficient. Do not use images in which objects differ in shape, size, or position, or images with changing backgrounds; such variation greatly degrades model performance. Ensure that the OK images used for training differ from each other only slightly.

    [Figure: a correct example versus incorrect examples — objects of different sizes, objects of different shapes, and different image backgrounds]

    • When you select Import Dataset, you can only import datasets in the DLKDB format (.dlkdb), which are datasets exported from Mech-DLK.

    • You do not need NG images to train a model, but it is recommended to include some NG images in the validation set so that validation results better reflect model performance.
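When preparing your own dataset, the advice above boils down to keeping images consistent. As a hypothetical illustration (not part of Mech-DLK), the following sketch checks that all images in a dataset share one dominant size and flags any outliers by index:

```python
from collections import Counter

def check_image_sizes(sizes):
    """Given (width, height) pairs for a dataset, return the dominant
    size and the indices of images that deviate from it."""
    counts = Counter(sizes)
    dominant, _ = counts.most_common(1)[0]  # most frequent size wins
    outliers = [i for i, s in enumerate(sizes) if s != dominant]
    return dominant, outliers
```

For example, a dataset of four 640x480 images and one 800x600 image would report the last image as an outlier worth re-capturing or excluding.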

  3. Select an ROI: Click the ROI Tool button and adjust the frame to set an ROI that covers the target objects in the images. Then, click the OK button in the lower-right corner of the ROI to save the setting. Setting an ROI avoids interference from the background and reduces processing time; the ROI boundary should be as close to the outer contour of the object as possible.

    The same ROI setting will be applied to all images, so it is necessary to ensure that objects in all images are within the set ROI.
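Because the same ROI is applied to every image, conceptually each image is cropped with one fixed rectangle before processing. A minimal sketch of this idea, assuming an (x, y, width, height) rectangle and images represented as 2D lists of pixel rows (hypothetical helper names, not the Mech-DLK API):

```python
def crop_roi(image, roi):
    """Return the sub-image covered by the ROI (x, y, w, h)."""
    x, y, w, h = roi
    return [row[x:x + w] for row in image[y:y + h]]

def crop_all(images, roi):
    # The same ROI is applied to every image, which is why every
    # object must lie inside the ROI in every image.
    return [crop_roi(img, roi) for img in images]
```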
  4. Split the dataset into the training set and validation set: By default, 80% of the images in the dataset are assigned to the training set and the remaining 20% to the validation set. You can drag the slider to adjust the proportion.

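The default 80/20 split can be sketched as a simple shuffled partition. This is an illustrative sketch, not how Mech-DLK necessarily implements it internally:

```python
import random

def split_dataset(images, train_ratio=0.8, seed=0):
    """Shuffle the images and split them into training and
    validation sets at the given ratio (default 80/20)."""
    shuffled = images[:]
    random.Random(seed).shuffle(shuffled)  # fixed seed for repeatability
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

With 10 images and the default ratio, 8 go to training and 2 to validation.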
  5. Label images: Use the OK Label or NG Label tool on the toolbar to label images.

  6. Train the model: Keep the default training parameter settings and click Train to start training the model.

    If the training set contains images labeled as NG, these images will be automatically put into the validation set for validation during model training.
  7. Validate the model: After the training is completed, click Validate to validate the model and check the results.

    In the Validation tab, click Adjust thresholds, and in the pop-up dialog box, drag the vertical lines to adjust the thresholds. The green line adjusts the threshold for OK results, and the red line adjusts the threshold for NG results. After adjusting the thresholds, re-validate the model.
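The two thresholds partition each image's score into three outcomes. The sketch below illustrates one plausible convention (the function name, score direction, and the handling of the middle band are assumptions for illustration, not Mech-DLK's documented behavior):

```python
def judge(score, ok_threshold, ng_threshold):
    """Classify an anomaly score against the two thresholds.

    Assumed convention: scores at or below the OK threshold are
    judged OK, scores at or above the NG threshold are judged NG,
    and scores in between are flagged for manual review.
    """
    if score <= ok_threshold:
        return "OK"
    if score >= ng_threshold:
        return "NG"
    return "UNCERTAIN"
```

Under this convention, tightening the green (OK) threshold makes fewer images pass as OK, while lowering the red (NG) threshold makes more images fail as NG.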
  8. Export the model: Click Export and select a directory to save the trained model.


The exported model can be used in Mech-Vision and Mech-DLK SDK. Click here to view the details.
