Use the Instance Segmentation Module

Please click here to download the wooden block image dataset used in this example project provided by Mech-DLK. In this topic, we will use the Instance Segmentation module to train a model that segments different types of wooden blocks and exports the corresponding class labels.

You can also use your own data. The overall workflow is the same; only the labeling step differs.

Workflow

  1. Create a new project and add the Instance Segmentation module: Click New Project in the interface, name the project, and select a directory to save the project. Click the create icon in the upper right corner of the Modules panel and add the Instance Segmentation module.

  2. Import the image data of wooden blocks: Unzip the downloaded data file. Click the Import/Export button in the upper left corner, select Import Folder, and import the image folder. The wooden blocks in the images are of four different shapes and colors.

    When you select Import Dataset, you can import datasets in the DLKDB format (.dlkdb) and the COCO format (a minimal COCO-format sketch follows the workflow below). Click here to download the example dataset.
  3. Select an ROI: Click the ROI Tool button and adjust the frame to select the bin containing the wooden blocks as the ROI. Then, click the OK button in the lower right corner of the ROI to save the setting. Setting an ROI avoids interference from the background and reduces processing time (a conceptual cropping sketch follows the workflow below).

  4. Create labels: Select Labeling and click the create button in the Classes panel to create labels based on the types or features of the objects. In this example, the labels are named after the different shapes of the wooden blocks. You can also name the labels according to different colors.

    You can right-click a label and select Merge Into to merge the data of the current label into another label. If you perform the Merge Into operation after training the model, it is recommended that you train the model again.
  5. Label images: Right-click the labeling tool button and select a suitable tool to label the images. In this example project, the contours of the wooden blocks need to be outlined for segmentation. In addition, make sure that wooden blocks of different shapes are labeled with the correct classes. Click here to view how to use labeling tools.

  6. Split the dataset into the training set and validation set: By default, 80% of the images in the dataset are allocated to the training set and the remaining 20% to the validation set. You can drag the slider to adjust the proportion. Make sure that both the training set and the validation set include objects of all classes to be segmented; if not, right-click the image name and move the image to the other set (a split-check sketch follows the workflow below).

  7. Train the model: Keep the default training parameter settings and click Train to start training the model.

  8. Validate the model: After the training is completed, click Validate to validate the model and check the results.

    After you validate a model, you can import new image data to the current module and use the pre-trained labeling feature to perform auto-labeling based on this model. For more information, see Pre-trained labeling.

  9. Export the model: Click Export. Then, set Max num of inference objects in the pop-up dialog box, click Export, and select a directory to save the exported model.

    By default, at most 25 objects are inferred in a single round of inference.

    The exported model can be used in Mech-Vision and Mech-DLK SDK. Click here to view the details.
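
If you prepare your own data for the Import Dataset option mentioned in step 2, a COCO instance-segmentation dataset is described by a single JSON file listing images, categories, and polygon annotations. The Python sketch below builds a minimal file of that structure; the file name annotations.json, the image and category names, and the coordinates are illustrative assumptions, not contents of the example dataset.

    # Minimal sketch of a COCO-style instance-segmentation annotation file.
    # Names and coordinates are illustrative assumptions, not part of the
    # example dataset shipped with Mech-DLK.
    import json

    coco = {
        "images": [
            {"id": 1, "file_name": "blocks_0001.png", "width": 1280, "height": 1024},
        ],
        "categories": [
            {"id": 1, "name": "cuboid"},
            {"id": 2, "name": "cylinder"},
        ],
        "annotations": [
            {
                "id": 1,
                "image_id": 1,
                "category_id": 1,
                # Polygon contour of one wooden block: [x1, y1, x2, y2, ...]
                "segmentation": [[310.0, 420.0, 505.0, 418.0, 508.0, 612.0, 312.0, 615.0]],
                "bbox": [310.0, 418.0, 198.0, 197.0],  # [x, y, width, height]
                "area": 198.0 * 197.0,
                "iscrowd": 0,
            },
        ],
    }

    with open("annotations.json", "w") as f:
        json.dump(coco, f, indent=2)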
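
The ROI in step 3 is applied by Mech-DLK itself, but conceptually it is equivalent to cropping each image to a fixed rectangle before processing, which is why it reduces background interference and processing time. The OpenCV sketch below only illustrates that idea; the coordinates and file names are assumptions, and this code is not part of the product workflow.

    # Conceptual illustration of applying a rectangular ROI before processing.
    # Mech-DLK applies the ROI you draw internally; this standalone sketch only
    # shows why an ROI cuts background interference and processing time.
    # The ROI coordinates and file names are assumptions for illustration.
    import cv2

    ROI_X, ROI_Y, ROI_W, ROI_H = 200, 150, 900, 700  # assumed bin area in pixels

    image = cv2.imread("blocks_0001.png")  # assumes this image exists locally
    roi = image[ROI_Y:ROI_Y + ROI_H, ROI_X:ROI_X + ROI_W]

    print(f"full image: {image.shape[1]}x{image.shape[0]} px, "
          f"ROI: {roi.shape[1]}x{roi.shape[0]} px")
    cv2.imwrite("blocks_0001_roi.png", roi)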
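
Step 6 requires that both the training set and the validation set contain objects of every class. The sketch below shows one way to reason about an 80/20 split and such a coverage check outside the software; the mapping from image names to class names is an illustrative assumption.

    # Sketch of an 80/20 train/validation split with a check that every class
    # appears in both sets, mirroring the requirement in step 6.
    # The image-to-classes mapping below is an illustrative assumption.
    import random

    image_classes = {
        "blocks_0001.png": {"cuboid", "cylinder"},
        "blocks_0002.png": {"triangle"},
        "blocks_0003.png": {"hexagon", "cuboid"},
        "blocks_0004.png": {"cylinder", "triangle", "hexagon"},
        "blocks_0005.png": {"cuboid", "hexagon"},
    }

    names = sorted(image_classes)
    random.shuffle(names)
    split = int(0.8 * len(names))
    train, val = names[:split], names[split:]

    def classes_in(subset):
        return set().union(*(image_classes[n] for n in subset))

    all_classes = classes_in(names)
    missing = (all_classes - classes_in(train)) | (all_classes - classes_in(val))
    if missing:
        print("Move images so these classes appear in both sets:", sorted(missing))
    else:
        print("Both sets cover all classes:", sorted(all_classes))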
