Object-Bin Segmentation

You are currently viewing the documentation for a pre-release version (2.2.0). To access documentation for other versions, click the "Switch Version" button located in the upper-right corner of the page.

■ If you're unsure about the version of the product you are using, please contact Mech-Mind Technical Support for assistance.

Function Description

Based on the Object-Bin Segmentation model package, this step segments workpieces and bins from input depth and color images, outputs workpiece and bin masks, and provides visualization results.

Usage Scenario

This step is suitable for scenarios where workpieces and bins need to be effectively separated. It is generally preceded by camera-capture steps and followed by point-cloud extraction steps.

Go to Download Center to obtain the Object-Bin Segmentation deep-learning model package.

Input and Output

Input

Input Port          Data Type    Description
Camera Depth Image  Image/Depth  Original depth image of objects.
Camera Color Image  Image/Color  Original color image of objects.

Output

Output Port           Data Type         Description
Visualization Output  Image/Color       Visualization result.
Workpiece Present     Bool              Workpiece detection result for the input image: true indicates workpieces are present, false indicates none.
Workpiece Mask Image  Image/Color/Mask  Workpiece mask image obtained by segmentation.
Bin Mask Image        Image/Color/Mask  Bin mask image obtained by segmentation.
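To illustrate how the mask outputs feed the downstream point-cloud extraction step, here is a minimal sketch of applying the Workpiece Mask Image to a depth image so that only workpiece pixels remain. The function name and the nested-list image representation are illustrative assumptions, not part of the Mech-Vision API.

```python
# Sketch: keep only depth values that fall inside the workpiece mask,
# so a later step extracts point clouds from workpiece pixels alone.
# apply_mask and the nested-list image layout are illustrative only.

def apply_mask(depth_image, mask):
    """Zero out depth values that fall outside the workpiece mask.

    depth_image: H x W nested list of depth values (e.g., millimeters).
    mask:        H x W nested list of 0/1 values from the segmentation step.
    """
    return [
        [d if m else 0.0 for d, m in zip(depth_row, mask_row)]
        for depth_row, mask_row in zip(depth_image, mask)
    ]
```

For example, `apply_mask([[812.0, 815.0]], [[1, 0]])` keeps the first depth value and zeroes the second.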

System Requirements

To use this step, the following system requirements must be met.

  • CPU: Must support AVX2 instruction set, and meet either of the following conditions:

    • Without a discrete GPU: Intel i5-12400 or above.

    • With a discrete GPU: Intel i7-6700 or above, and GPU no lower than GeForce GTX 1660.

    The feature has been fully tested on Intel CPUs and has not yet been tested on AMD CPUs. Intel CPUs are recommended.

  • GPU: Use GeForce GTX 1660 or above (if a discrete GPU is installed).
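On Linux, AVX2 support can be verified from the CPU flags reported by /proc/cpuinfo. The helper below is a hypothetical convenience, not part of the product; on Windows, consult the CPU vendor's specifications or a tool such as CPU-Z instead.

```python
# Illustrative check for AVX2 support by parsing /proc/cpuinfo-style text
# (Linux only). The helper name is ours, not part of Mech-Vision.

def has_avx2(cpuinfo_text):
    """Return True if the 'flags' line of a /proc/cpuinfo dump lists avx2."""
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            _, _, flags = line.partition(":")
            return "avx2" in flags.split()
    return False

# On a real machine:
# with open("/proc/cpuinfo") as f:
#     print(has_avx2(f.read()))
```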

Parameter Description

Model Package Settings

Model Manager Tool

Parameter description: This parameter is used to open the deep learning model package management tool and import the deep learning model package. The model package file is a “.dlkpack” or “.dlkpackC” file exported from Mech-DLK.

Tuning instruction: Please refer to Deep Learning Model Package Management Tool for the usage.

Model Name

Parameter description: This parameter is used to select the model package that has been imported for this Step.

Tuning instruction: Once you have imported the deep learning model package, you can select the corresponding model name in the drop-down list.

DI Algo Type Translated String

Parameter description: Once a Model Name is selected, the DI Algo Type Translated String will be filled automatically.

GPU ID

Parameter description: This parameter is used to select the device ID of the GPU that will be used for the inference.

Tuning instruction: Once you have selected the model name, you can select the GPU ID in the drop-down list of this parameter.

Pre-Process

ROI Path

Parameter description: This parameter is used to set or modify the ROI.

Tuning instruction: Once the deep learning model is imported, a default ROI will be applied. If you need to edit the ROI, click Open the editor. Edit the ROI in the pop-up Set ROI window, and fill in the ROI name.

Before inference, check that the ROI set here is consistent with the one set in Mech-DLK; otherwise, the recognition result may be affected.

During inference, the ROI set during model training (the default ROI) is usually used. If the position of the object in the camera's field of view changes, adjust the ROI accordingly.

To restore the default ROI, delete the ROI file name below the Open the editor button.
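Conceptually, an ROI restricts inference to a rectangular region of the input image. The sketch below assumes an (x, y, width, height) layout for illustration; in Mech-Vision the ROI is configured in the Set ROI window, not in code.

```python
# Conceptual sketch of what an ROI does: crop the input image to a
# rectangle before inference. The (x, y, width, height) layout is an
# assumption for illustration.

def crop_to_roi(image, roi):
    """Crop an H x W image (nested lists) to roi = (x, y, width, height)."""
    x, y, w, h = roi
    return [row[x:x + w] for row in image[y:y + h]]
```

For example, cropping a 4 x 4 image to `(1, 1, 2, 2)` returns the central 2 x 2 block.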

Post-Processing

Parameter Description

Morphological Transformation

Description: When enabled, morphological processing is applied to segmentation results of workpieces and bins.

Default value: Disabled.

Morphological Transformation Type

Description: Used to select the morphological post-processing method applied to masks.

Value list: Dilation, Erosion

  • Dilation: Enlarges the deep-learning mask area. When the mask is smaller than the actual workpiece or bin area, point clouds extracted with the mask may be incomplete (especially at the edges). In this case, use dilation to enlarge the mask and avoid missing points in the extracted point cloud.

  • Erosion: Shrinks the deep-learning mask area. When the mask covers more than the actual workpiece or bin region, or includes background noise, use erosion to shrink the mask and keep non-target regions out of the extracted point cloud.

Kernel Size

Description: Used to set kernel size of morphological transformation. Larger kernels produce stronger effects.

Default value: 3 px

Adjustment recommendation: Adjust kernel size according to actual requirements.
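To make the dilation and erosion behavior concrete, here is a minimal pure-Python sketch of binary morphology with a square kernel. Production pipelines typically use an optimized library (e.g., OpenCV's cv2.dilate and cv2.erode); border handling here (clipping the window at image edges) may differ from whatever library the step uses internally.

```python
def morph(mask, kernel_size=3, op="dilate"):
    """Apply binary dilation or erosion with a square kernel.

    mask: H x W nested list of 0/1 values.
    op:   "dilate" grows the mask, "erode" shrinks it.
    The window is clipped at image borders (library border modes may differ).
    """
    r = kernel_size // 2
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [
                mask[yy][xx]
                for yy in range(max(0, y - r), min(h, y + r + 1))
                for xx in range(max(0, x - r), min(w, x + r + 1))
            ]
            # Dilation: 1 if any pixel under the kernel is 1.
            # Erosion:  1 only if every pixel under the kernel is 1.
            out[y][x] = max(window) if op == "dilate" else min(window)
    return out
```

With the default kernel size of 3, dilating a single-pixel mask produces a 3 x 3 block, and eroding that block recovers the single pixel; larger kernels grow or shrink the mask more aggressively.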

Visualization Settings

Parameter Description

Draw Segmentation Mask on Image

Description: Overlays and displays segmentation masks on images.

Adjustment instruction: Select this option to enable visualization. Segmentation masks are displayed directly on images. The effect is shown below:

(Figure: visualization output with the segmentation mask overlaid on the image)
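The overlay is conceptually a per-pixel alpha blend of a mask color into the original image. The sketch below uses illustrative names and a nested-list image representation; the step's actual rendering (and its mask colors) is internal to Mech-Vision.

```python
# Sketch: blend a solid color into an RGB image wherever the mask is 1,
# which is how a segmentation overlay is typically produced.

def overlay_mask(image, mask, color=(0, 255, 0), alpha=0.5):
    """Return a copy of image with `color` alpha-blended over masked pixels.

    image: H x W nested list of (r, g, b) tuples.
    mask:  H x W nested list of 0/1 values.
    alpha: weight of the mask color (0 = invisible, 1 = opaque).
    """
    out = []
    for img_row, mask_row in zip(image, mask):
        out_row = []
        for pixel, m in zip(img_row, mask_row):
            if m:
                pixel = tuple(
                    int((1 - alpha) * c + alpha * k)
                    for c, k in zip(pixel, color)
                )
            out_row.append(pixel)
        out.append(out_row)
    return out
```

Unmasked pixels pass through unchanged; masked pixels shift toward the overlay color in proportion to alpha.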
