Instance Segmentation

Function

Use the instance segmentation model package to run inference on the input image. The model package segments the contour of each target object and outputs its class label.

Applicable to scenarios that require accurate recognition and localization of individual objects, such as depalletizing, workpiece loading and unloading, and goods picking.

Input and Output

After the model package is imported into the Deep Learning Model Package Inference Step, the following input and output ports are displayed.

Input

Input port | Data type | Description
Image | Image/Color | Image input to this port is used for deep learning model package inference. This port is displayed when Input data type is set to 2D image.
Surface data | Surface | Surface data input to this port is used for deep learning model package inference. This port is displayed when Input data type is set to Surface data.

Output

Output port | Data type | Description
Visualize outputs | Image/Color | Visualized results.
Pixel Masks of Objects | Image/Color/Mask[] | Masks of detected target objects. Regions with non-zero pixel values represent the mask, and the mask contour is the contour of the target object. This port is displayed when Input data type is set to 2D image.
Object bounding box | Shape2D/Contour[] | Bounding boxes of detected target objects. This port is displayed when Input data type is set to 2D image.
Bounding box masks of objects | Image/Color/Mask[] | Square masks of object bounding boxes. Regions with non-zero pixel values represent the mask. This port is displayed when Input data type is set to 2D image.
Instance surface data | Surface[] | Surface data of detected target instances. This port is displayed when Input data type is set to Surface data.
Bounding box inner surface data | Surface[] | Rectangular surface data within the instance bounding box. This port is displayed when Input data type is set to Surface data.
Object confidence | Number[] | Confidence values of detected objects.
Object labels | String[] | Labels of detected objects.
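The mask and bounding-box outputs are closely related: an object's bounding box is the smallest rectangle enclosing the non-zero region of its pixel mask. As a minimal downstream sketch (not part of the Step itself — just NumPy applied to a toy mask array), the box can be recovered from a mask like this:

```python
import numpy as np

def mask_to_bbox(mask):
    """Return (x_min, y_min, x_max, y_max) of the non-zero mask region."""
    ys, xs = np.nonzero(mask)
    return int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())

# Toy 2D mask: non-zero pixel values mark one detected instance.
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:7] = 255

bbox = mask_to_bbox(mask)
print(bbox)  # (3, 2, 6, 4)
```

The same idea extends to the mask-array outputs of this Step: one bounding box per entry of the Pixel Masks of Objects list.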

Parameter Description

After an instance segmentation model package is imported into this Step, adjust the following parameters.

Model Package Settings

Parameter Description

Model manager tool

Parameter description: This parameter opens the deep learning model package management tool, which is used to import deep learning model packages. A model package is a .dlkpack file exported by Mech-DLK.
Tuning instruction: Refer to the Deep Learning Model Package Management Tool section of the Mech-DLK documentation for usage.

Model name

Parameter description: After a deep learning model package is imported, this parameter is used to select the model package used by this Step.
Tuning instruction: After importing a deep learning model package with the Deep Learning Model Package Management Tool, select the corresponding model package name from the drop-down list.

Release original model package after switching

Description: Controls whether the resources used by the original model package are released upon the switch.
Default setting: Selected.
Instruction: If selected, when the Step switches to another model package, the system immediately releases the resources of the original model package, even if it is still used by other Steps. If not selected, the system releases the resources of the original model package only when it is no longer used by any Step.

Model package type

Parameter description: Once a Model Name is selected, the Model Package Type will be filled automatically.

Input batch size

Description: The number of images processed during each inference.
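Batching simply groups the input images so that up to Input batch size images are processed per inference call. A hedged sketch of the grouping logic, assuming plain Python lists of images (illustrative only; the Step batches internally):

```python
import numpy as np

def make_batches(images, batch_size):
    """Group images into lists of at most batch_size items,
    one list per inference call."""
    return [images[i:i + batch_size] for i in range(0, len(images), batch_size)]

# Five toy images with a batch size of 2 give batches of 2, 2, and 1.
images = [np.zeros((4, 4), dtype=np.uint8) for _ in range(5)]
batches = make_batches(images, batch_size=2)
print([len(b) for b in batches])  # [2, 2, 1]
```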

GPU ID

Parameter description: This parameter is used to select the device ID of the GPU that will be used for the inference.
Tuning instruction: Once you have selected the model name, you can select the GPU ID in the drop-down list of this parameter.

Input data type

Description: This parameter specifies the type of input data; the corresponding input ports are displayed once it is set. 2D image and Surface data inputs are supported.

Preprocessing

Parameter Description

ROI path

Description: This parameter is used to set or modify the ROI of the input image.

Tuning instruction: Once the deep learning model is imported, a default ROI will be applied. If you need to edit the ROI, click the Open the editor button. Edit the ROI in the pop-up Set ROI window, and fill in the ROI name.

Instructions for setting the ROI: Hold down the left mouse button and drag to select an ROI, then click the left mouse button again to confirm. To re-select the ROI, hold down the left mouse button and drag again. The coordinates of the selected ROI are displayed in the "ROI Properties" section. Click OK to save and exit.

Before the inference, please check whether the ROI set here is consistent with the one set in Mech-DLK. If not, the recognition result may be affected.

During the inference, the ROI set during model training, i.e. the default ROI, is usually used. If the position of the object changes in the camera’s field of view, please adjust the ROI.

If you would like to use the default ROI again, please delete the ROI file name below the Open the editor button.
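An ROI restricts inference to a rectangular sub-region of the input image. A minimal sketch of what applying an (x, y, width, height) ROI amounts to, using NumPy slicing (illustrative only; the Step applies the ROI internally):

```python
import numpy as np

def crop_roi(image, roi):
    """Crop image to roi = (x, y, width, height), clipped to the image bounds."""
    x, y, w, h = roi
    img_h, img_w = image.shape[:2]
    x2, y2 = min(x + w, img_w), min(y + h, img_h)
    return image[y:y2, x:x2]

# A 10x10 toy image cropped to a 4-wide, 5-high region at (2, 3).
image = np.arange(100, dtype=np.uint8).reshape(10, 10)
roi_crop = crop_roi(image, (2, 3, 4, 5))
print(roi_crop.shape)  # (5, 4)
```

This also illustrates why the ROI set here should match the one used in Mech-DLK: a mismatched crop feeds the model a different view of the scene than it was trained on.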

Postprocessing

Parameter Description

Inference configuration

Description: Configures the inference settings for the instance segmentation model package inference. Click Open the editor to open the inference configuration window.

Instruction: Refer to Inference Configuration Tool for detailed parameter description.

When you use Mech-MSR 2.2.0 to open an instance segmentation project created in Mech-MSR 2.1.2 or earlier in which the dilation parameters were configured, you must reconfigure the morphological transformation parameters in the inference configuration tool. Otherwise, the original parameters will not take effect.
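A morphological transformation such as dilation grows each mask outward, which can compensate for slightly under-segmented contours. A NumPy-only sketch of binary dilation with a 3x3 cross structuring element (illustrative; the inference configuration tool performs this internally):

```python
import numpy as np

def dilate(mask, iterations=1):
    """Binary dilation with a 3x3 cross structuring element,
    implemented as shifted ORs along each axis."""
    out = mask.astype(bool)
    for _ in range(iterations):
        grown = out.copy()
        grown[1:, :] |= out[:-1, :]   # grow downward
        grown[:-1, :] |= out[1:, :]   # grow upward
        grown[:, 1:] |= out[:, :-1]   # grow rightward
        grown[:, :-1] |= out[:, 1:]   # grow leftward
        out = grown
    return out

# A single mask pixel grows into a 5-pixel cross after one iteration.
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(dilate(mask).sum())  # 5
```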

Class Display Mode

Description: Selects whether to display classes by name or by index in the output results.
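Displaying classes by name or by index is purely a presentation choice over the same predictions. A hedged sketch of the mapping, using a hypothetical label set (the names below are examples, not the model's actual classes):

```python
def format_labels(indices, class_names, by_name=True):
    """Render predicted class indices either as names or as raw indices."""
    if by_name:
        return [class_names[i] for i in indices]
    return [str(i) for i in indices]

class_names = ["carton", "sack", "tote"]  # hypothetical label set
print(format_labels([0, 2, 1], class_names))                 # ['carton', 'tote', 'sack']
print(format_labels([0, 2, 1], class_names, by_name=False))  # ['0', '2', '1']
```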

Visualization Settings

Parameter Description

Show obj bounding box

Description: Once enabled, the detection result will be displayed on the image.
Default value: Disabled
Instruction: Set the parameter according to the actual requirement.

Obj bounding box mode

Description: This parameter is used to specify the way to visualize the output results.
Value list: Show each instance, Show instances by class, Show instance center points.
Default setting: Show each instance
Instruction: Set the parameter according to the actual requirement. Refer to the tuning example for the corresponding result.

Customize font size

Description: This parameter determines whether to customize the font size in the visualized outputs. Once this option is selected, you should set the Font Size (0–10). The default value is 1.5.
Default value: Disabled.
Instruction: Set the parameter according to the actual requirement.

Tuning Examples

Obj Bounding Box Mode

Obj bounding box mode | Description | Illustration
Show each instance | Visualizes each instance in a unique color. | (instances sample)
Show instances by class | Visualizes instances by class; instances of the same class share the same color. | (classes sample)
Show instance center points | Visualizes instance center points, with the instance color related to the Confidence threshold. | (center point sample)
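An instance center point can be taken as the mean coordinate of the mask's non-zero pixels. A minimal NumPy sketch of that centroid calculation (illustrative only; the Step computes center points internally):

```python
import numpy as np

def mask_center(mask):
    """Instance center point: mean (x, y) of the mask's non-zero pixels."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

# A 3x3 square instance centered at pixel (4, 4).
mask = np.zeros((9, 9), dtype=np.uint8)
mask[3:6, 3:6] = 255
print(mask_center(mask))  # (4.0, 4.0)
```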
