Predict Pick Points V2


Function

This Step recognizes pickable objects based on 2D images and depth maps and outputs the corresponding pick points.

Usage Scenario

This Step is designed for piece picking in the logistics, supermarket, and cable industries. It follows the Scale Image in 2D ROI Step to obtain the scaled depth map, point cloud, and ROI.

Input and Output

(Figure: input and output of this Step)

Prerequisites

Requirement of Graphics Card

This Step requires an NVIDIA GTX 1650 Ti or higher graphics card.
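If you are unsure whether the installed graphics card meets this requirement, the sketch below (not part of the software) queries the GPU model and total memory through NVIDIA's nvidia-smi command-line tool. It assumes the NVIDIA driver, and therefore nvidia-smi, is installed on the computer.

    import subprocess

    # Minimal sketch: list the GPU model and total memory via nvidia-smi.
    # Assumes the NVIDIA driver is installed; this only helps verify the
    # hardware requirement and is not part of the software itself.
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.strip().splitlines():
        name, memory = [field.strip() for field in line.split(",")]
        print(f"GPU: {name}, total memory: {memory}")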

Instructions for Use

Before using this Step, please wait for the deep learning server to start. Once the server has started successfully, a message reading "Deep learning server started successfully at xxx" appears in the log panel, and you can then run the Step.

  • When running this Step for the first time, you should load a picking configuration folder.

  • When you run this Step for the first time, the deep learning model will be optimized according to the hardware type and the one-time optimization process takes about 10 to 30 minutes depending on the computer configuration. Please wait for a while. After the model is optimized, the execution time of the Step will be greatly reduced.

Parameter Description

Server

Server IP

Description: This parameter is used to set the IP address of the deep learning server.

Default value: 127.0.0.1

Tuning instruction: Please set the parameter according to the actual requirement.

Server Port (1–65535)

Description: This parameter is used to set the port number of the deep learning server.

Default value: 60054

Value range: 60000–65535

Tuning instruction: Please set the parameter according to the actual requirement.
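As a quick sanity check of the Server IP and Server Port settings, the sketch below (not part of the software) simply tests whether something is listening at the configured address. The IP address and port shown are the default values documented above; adjust them to match your own settings.

    import socket

    # Minimal sketch: test whether the deep learning server is reachable at
    # the configured IP address and port. This is only a TCP reachability
    # check, not an official API of the software.
    SERVER_IP = "127.0.0.1"   # default Server IP documented above
    SERVER_PORT = 60054       # default Server Port documented above

    try:
        with socket.create_connection((SERVER_IP, SERVER_PORT), timeout=3):
            print(f"A server is listening at {SERVER_IP}:{SERVER_PORT}.")
    except OSError as err:
        print(f"Could not reach {SERVER_IP}:{SERVER_PORT}: {err}")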

Picking Configuration

Picking Configuration Folder Path

Description: This parameter is used to select the path where the picking configuration folder is stored.

Tuning recommendation: Before you run the project, please load the picking configuration folder first. We provide five types of picking configuration files, used respectively for logistics (semantic segmentation), logistics (object detection), supermarket, cables, and medicine boxes in the pharmaceutical industry, as shown in the table below. Please contact Mech-Mind Technical Support to request the model files you need first.

Usage Scenario                       Picking Configuration Folder Name
Logistics (semantic segmentation)    Logistics_Seg_RGBSuction
Logistics (object detection)         Logistics_OD_RGBSuction
Supermarket                          Supermarket_Seg_RGBSuction
Cables                               Cable_Seg_RGBGrasp
Medicine Boxes                       MedicineBox_Instance_3DSize_RGBSuction

The picking configuration folder contains two JSON files and one model folder, and the deep learning model is stored in the model folder. The path you set should point to the picking configuration folder itself, NOT to the model folder inside it; otherwise this Step cannot function properly.

For example, a correct path can be D:/ConfigurationFiles/Cable_Seg_RGBGrasp.
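The sketch below is a hypothetical helper (not part of the software) that checks a selected path against the structure described above: the folder should contain the two JSON files and the model subfolder, and the path itself should not end at the model folder. The subfolder name "model" is an assumption; the actual name may differ in your configuration files.

    from pathlib import Path

    def check_picking_config_path(path_str: str) -> None:
        """Hypothetical check that a path points at a picking configuration
        folder (two JSON files plus a model subfolder), not at the model
        folder inside it. Not part of the software; the subfolder name
        "model" is assumed."""
        path = Path(path_str)
        if not path.is_dir():
            print(f"{path} does not exist or is not a folder.")
            return
        if path.name.lower() == "model":
            print("The path ends at the model folder; select its parent folder instead.")
            return
        json_files = list(path.glob("*.json"))
        has_model_dir = any(child.is_dir() and child.name.lower() == "model"
                            for child in path.iterdir())
        if len(json_files) == 2 and has_model_dir:
            print(f"{path} looks like a valid picking configuration folder.")
        else:
            print(f"{path} does not match the expected folder structure.")

    # Example path from the documentation above:
    check_picking_config_path("D:/ConfigurationFiles/Cable_Seg_RGBGrasp")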

If you are not sure which type of deep learning model to use, consult Mech-Mind Technical Support for advice.

For parameter adjustment in each scenario, refer to the following:

  • Logistics (semantic segmentation): Parameter Adjustment in the Logistics (Semantic Segmentation) Scenario.

  • Logistics (object detection): Parameter Adjustment in the Logistics (Object Detection) Scenario.

  • Supermarket: Parameter Adjustment in the Supermarket Scenario.

  • Cables: Parameter Adjustment in the Cables Scenario.

  • Medicine Boxes: Parameter Adjustment in the Medicine Boxes Scenario.

It is recommended to use a GeForce GTX 10 Series graphics card with at least 4 GB of memory for the above scenarios. When you run this Step for the first time, the deep learning model will be optimized according to the hardware type, and the one-time optimization process takes about 10 to 30 minutes.
