Predict Pick Points V2

Function

This Step recognizes pickable objects based on 2D images and depth maps and outputs the corresponding pick points.

Usage Scenario

This Step is typically used for sorting various objects that are heaped or scattered. It usually follows the Scale Image in 2D ROI Step, from which it obtains the scaled depth map, point cloud, and ROI information.

Input and Output

By default, this Step displays no ports. After you set the Picking Configuration Folder Path in the Step Parameters panel, input and output ports appear according to the selected picking configuration folder.

When the medicine box picking configuration folder is selected, the input and output ports are shown below.

(Figure: input and output ports of this Step)

Prerequisites

Requirement of Graphics Card

This Step requires an NVIDIA GeForce GTX 1650 Ti or higher graphics card.

Instructions for Use

  • Before using this Step, please wait for the deep learning server to start. Once the server has started successfully, the message "Deep learning server started successfully at xxx" appears in the log panel, and you can then run the Step.

  • When you run this Step for the first time, please set the Picking Configuration Folder Path in the Step Parameters panel first.

  • When you run this Step for the first time, the deep learning model is optimized for your hardware. This one-time optimization takes about 15 to 35 minutes; please wait for it to finish.

Parameter Description

Server

Server IP

Description: This parameter is used to set the IP address of the deep learning server.

Default value: 127.0.0.1

Tuning recommendation: Please set the parameter according to the actual requirement.

Server Port (1–65535)

Description: This parameter is used to set the port number of the deep learning server.

Default value: 60054

Value range: 60000–65535

Tuning recommendation: Please set the parameter according to the actual requirement.
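If the Step cannot reach the deep learning server, a quick connectivity check can help rule out address or firewall problems. The sketch below is only an illustration (not part of the software); the host and port are the documented defaults, and any running service on that port will satisfy the check.

```python
import socket

def server_reachable(host: str = "127.0.0.1", port: int = 60054,
                     timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    127.0.0.1 and 60054 are the documented defaults for the
    deep learning server; adjust them to your own settings.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable.
        return False

if __name__ == "__main__":
    # Prints True once the deep learning server is listening on the default port.
    print(server_reachable())
```

This only confirms that something is listening on the configured address; it does not verify that the listener is actually the deep learning server.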

Inference Setting

Inference Mode

Description: This parameter is used to select the inference mode for deep learning.

Value list: GPU, CPU

  • GPU: Use the GPU for deep learning model inference after the model is optimized. Inference is relatively fast, but the first-time model optimization takes about 10 to 30 minutes.

  • CPU: Use the CPU for deep learning model inference. Compared with GPU inference, this increases the inference time and may reduce the recognition accuracy.

Default value: GPU

Tuning recommendation: GPU inference is faster than CPU inference. After switching the inference mode, restart the deep learning server for the change to take effect.

Picking Configuration

Picking Configuration Folder Path

Description: This parameter is used to select the path where the picking configuration folder is stored. After setting the Picking Configuration Folder Path, different input and output ports will appear according to different picking configuration folders.

Tuning recommendation: Please set the Picking Configuration Folder Path according to the actual requirement. A picking configuration folder for the medicine box scenario is provided, as shown below.

Usage Scenario: Medicine Boxes

Picking Configuration Folder Name: MedicineBox_Instance_3DSize_RGBSuction

Description: The picking configuration folder can be downloaded along with the “Boxes” solutions in the solution library.

Parameter Description for the Medicine Boxes Scenario

The picking configuration folder contains two JSON files and one model folder; the deep learning model is stored in the model folder. The Picking Configuration Folder Path must point to the picking configuration folder itself, NOT to the model folder inside it, or else this Step cannot function properly.

For example, a correct path can be D:/ConfigurationFiles/MedicineBox_Instance_3DSize_RGBSuction.
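To avoid the common mistake of pointing the path at the model folder, the expected layout can be sanity-checked with a few lines of Python. This is an illustrative sketch, not an official tool; the layout assumptions (exactly two JSON files plus a model subfolder) come from the description above.

```python
from pathlib import Path

def check_picking_config(folder: str) -> list[str]:
    """Return a list of problems found with a picking configuration folder path.

    Expected layout (per the documentation): two JSON files and one
    'model' subfolder. The path must point at the configuration folder
    itself, not at the model folder inside it.
    """
    problems = []
    path = Path(folder)
    if path.name == "model":
        problems.append("path points at the model folder; use its parent instead")
    if not path.is_dir():
        problems.append("folder does not exist")
        return problems
    if len(list(path.glob("*.json"))) != 2:
        problems.append("expected exactly two JSON files")
    if not (path / "model").is_dir():
        problems.append("missing 'model' subfolder")
    return problems

# Hypothetical usage with the example path from above:
# check_picking_config("D:/ConfigurationFiles/MedicineBox_Instance_3DSize_RGBSuction")
# returns [] when the layout is correct.
```

An empty list means the folder matches the documented layout; any returned strings describe what to fix before setting the path in the Step parameters.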
