Carton Locating


Before using this tutorial, you should already have created a Mech-Vision solution from the Single-Case Cartons case project, as described in the Hand-Eye Calibration section.

In this tutorial, you will first learn about the project workflow, and then learn how to deploy the project by adjusting Step parameters so that it recognizes the cartons’ poses and outputs the vision result.

Video tutorial: Carton Locating

Introduction to the Project Workflow

The following list describes each Step or Procedure in the project workflow.

  1. Capture Images from Camera: Connect to the camera and capture images of the cartons.
  2. Point Cloud Preprocessing and Get the Mask of the Highest Layer: Preprocess the cartons’ point cloud and obtain the mask of the cartons on the highest layer.
  3. Segment Masks of Individual Cartons Using Deep Learning: Use deep learning inference to segment the masks of individual cartons from the input mask of the cartons on the highest layer, so that the point cloud of each carton can be obtained from its mask.
  4. Calculate Carton Poses: Recognize the cartons’ poses, and verify or adjust the recognition results based on the entered carton dimensions.
  5. Adjust Poses V2: Transform the reference frame of the carton poses and sort the poses of multiple cartons by rows and columns.
  6. Output: Output the cartons’ poses for the robot to pick.

Adjust Step Parameters

In this section, you will deploy the project by adjusting the parameters of each Step or Procedure.

Capture Images from Camera

The Single-Case Cartons case project contains virtual data. Therefore, you need to disable the Virtual Mode in the Capture Images from Camera Step and connect to the real camera.

  1. Select the Capture Images from Camera Step, disable the Virtual Mode option, and click Select camera on the Step parameters tab.

    vision project click select camera
  2. In the prompted window, click the connect icon to the right of the desired camera’s serial number to connect the camera. After the camera is connected successfully, the icon changes to show that the camera is connected.

    After the camera is connected, select the parameter group: click the Select parameter group button and select the calibrated parameter group whose name contains ETH/EIH and the calibration date.
  3. After the camera is connected and the parameter group is selected, the calibration parameter group, IP address, and ports of the camera will be obtained automatically. Just keep the default settings of the other parameters.


Now the camera is successfully connected.

Point Cloud Preprocessing & Get the Mask of the Highest Layer

To prevent the robot from colliding with other cartons while picking items from the non-highest layer, it is necessary to use this Procedure to obtain the mask of the cartons on the highest layer. By giving priority to picking these cartons, you can minimize the risk of collisions during the picking process.

Set 3D ROI

  1. In the Point Cloud Preprocessing & Get the Mask of the Highest Layer Procedure, click the Open the editor button in the Step Parameters tab to open the Set 3D ROI window.

    vision project open 3d roi editor
  2. In the Set 3D ROI window, drag the default 3D ROI in the point cloud display area to a proper position. Make sure that both the highest and lowest areas of the carton stack are within the green box, and that the green box does not contain other interfering point clouds.
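Under the hood, a 3D ROI is simply an axis-aligned box that keeps the points inside it and discards interfering points outside. The following is a minimal illustrative sketch of that idea (not the Mech-Vision implementation; `crop_to_roi` is a name assumed here):

```python
import numpy as np

def crop_to_roi(points, roi_min, roi_max):
    """Keep only the points that fall inside an axis-aligned 3D ROI box."""
    points = np.asarray(points, dtype=float)
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    return points[inside]

# One point on the carton stack plus one interfering point outside the box
cloud = np.array([[0.1, 0.2, 0.5],
                  [2.0, 2.0, 2.0]])
roi = crop_to_roi(cloud, roi_min=[0.0, 0.0, 0.0], roi_max=[1.0, 1.0, 1.0])
print(roi)  # only the first point remains
```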

Set Carton Dimensions

In the Point Cloud Preprocessing & Get the Mask of the Highest Layer Procedure, fill in the Box Length, Box Width, and Box Height in sequence.

This Procedure extracts the point cloud of the cartons on the highest layer according to the carton height. If the Box Height you set is greater than the actual carton height, the point cloud of the highest-layer cartons will be extracted incorrectly.
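The height-based extraction described above can be sketched as follows. This is a simplified illustration that assumes a flat, level stack; it is not the actual Mech-Vision implementation:

```python
import numpy as np

def highest_layer_mask(points_z, box_height, tolerance=0.01):
    """Mark the points whose height lies within one carton height of the top.

    If box_height is set larger than the real carton height, points from the
    layer below would leak into the mask -- which is why the setting must
    match the actual cartons.
    """
    points_z = np.asarray(points_z, dtype=float)
    top = points_z.max()
    return points_z >= top - box_height + tolerance

# Two layers of 0.3 m cartons: top surfaces near 0.6 m and 0.3 m
z = np.array([0.60, 0.59, 0.30, 0.29])
mask = highest_layer_mask(z, box_height=0.3)
print(mask)  # -> [ True  True False False]
```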

Segment Masks of Individual Cartons Using Deep Learning

After obtaining the mask of the cartons on the highest layer, you need to use deep learning to segment the masks of individual cartons.

  1. In the Segment Masks of Individual Cartons Using Deep Learning Procedure, click the Open the editor button in the Step Parameters panel to open the Set ROI window.

    vision project open dl editor
  2. Set the 2D ROI in the Set ROI window. The 2D ROI needs to cover the cartons on the highest layer, leaving a margin of about one third.

    vision project dl roi
  3. The current case project has a built-in instance segmentation model package suitable for cartons. After you run this Procedure, it outputs the masks of individual cartons.

If the segmentation results are not satisfactory, you can adjust the size of the 2D ROI accordingly.
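The “margin of about one third” can be pictured as growing the ROI around its center. A small illustrative sketch follows; the `(x, y, width, height)` ROI format and the function name are assumptions for illustration, not part of Mech-Vision:

```python
def expand_roi(x, y, w, h, margin=1 / 3):
    """Grow a 2D ROI around its center by a fractional margin on each side."""
    dx, dy = w * margin / 2, h * margin / 2
    return x - dx, y - dy, w + 2 * dx, h + 2 * dy

# A 300x150 box around the highest-layer cartons, expanded by one third
roi = expand_roi(100, 100, 300, 150)
print(roi)  # -> (50.0, 75.0, 400.0, 200.0)
```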

Calculate Carton Poses

After obtaining the point clouds of individual cartons, you can calculate carton poses. In addition, you can enter the dimensions of the carton to verify the correctness of the recognition results.

The Calculate Carton Poses Procedure is used to calculate the poses and dimensions of cartons. There is no need to set parameters for this Procedure.
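The dimension check this Procedure performs can be thought of as comparing the recognized length, width, and height against the entered values within a tolerance. A hedged sketch of that idea (the tolerance value and the `dims_match` function are illustrative, not the Procedure’s actual logic):

```python
def dims_match(measured, expected, rel_tol=0.10):
    """Check recognized carton dimensions against the entered ones.

    Both arguments are (length, width, height) tuples; rel_tol is the
    allowed relative error per dimension.
    """
    return all(abs(m - e) <= e * rel_tol
               for m, e in zip(sorted(measured), sorted(expected)))

print(dims_match((0.41, 0.30, 0.20), (0.40, 0.30, 0.20)))  # -> True
print(dims_match((0.60, 0.30, 0.20), (0.40, 0.30, 0.20)))  # -> False
```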

Adjust Poses V2

The Adjust Poses V2 Step is used to transform carton poses from the camera reference frame to the robot reference frame, adjust pose orientations, sort poses, and filter out unqualified poses. There is no need to set parameters for this Step.
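Conceptually, the frame transformation applies the hand-eye calibration result to each pose, and the sorting orders the picks by rows and columns. A simplified sketch of both ideas follows (the transform values and function names are illustrative; only positions are shown, orientations are omitted):

```python
import numpy as np

def transform_positions(T_cam_to_robot, positions):
    """Apply a 4x4 homogeneous transform to Nx3 positions (camera -> robot frame)."""
    positions = np.asarray(positions, dtype=float)
    homo = np.hstack([positions, np.ones((len(positions), 1))])
    return (homo @ T_cam_to_robot.T)[:, :3]

def sort_by_rows_then_columns(positions, row_eps=0.05):
    """Return pick order: group poses into rows by y (within row_eps), then sort by x."""
    return sorted(range(len(positions)),
                  key=lambda i: (round(positions[i][1] / row_eps), positions[i][0]))

# Example: a pure translation between camera and robot base (illustrative values)
T = np.eye(4)
T[:3, 3] = [1.0, 0.0, 0.5]
pts = transform_positions(T, [[0.2, 0.1, 0.0], [0.1, 0.1, 0.0]])
print(sort_by_rows_then_columns(pts))  # -> [1, 0]
```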

Procedure Out

After obtaining the proper carton poses, the Procedure Out Step sends the results of the current project to the backend service.

So far, you have deployed the project for carton locating in Mech-Vision.
