Carton Locating

Before using this tutorial, you should have created a Mech-Vision solution using the “Single-Case Cartons” case project in the “Hand-Eye Calibration” section.

In this tutorial, you will first learn about the project workflow, and then learn how to deploy the project by adjusting Step parameters so that it recognizes the cartons’ poses and outputs the vision result.

Video tutorial: Carton Locating

Introduction to the Project Workflow

The following table describes each Procedure in the project workflow.

No. | Step/Procedure | Description
1 | Capture Images from Camera | Connect to the camera and capture images of cartons.
2 | Point Cloud Preprocessing & Get the Mask of the Highest Layer | Preprocess the cartons’ point cloud and obtain the mask of the cartons on the highest layer.
3 | Segment Masks of Individual Cartons Using Deep Learning | Use deep learning inference to segment the masks of individual cartons from the input mask of the cartons on the highest layer, so that the point cloud of each individual carton can be obtained from its mask.
4 | Calculate Carton Poses | Recognize the cartons’ poses, and verify or adjust the recognition results based on the entered carton dimensions.
5 | Adjust poses | Transform the reference frame of the carton poses and sort the poses of multiple cartons by rows and columns.
6 | Output | Output the cartons’ poses for the robot to pick.

Adjust Step Parameters

In this section, you will deploy the project by adjusting the parameters of each Step or Procedure.

Capture Images from Camera

The “Single-Case Cartons” case project contains virtual data. Therefore, you need to disable the Virtual Mode and connect to the real camera in the “Capture Images from Camera” Step.

  1. Select the “Capture Images from Camera” Step, disable the Virtual Mode option, and click Select camera on the Step parameters tab.

  2. In the prompted window, click the connection icon to the right of the desired camera’s serial number to connect to the camera. After the camera is connected, the icon changes to indicate that the camera is connected.


    After the camera is connected, click Select from to select the calibrated parameter group.

  3. After the camera is connected and the parameter group is selected, the calibration parameter group, IP address, and ports of the camera are obtained automatically. Keep the default settings for the other parameters.


Now, you have connected the software to the camera.

Point Cloud Preprocessing & Get the Mask of the Highest Layer

To prevent the robot from colliding with other cartons while it picks cartons that are not on the highest layer, this Procedure obtains the mask of the cartons on the highest layer. By picking these cartons first, you minimize the risk of collisions during the picking process.

In this Procedure, you need to adjust the 3D ROI and Layer Height parameters. (A conceptual sketch of the layer filtering follows the steps below.)

  1. On the Step parameters tab, click the Set 3D ROI button to set the 3D ROI.


    The 3D ROI should generally cover the regions from the highest to the lowest layer of the carton stack while containing as few unwanted points as possible.

  2. To avoid capturing cartons that are not on the highest layer, set the Layer Height parameter to a value less than the height of a single carton in the stack, for example, half of the carton height. Usually, you can keep the recommended value.

    If the cartons in different stacks have different dimensions, set the Layer Height parameter according to the height of the shortest carton.

    If the Layer Height parameter is set improperly, the project may obtain cartons that are not on the highest layer, which can cause collisions between the robot and other cartons during picking.
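
The Procedure performs this filtering inside Mech-Vision, but the underlying idea of cropping the point cloud to the 3D ROI and then keeping only the points within one Layer Height of the top of the stack can be sketched in a few lines of NumPy. The function name, the synthetic data, and the Z-up reference frame assumed below are illustrative choices, not values taken from the software.

    import numpy as np

    def highest_layer_mask(points, roi_min, roi_max, layer_height):
        """Illustrative sketch: crop a point cloud to a 3D ROI and keep only
        the points that belong to the highest layer of the stack.

        points       -- (N, 3) array of XYZ coordinates in meters (Z-up assumed)
        roi_min/max  -- 3-element arrays giving the corners of the 3D ROI
        layer_height -- thickness of the top layer to keep, e.g. half a carton height
        """
        points = np.asarray(points, dtype=float)

        # Keep only the points inside the 3D ROI.
        in_roi = np.all((points >= roi_min) & (points <= roi_max), axis=1)

        # The highest layer is everything within layer_height of the topmost point.
        z = points[:, 2]
        top_z = z[in_roi].max()
        return in_roi & (z >= top_z - layer_height)

    # Synthetic example: two layers of points 0.3 m apart; only the upper layer survives.
    rng = np.random.default_rng(0)
    lower = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), np.full(500, 0.3)])
    upper = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), np.full(500, 0.6)])
    cloud = np.vstack([lower, upper])
    mask = highest_layer_mask(cloud, roi_min=[0.0, 0.0, 0.0], roi_max=[1.0, 1.0, 1.0], layer_height=0.15)
    print(mask.sum())  # 500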

Segment Masks of Individual Cartons Using Deep Learning

After obtaining the mask of the cartons on the highest layer, you need to use deep learning to segment the masks of individual cartons.

The current case project has a built-in instance segmentation model package suitable for cartons. After running this Procedure, you will get the masks of individual cartons.


If the segmentation results are not satisfactory, you can adjust the size of the 3D ROI accordingly.
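
Mech-Vision performs this step with its built-in instance segmentation model, which is not reproduced here. As a rough stand-in for readers who want to experiment outside the software, the sketch below splits a binary highest-layer mask into per-carton masks with connected-component labeling from SciPy; this only works when the cartons are clearly separated in the mask, which is exactly the hard case the trained deep learning model is there to handle.

    import numpy as np
    from scipy import ndimage

    def split_into_instances(layer_mask):
        """Rough stand-in for deep learning instance segmentation: split a binary
        mask of the highest layer into one mask per carton using connected-component
        labeling. Only works when the cartons do not touch in the mask."""
        labeled, num = ndimage.label(layer_mask)
        return [labeled == i for i in range(1, num + 1)]

    # Toy example: two separated rectangular "cartons" in a 10 x 12 mask.
    mask = np.zeros((10, 12), dtype=bool)
    mask[1:4, 1:5] = True    # carton 1
    mask[6:9, 7:11] = True   # carton 2
    instances = split_into_instances(mask)
    print(len(instances))    # 2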

Calculate Carton Poses

After obtaining the point clouds of individual cartons, you can calculate carton poses. In addition, you can enter the dimensions of the carton to verify the correctness of the recognition results.

In the “Calculate Carton Poses” Procedure, set the Length on X-axis/Y-axis/Z-axis and Box Dimension Error Tolerance parameters (a simplified sketch of the pose-and-dimension check follows this list):

  • Length on X-axis/Y-axis/Z-axis: Set these parameters according to the actual dimensions of the cartons.

  • Box Dimension Error Tolerance: Keep the default value of 30 mm. If the entered carton dimensions differ significantly from the recognized ones, you can try adjusting this parameter.
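
Inside the Procedure this calculation is done for you. Conceptually, it amounts to fitting an oriented box to each carton's point cloud, taking the box center and axes as the pose, and comparing the fitted dimensions with the entered Length on X-axis/Y-axis/Z-axis within the tolerance. The PCA-based sketch below illustrates that idea under the assumption of a reasonably complete, box-shaped point cloud; the function name and return values are illustrative, not Mech-Vision internals.

    import numpy as np

    def estimate_carton_pose(points, expected_dims, tolerance=0.03):
        """Illustrative PCA-based pose estimate for one carton's point cloud.

        points        -- (N, 3) XYZ points of a single carton, in meters
        expected_dims -- (length_x, length_y, length_z) entered by the user
        tolerance     -- allowed deviation per dimension (30 mm by default)
        """
        points = np.asarray(points, dtype=float)
        center = points.mean(axis=0)

        # Principal axes of the point cloud approximate the carton's edges.
        _, _, vt = np.linalg.svd(points - center, full_matrices=False)
        rotation = vt.T                     # columns are the carton's local axes
        if np.linalg.det(rotation) < 0:     # keep a right-handed frame
            rotation[:, -1] *= -1

        # Extent of the cloud along each principal axis = fitted carton dimensions.
        local = (points - center) @ rotation
        fitted_dims = local.max(axis=0) - local.min(axis=0)

        # Compare fitted and entered dimensions (axis order ignored) within tolerance.
        dims_ok = np.allclose(np.sort(fitted_dims), np.sort(expected_dims), atol=tolerance)
        return center, rotation, fitted_dims, dims_ok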

Adjust poses

After obtaining the cartons’ poses, you need to transform them from the camera reference frame to the robot reference frame to facilitate robot picking.

In this Procedure, you can also sort the cartons’ poses by rows and columns so that the robot picks in a predictable sequence. (A minimal sketch of the transform-and-sort idea follows the two options below.)

  • Ascending (by Carton Pose’s X Value in Robot Base Reference Frame): Usually, keep the default setting (selected). When this option is selected, cartons in rows will be sorted in the ascending order of carton poses' X-coordinate values in the robot base reference frame; otherwise, cartons in rows will be sorted in the descending order.

  • Ascending (by Carton Pose’s Y Value in Robot Base Reference Frame): Usually, keep the default setting (selected). When this option is selected, cartons in columns will be sorted in the ascending order of carton poses' Y-coordinate values in the robot base reference frame; otherwise, they will be sorted in the descending order.
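
Conceptually, this Procedure applies the camera-to-robot transform obtained from hand-eye calibration to every carton pose and then orders the results by their X and Y coordinates in the robot base frame. The sketch below assumes poses and the extrinsic transform are stored as 4x4 homogeneous matrices and reduces the row/column sorting to a simple lexicographic sort, which is a simplification of the actual sorting logic.

    import numpy as np

    def transform_and_sort(poses_cam, cam_to_robot, x_ascending=True, y_ascending=True):
        """Illustrative sketch: transform carton poses from the camera frame to the
        robot base frame, then sort by X (rows) and, within equal X, by Y (columns).

        poses_cam    -- list of 4x4 pose matrices in the camera reference frame
        cam_to_robot -- 4x4 extrinsic matrix from hand-eye calibration
        """
        poses_robot = [cam_to_robot @ pose for pose in poses_cam]

        def sort_key(pose):
            x, y = pose[0, 3], pose[1, 3]
            return (x if x_ascending else -x, y if y_ascending else -y)

        return sorted(poses_robot, key=sort_key)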

Procedure Out

After obtaining the proper carton poses, the “Procedure Out” Step sends the results of the current project to the backend service.
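
The actual format of the vision result is determined by Mech-Vision and the communication interface you deploy, so the snippet below is only a hypothetical illustration of the idea: each pose is reduced to a position and a rotation and serialized so that a downstream consumer can read it. The field names are made up for the example and are not Mech-Vision's output schema.

    import json
    import numpy as np

    def pack_vision_result(poses_robot):
        """Hypothetical packaging of carton poses for a downstream consumer.
        poses_robot -- list of 4x4 pose matrices in the robot base frame."""
        result = []
        for i, pose in enumerate(poses_robot):
            result.append({
                "index": i,                                # pick order
                "position_xyz": pose[:3, 3].tolist(),      # translation in meters
                "rotation_matrix": pose[:3, :3].tolist(),  # orientation
            })
        return json.dumps({"carton_poses": result})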

So far, you have deployed the project for carton locating.
