Carton Locating
Before using this tutorial, you should have created a Mech-Vision solution using the “Single-Case Cartons” case project in the “Hand-Eye Calibration” section.
In this tutorial, you will first learn about the project workflow, and then how to deploy the project by adjusting Step parameters, to recognize the cartons’ poses and output the vision result.
Introduction to the Project Workflow
The following table describes each Procedure in the project workflow.
No. | Step/Procedure | Description
---|---|---
1 | Capture Images from Camera | Connect to the camera and capture images of cartons.
2 | Point Cloud Preprocessing & Get the Mask of the Highest Layer | Preprocess the cartons’ point cloud and obtain the mask of the cartons on the highest layer.
3 | Segment Masks of Individual Cartons Using Deep Learning | Use deep learning inference to segment masks of individual cartons from the input mask of the cartons on the highest layer, so that the point cloud of each carton can be obtained from its mask.
4 | Calculate Carton Poses | Recognize the cartons’ poses, and verify or adjust the recognition results based on the input carton dimensions.
5 | Adjust Poses | Transform the reference frame of the carton poses and sort the poses of multiple cartons by rows and columns.
6 | Output | Output the cartons’ poses for the robot to pick.
Adjust Step Parameters
In this section, you will deploy the project by adjusting the parameters of each Step or Procedure.
Capture Images from Camera
The “Single-Case Cartons” case project contains virtual data. Therefore, you need to disable the Virtual Mode and connect to the real camera in the “Capture Images from Camera” Step.
-
Select the “Capture Images from Camera” Step, disable the Virtual Mode option, and click Select camera on the Step parameters tab.
-
In the prompted window, click the connect icon on the right of the desired camera’s serial number to connect the camera. After the camera is connected, the icon changes to indicate the connected status.
After the camera is connected, click Select from to select the calibrated parameter group.
-
After the camera is connected and the parameter group is selected, the calibration parameter group, IP address, and ports of the camera will be obtained automatically. Just keep the default settings of the other parameters.
Now, you have connected the software to the camera.
Point Cloud Preprocessing & Get the Mask of the Highest Layer
To prevent the robot from colliding with other cartons when picking a carton that is not on the highest layer, this Procedure obtains the mask of the cartons on the highest layer. By picking these cartons first, you can minimize the risk of collisions during picking.
In this Procedure, you need to adjust the 3D ROI and Layer Height parameters.
-
On the Step parameters tab, click the Set 3D ROI button to set the 3D ROI.
The 3D ROI frame should generally cover the highest and lowest regions of the carton stack while including as few unwanted points as possible.
-
To avoid capturing cartons that are not on the highest layer, set the Layer Height parameter. Its value should be less than the height of a single carton in the stack, for example, half of the carton height. Usually, you can simply keep the recommended value.
If the cartons in different stacks have different heights, set the Layer Height parameter according to the height of the shortest carton.
If the Layer Height parameter is set improperly, the project may obtain cartons that are not on the highest layer, which can cause the robot to collide with other cartons during picking.
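The effect of the 3D ROI and Layer Height parameters can be sketched in Python. This is an illustrative approximation only, not the Mech-Vision implementation; the function names and the toy point data are hypothetical.

```python
import numpy as np

def crop_to_roi(points, roi_min, roi_max):
    # Keep only points inside the axis-aligned 3D ROI box
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    return points[inside]

def highest_layer_mask(points, layer_height):
    # A point belongs to the highest layer if it lies within
    # layer_height below the top of the stack
    top_z = points[:, 2].max()
    return points[:, 2] >= top_z - layer_height

# Two points on the top layer (z = 0.50 m) and one on a lower layer (z = 0.25 m)
points = np.array([[0.1, 0.1, 0.50],
                   [0.2, 0.1, 0.50],
                   [0.1, 0.2, 0.25]])
roi_points = crop_to_roi(points, np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0]))
mask = highest_layer_mask(roi_points, layer_height=0.15)  # half of a 0.3 m carton
```

A Layer Height larger than the carton height would make `highest_layer_mask` include the lower layer as well, which is exactly the collision risk described above.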
Segment Masks of Individual Cartons Using Deep Learning
After obtaining the mask of the cartons on the highest layer, you need to use deep learning to segment the masks of individual cartons.
The current case project has a built-in instance segmentation model package suitable for cartons. After running this Procedure, you will obtain the masks of individual cartons.
If the segmentation results are not satisfactory, you can adjust the size of the 3D ROI accordingly.
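How the per-carton masks lead to per-carton point clouds can be sketched as follows. This is a hedged illustration with hypothetical data: it assumes an organized point cloud aligned pixel-for-pixel with the 2D image, so each boolean instance mask directly selects one carton’s points.

```python
import numpy as np

def point_clouds_from_masks(organized_cloud, instance_masks):
    # organized_cloud: H x W x 3 point map aligned with the 2D image;
    # each boolean H x W mask selects the pixels of one carton
    return [organized_cloud[mask] for mask in instance_masks]

# Toy 2 x 2 organized cloud; two "cartons" of one pixel each
cloud = np.arange(12, dtype=float).reshape(2, 2, 3)
masks = [np.array([[True, False], [False, False]]),
         np.array([[False, False], [False, True]])]
per_carton = point_clouds_from_masks(cloud, masks)
```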
Calculate Carton Poses
After obtaining the point clouds of individual cartons, you can calculate carton poses. In addition, you can enter the dimensions of the carton to verify the correctness of the recognition results.
In the “Calculate Carton Poses” Procedure, set the parameter Length on X-axis/Y-axis/Z-axis and Box Dimension Error Tolerance:
-
Length on X-axis/Y-axis/Z-axis: set these parameters according to the actual dimensions of cartons.
-
Box Dimension Error Tolerance: Keep the default value 30 mm. If the input carton dimensions and the recognized ones are significantly different, you can try to adjust this parameter.
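The dimension check can be sketched like this. The function name and comparison strategy are hypothetical, not the actual Step logic; the 0.030 m default mirrors the 30 mm tolerance mentioned above.

```python
def dimensions_match(recognized, expected, tolerance=0.030):
    # Compare recognized length/width/height against the expected carton
    # dimensions; tolerance is in meters (0.030 m = 30 mm).
    # Sorting makes the check independent of axis assignment.
    return all(abs(r - e) <= tolerance
               for r, e in zip(sorted(recognized), sorted(expected)))

# A 0.60 x 0.40 x 0.30 m carton recognized with small errors passes;
# a 100 mm error on one edge fails
ok = dimensions_match([0.61, 0.39, 0.30], [0.60, 0.40, 0.30])
bad = dimensions_match([0.70, 0.40, 0.30], [0.60, 0.40, 0.30])
```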
Adjust Poses
After obtaining the cartons’ poses, you need to transform them from the camera reference frame to the robot reference frame to facilitate robot picking.
In this Procedure, you can also sort the cartons’ poses by rows and columns so that the robot picks the cartons in a certain sequence.
-
Ascending (by Carton Pose’s X Value in Robot Base Reference Frame): Usually, keep the default setting (selected). When this option is selected, cartons in rows will be sorted in the ascending order of carton poses' X-coordinate values in the robot base reference frame; otherwise, cartons in rows will be sorted in the descending order.
-
Ascending (by Carton Pose’s Y Value in Robot Base Reference Frame): Usually, keep the default setting (selected). When this option is selected, cartons in columns will be sorted in the ascending order of carton poses' Y-coordinate values in the robot base reference frame; otherwise, they will be sorted in the descending order.
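The row-and-column sorting controlled by the two Ascending options can be sketched as below. This is an illustrative approximation, not the Step’s implementation: the grouping threshold `row_tolerance` and the function name are hypothetical, and poses are reduced to (x, y, z) positions in the robot base reference frame.

```python
def sort_poses_by_rows_and_columns(poses, row_tolerance=0.05,
                                   x_ascending=True, y_ascending=True):
    # Group poses into rows of similar Y, then order each row by X;
    # the two flags mirror the Ascending options described above.
    # row_tolerance (meters) decides when two poses share a row.
    ordered_by_y = sorted(poses, key=lambda p: p[1], reverse=not y_ascending)
    rows, current = [], [ordered_by_y[0]]
    for p in ordered_by_y[1:]:
        if abs(p[1] - current[-1][1]) <= row_tolerance:
            current.append(p)
        else:
            rows.append(current)
            current = [p]
    rows.append(current)
    result = []
    for row in rows:
        result.extend(sorted(row, key=lambda p: p[0], reverse=not x_ascending))
    return result

# One carton in a near row, two in a far row: the near row comes first,
# and the far row is ordered left to right by X
sorted_poses = sort_poses_by_rows_and_columns(
    [(0.5, 0.8, 0.3), (0.2, 0.8, 0.3), (0.3, 0.2, 0.3)])
```

Deselecting either option flips the corresponding `reverse` flag, reversing the order along that axis.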