Instructions for Small Non-Planar Workpiece Loading Projects

The procedures for setting up a small non-planar workpiece loading project are shown below:

Create a New Typical Application Project

Create a Project

Click on Typical Applications ‣ New Typical Application Project in the menu bar or New Typical Application Project in the toolbar to open the following window.

../../../../../_images/select_scene_4.png
  1. Select Small Non-Planar Workpieces.

  2. Name the project.

  3. Click on icon_selectfilepath_deploymentguidance to select a folder to save the project (it is recommended to create an empty folder), and then click Create Project.

Preparation

Before deploying the project, please complete the following preparation:

1. Calibrate the Camera

Calibrating the camera obtains the camera's parameter file, which is used to determine the spatial relationship between the robot and the camera.
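
Conceptually, the calibration result describes a rigid transform between the camera frame and the robot frame. The following minimal Python sketch is illustrative only (the rotation, translation, and point values are made up, and this is not the format of Mech-Vision's parameter file); it shows how such an extrinsic rotation and translation map a point from the camera frame into the robot base frame:

    # Minimal sketch: applying an assumed extrinsic rotation R and translation t
    # (camera frame -> robot base frame) to a point detected by the camera.
    # Values are illustrative, not taken from an actual calibration.
    import numpy as np

    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])         # rotation, camera frame -> robot base frame
    t = np.array([0.50, -0.20, 1.10])        # translation in meters

    p_camera = np.array([0.03, 0.12, 0.85])  # a point observed in the camera frame
    p_robot = R @ p_camera + t               # the same point in the robot base frame
    print(p_robot)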

Click on Camera Calibration(Standard) in the toolbar to calibrate the camera.

../../../../../_images/standard_calibration1.png

Alternatively, you can click on Camera ‣ Camera Calibration ‣ Standard in the menu bar to open the same window.

../../../../../_images/standard_calibration.png

For more information about camera calibration, please refer to Calibration Procedure.

2. Configure the Camera

Before using the camera to capture images, you need to set the camera type, parameter group name, IP address, and other parameters. For detailed instructions, please see Capture Images from Camera.

../../../../../_images/camera_configuration1.png
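
As a rough illustration only, the settings listed above can be thought of as a small record of camera type, parameter group name, and network address. The field names below are assumptions made for this sketch, not Mech-Vision's actual configuration format:

    # Illustrative only: a plain record of the camera settings mentioned above.
    # Field names are assumptions for this sketch, not Mech-Vision's configuration format.
    from dataclasses import dataclass

    @dataclass
    class CameraConfig:
        camera_type: str       # e.g. a real camera or a virtual camera replaying saved data
        parameter_group: str   # name of the parameter group selected for the camera
        ip_address: str        # network address used to reach the camera

    config = CameraConfig(camera_type="Real", parameter_group="default", ip_address="192.168.1.10")
    print(config)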

3. Set 2D and 3D ROI

Double-click on the Procedure Point Cloud Pre-Processing to display its detailed structure, and set the 2D ROI and 3D ROI in the Steps From Depth Map to Point Cloud and Extract 3D Points in 3D ROI, respectively.

Setting a 2D ROI can increase the deep learning pre-processing speed. Please refer to Instructions for Setting 2D ROI for detailed instructions on setting 2D ROI.

../../../../../_images/set_2droi_small.png

Attention

The way to set a 2D ROI in Step From Depth Map to Point Cloud of the Procedure Point Cloud Pre-Processing is the same as above.
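
In effect, a 2D ROI crops the captured image so that subsequent processing only handles the region that contains the workpieces, which is why it speeds up deep learning pre-processing. A minimal Python sketch of this cropping (the image size and ROI values are illustrative, not taken from the software):

    # Minimal sketch: cropping a 2D image to a region of interest so that
    # downstream processing only sees the relevant area. Values are illustrative.
    import numpy as np

    image = np.zeros((1200, 1920, 3), dtype=np.uint8)   # stand-in for a captured color image

    # 2D ROI given as top-left corner (x, y) plus width and height, in pixels.
    x, y, w, h = 600, 300, 640, 480
    roi = image[y:y + h, x:x + w]

    print(roi.shape)   # (480, 640, 3): only this crop is processed further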

Setting a 3D ROI extracts the point cloud of the target objects and filters out the unwanted points in the background. Please refer to Instructions for Setting 3D ROI for detailed instructions on setting a 3D ROI.

../../../../../_images/set_3droi_small.png
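
A 3D ROI acts as a box in space: only points that fall inside the box are kept, which removes most of the surrounding background. A minimal Python sketch of this kind of filtering (the point cloud and box limits are illustrative):

    # Minimal sketch: keeping only the points of a point cloud that lie inside an
    # axis-aligned 3D ROI box. Coordinates and limits are illustrative.
    import numpy as np

    points = np.random.uniform(-1.0, 1.0, size=(10000, 3))   # stand-in scene point cloud (x, y, z)

    lower = np.array([-0.30, -0.20, 0.40])   # lower corner of the 3D ROI box
    upper = np.array([ 0.30,  0.20, 0.80])   # upper corner of the 3D ROI box

    inside = np.all((points >= lower) & (points <= upper), axis=1)
    roi_points = points[inside]

    print(len(roi_points), "of", len(points), "points kept")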

Tip

Usually, small non-planar workpieces are piled randomly in a bin. In this case, setting a 3D ROI alone may not enable the algorithms to distinguish the bin from the workpieces, and the points in the background cannot be filtered effectively. In these circumstances, Set Static Background for Project may improve the performance of filtering unwanted points.
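
The general idea behind removing a static background is to capture the scene once without workpieces and then discard every point that lies close to that background cloud. The sketch below only illustrates this principle with a nearest-neighbour distance threshold (using SciPy); it is not Mech-Vision's actual implementation of Set Static Background for Project, and all values are made up:

    # Minimal sketch of static background removal: points closer than a threshold
    # to a previously captured background cloud are discarded. Illustrative only.
    import numpy as np
    from scipy.spatial import cKDTree

    background = np.random.uniform(-0.5, 0.5, size=(5000, 3))   # cloud captured with an empty bin
    scene = np.vstack([
        background + np.random.normal(0.0, 0.001, background.shape),  # bin and background, plus noise
        np.random.uniform(-0.05, 0.05, size=(500, 3)),                # stand-in workpiece points
    ])

    tree = cKDTree(background)
    distances, _ = tree.query(scene, k=1)

    foreground = scene[distances > 0.005]   # keep only points far enough from the background
    print(len(foreground), "foreground points remain")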

4. Instance Segmentation

Note

If you already have a super model, you can skip this step and start configuring the deep learning model file.

Instance segmentation is used for detecting and locating each distinct target object in an image, as shown below.

../../../../../_images/segmentation_contrast.png

Please see Instance Segmentation for detailed information about training a deep learning model.
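
In effect, the trained model returns one mask per detected workpiece, and each mask can then be used to cut out the corresponding region of the image or depth map for later processing. A minimal Python sketch of applying such masks (the depth map and masks below are synthetic stand-ins, not real model output):

    # Minimal sketch: using per-instance masks from instance segmentation to isolate
    # each detected workpiece in a depth map. The masks here are synthetic stand-ins.
    import numpy as np

    depth_map = np.random.uniform(0.4, 0.8, size=(1200, 1920)).astype(np.float32)

    # Suppose the model returned two boolean masks, one per detected workpiece.
    masks = np.zeros((2, 1200, 1920), dtype=bool)
    masks[0, 300:380, 600:700] = True
    masks[1, 500:560, 900:1020] = True

    for i, mask in enumerate(masks):
        instance_depth = np.where(mask, depth_map, 0.0)   # depth values of this workpiece only
        print(f"instance {i}: {int(mask.sum())} pixels")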

5. Generate a Point Cloud Model

Please use the tool Matching Model and Pick Point Editor to generate a point cloud model of the target object, so that Mech-Vision can compare the point cloud of the target object with the point cloud model and then generate an actual picking pose.

The point cloud model and pick point generated using the Matching Model and Pick Point Editor are shown below.

../../../../../_images/generate_point_cloud_model_and_pick_point.png

The generated model file will be saved in the project folder.
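
Conceptually, 3D matching yields a rigid transform that places the point cloud model onto the workpiece actually seen in the scene; applying that same transform to the pick point defined on the model gives the picking pose for that particular workpiece. A minimal Python sketch of this pose composition (all matrices and values are illustrative, not Mech-Vision's internal code):

    # Minimal sketch: composing the model-to-scene transform found by matching with
    # the pick point defined on the point cloud model to obtain an actual pick pose.
    # All values are illustrative.
    import numpy as np

    def make_pose(R, t):
        """Build a 4x4 homogeneous transform from a rotation matrix and a translation."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    # Pick point defined on the model, in the model's own coordinate frame.
    pick_in_model = make_pose(np.eye(3), np.array([0.0, 0.0, 0.02]))

    # Transform found by 3D matching: model frame -> scene (camera) frame.
    model_to_scene = make_pose(
        np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]]),
        np.array([0.10, -0.05, 0.75]),
    )

    # Pick pose for this particular workpiece, expressed in the scene frame.
    pick_in_scene = model_to_scene @ pick_in_model
    print(pick_in_scene)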


Project Deployment

During the project deployment phase, you will need to set the relevant parameters of the Steps and add the configuration files obtained in the preparation phase to the corresponding Steps before actually running the project.

1. Set the DL Model File and Configuration File

  • Double-click on the Procedure Instance Segmentation to display the detailed structure.

  • Select the Step Instance Segmentation and click on icon_selectmodel to set the model file and configuration file in the Step Parameters panel, as shown below.

../../../../../_images/add_model_file.png

2. Set the Model File and Geometric Center File

  • Double-click on the Procedure 3D Matching to display the detailed structure.

  • Select the Step 3D Coarse Matching and click on icon_selectmodel to set the model file and geometric center file in the Step Parameters panel, as shown below.

../../../../../_images/add_template_file.png
  • The way to set the model file and geometric center file in 3D Fine Matching is the same as in 3D Coarse Matching.

3. Set the Pose Files

../../../../../_images/multi_capture_points.png

Running and Debugging

After completing the project deployment, click on icon_run_button to run the project.

Tip

If you need to save the images or parameters when debugging the project or before training a deep learning model, you can use the tool Data Storage.


After successfully running and debugging the project, the tools Data Storage and Data Playback can be very useful if you need to save the on-site data for future reference, or if you find that a certain part of the project is not performing well and would like to optimize the Step or tune the parameters off-site.

Prerequisites for using the tool Data Playback:

  1. A project file with which the project can run correctly without errors.

  2. On-site source data gathered during the entire period of running the project, including 2D color images, depth maps, and the camera parameter file. Please refer to Data Storage for instructions on how to save the data.