Merge Point Cloud
For target objects whose front and back sides have different shapes, the camera can only capture the point cloud of one side at a time. That single-side point cloud is then used to generate a point cloud model for subsequent pick point configuration.
In actual production, if both sides of the target object need to be pickable, the separately configured models of the front and back sides should be merged.
Using a cast iron part as an example, the following section explains the merging process.
Merge Point Cloud
-
Using the Get point cloud by camera workflow, you can generate point cloud models for both the front and back sides of the cast iron part, and then manually configure pick points for each model.
The point cloud models and pick point configurations for both sides of the cast iron part are shown in the figure below.
-
Merge the target objects.
Click the icon to the right of the target object list, and select Merge target objects. In the pop-up window, select the main target object and the secondary target object, and click OK to create a new target object in the list.
Once the target object is merged, click Next to edit the merged point cloud model.
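If you want to prototype the effect of such a merge outside the software, the minimal sketch below shows the general idea, assuming the front and back point cloud models are already expressed in the same object reference frame. The Open3D library and the file names are illustrative assumptions, not part of Mech-Vision.

```python
# Conceptual sketch only, not the Mech-Vision API: concatenate two pre-aligned
# point cloud models and thin out duplicated points where the sides overlap.
import open3d as o3d

front = o3d.io.read_point_cloud("front.ply")   # hypothetical front-side model
back = o3d.io.read_point_cloud("back.ply")     # hypothetical back-side model

merged = front + back                               # simple concatenation of the two clouds
merged = merged.voxel_down_sample(voxel_size=1.0)   # thin duplicates in the overlap (units follow the clouds)
o3d.io.write_point_cloud("merged.ply", merged)
```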
Edit Point Cloud Model
Edit Point Cloud
If there are interference points around the point cloud model, you can remove the interference points by editing the point cloud. Refer to Edit Point Cloud for detailed instructions.
Calibrate Object Center Point
After an object center point is automatically calculated, you can calibrate it based on the actual target object in use. Select a calculation method under Calibrate center point by application, and click Start calculating to calibrate the object center point.
| Method | Description | Operation | Applicable target object |
|---|---|---|---|
| Re-calculate by using original center point | The default calculation method. The object center point is calculated according to the features of the target object and the original object center point. | Select Re-calculate by using original center point, and click the Start calculating button. | |
| Calibrate to center of symmetry | The object center point is calculated according to the target object's symmetry. | Select Calibrate to center of symmetry, and click the Start calculating button. | Symmetrical target objects |
| Calibrate to center of feature | The object center point is calculated according to the selected Feature type and the set 3D ROI. | Select Calibrate to center of feature, select a Feature type, set the 3D ROI, and click the Start calculating button. | Target objects with obvious geometric features |
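As a rough intuition for what these methods compute, the sketch below approximates a center of symmetry by the cloud's centroid, and a center of feature by the centroid of the points inside a 3D ROI. It is an illustration only, not Mech-Vision's internal algorithm; `points`, `roi_min`, and `roi_max` are assumed NumPy arrays.

```python
# Conceptual sketch only, not the Mech-Vision implementation.
import numpy as np

def center_of_symmetry(points: np.ndarray) -> np.ndarray:
    """For a roughly symmetric point cloud (N x 3), the centroid is a simple
    approximation of the center of symmetry."""
    return points.mean(axis=0)

def center_of_feature(points: np.ndarray, roi_min: np.ndarray, roi_max: np.ndarray) -> np.ndarray:
    """Centroid of only the points inside an axis-aligned 3D ROI, mimicking a
    center computed from a selected geometric feature."""
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    return points[inside].mean(axis=0)
```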
Configure Point Cloud Model
To better use the point cloud model in the subsequent 3D matching and enhance matching accuracy, the tool provides the following two options for configuring the point cloud model. You can enable the Configure point cloud model feature as needed.
Calculate Poses to Filter Matching Result
Once Calculate poses to filter matching result is enabled, more matching attempts will be made based on the settings to obtain matching results with higher confidence. However, more matching attempts will lead to longer processing time.
Two methods are available: Auto-calculate unlikely poses and Configure symmetry manually. In general, Auto-calculate unlikely poses is recommended. See the following for details.
| Method | Description | Operation |
|---|---|---|
| Auto-calculate unlikely poses | Poses that may cause false matches are calculated automatically. In subsequent matches, results whose poses match these poses are considered unqualified and filtered out. | |
| Configure symmetry manually | For rotationally symmetric target objects, configuring the rotational symmetry of the point cloud model prevents unnecessary rotations of the robot's end tool while it holds the target object. This increases the success rate of path planning and reduces the planning time, allowing the robot to move more smoothly and swiftly. | Select the symmetry axis by referring to Rotational Symmetry of Target Objects, and then set the Order of symmetry and Angle range. After this method is enabled, you should also configure the relevant parameters in the subsequent matching Steps for the symmetry settings to take effect. |
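To make Order of symmetry and Angle range concrete, the sketch below enumerates the orientations they imply for an object that is rotationally symmetric about its Z axis; during matching, these orientations are treated as equivalent. This is an illustration only, not the software's internal code, and the choice of Z as the symmetry axis is an assumption.

```python
# Conceptual sketch only: the poses implied by "Order of symmetry" and
# "Angle range" for an object rotationally symmetric about its Z axis.
import numpy as np

def symmetry_equivalent_poses(obj_pose: np.ndarray, order: int, angle_range_deg: float):
    """obj_pose is a 4x4 homogeneous transform. Each symmetric step of
    360/order degrees about the object's own Z axis that falls within
    +/- angle_range_deg yields an equivalent candidate pose."""
    step = 360.0 / order
    poses = []
    for k in range(order):
        angle = ((k * step + 180.0) % 360.0) - 180.0  # wrap to (-180, 180]
        if abs(angle) > angle_range_deg:
            continue
        a = np.deg2rad(angle)
        rot_z = np.array([[np.cos(a), -np.sin(a), 0.0, 0.0],
                          [np.sin(a),  np.cos(a), 0.0, 0.0],
                          [0.0,        0.0,       1.0, 0.0],
                          [0.0,        0.0,       0.0, 1.0]])
        poses.append(obj_pose @ rot_z)  # rotate about the object's own Z axis
    return poses
```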
Set Weight Template
During target object recognition, setting a weight template highlights key features of the target object, improving the accuracy of matching results. The weight template is typically used to distinguish target object orientation. The procedures to set a weight template are as follows.
A weight template can only be set when Point cloud display settings is set to Display surface point cloud only.
-
Click Edit template.
-
In the visualization area, press and hold the right mouse button to select a part of the features on the target object. The selected part, i.e., the weight template, will be assigned more weight in the matching process.
By holding Shift while pressing the right mouse button, you can set multiple weighted areas in a single point cloud model.
-
Click Apply to complete setting the weight template.
For the configured weight template to take effect in subsequent matching, go to the "Model Settings" parameter of the "3D Matching" Step, and select the model with the properly set weight template. Then, go to "Pose Filtering" and enable Consider Weight in Result Verification. The "Consider Weight in Result Verification" parameter appears only after "Parameter Tuning Level" is set to Expert.
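The effect of a weight template on matching can be pictured with the following sketch, which is an illustration rather than the actual scoring used by the 3D Matching Step: points inside the weighted area contribute more to the match error, so candidates that misalign the key feature score worse.

```python
# Conceptual sketch only: per-point weights emphasize the template area.
import numpy as np

def weighted_match_error(model_pts: np.ndarray, scene_pts: np.ndarray,
                         weights: np.ndarray) -> float:
    """model_pts and scene_pts are corresponding (N, 3) points after matching;
    weights is (N,), larger inside the weight template than elsewhere."""
    residuals = np.linalg.norm(model_pts - scene_pts, axis=1)
    return float(np.sum(weights * residuals) / np.sum(weights))
```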
Now the editing of the point cloud model is completed. You can click Next to set the pick point for the point cloud model.
Set Pick Point
Adjust Pick Point
By default, the pick point list displays the added pick points, defined in the reference frame with the object center point as the origin. Changing the object center point will affect the pick points. You can adjust the default pick points or add new pick points.
-
Adjust default pick points
If the automatically generated pick point does not meet the application requirements, you can customize the values in “Pick point settings” or manually drag the pick point in the visualization area.
-
Add new pick points
If the target object has multiple pick points, click the Add button to add new pick points.
Taking a square tube as an example, the magnetic gripper can pick it from the sides, ends, and edges. Therefore, you can add pick points at these positions.
After adding pick points, you can drag the pick points in the pick point list to adjust the priority. The points higher in the list will be considered first during actual picking.
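Because pick points are defined relative to the object center point, the pick pose used at runtime is the recognized object pose composed with the stored pick pose, which is also why moving the object center point shifts every pick point. A minimal sketch of this composition (illustrative only, not the Mech-Vision API):

```python
# Conceptual sketch only: composing an object pose with a pick point pose.
import numpy as np

def pick_pose_in_world(object_pose: np.ndarray, pick_pose_in_object: np.ndarray) -> np.ndarray:
    """Both arguments are 4x4 homogeneous transforms. The pick point is defined
    in the object frame (origin at the object center point), so its world pose
    follows the recognized object pose."""
    return object_pose @ pick_pose_in_object
```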
Set Pick Point Array
When the target object is symmetrical, you can set the pick point array based on the object center point according to actual requirements. Setting the pick point array can prevent the robot’s end tool from unnecessary rotations during picking. This increases the success rate of path planning and reduces the time required for path planning, allowing the robot to move more smoothly and swiftly. The procedures for setting are as follows.
-
Under “Pick point settings,” click Generate next to Pick point array.
-
Refer to Rotational Symmetry of Target Objects to select the axis of symmetry, and then set the Order of symmetry and Angle range.
-
(Optional) Make vision result contain pick point arrays.
If disabled, Mech-Viz or the path planning tool will generate pick point arrays based on the settings in the target object editor and plan the path according to the pick points in the array.
If enabled, Mech-Vision will output pick point arrays based on the settings in the target object editor, and Mech-Viz or the path planning tool will use the pick points in the array to plan the path.
Taking a round tube as an example, the settings of the pick point array are as follows.

In practice, pick points with a downward Z-axis are often invalid and will affect path planning. Therefore, you should narrow down the Angle range. It is generally recommended to keep the range within ±90°. For example, when configuring a pick point array for randomly placed round tubes, the angle range value is set to ±30° in the figure below.
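The rationale for narrowing the Angle range can be sketched as follows; this is an illustration, not the software's actual filtering logic: pick poses whose Z-axis points downward in the world frame rarely lead to a feasible pick, so a narrower range simply produces fewer of them in the first place.

```python
# Conceptual sketch only: keep pick poses whose Z axis does not point downward.
import numpy as np

def usable_poses(pick_pose_array):
    """pick_pose_array is a list of world-frame 4x4 pick poses. A pose is kept
    only if the world-Z component of its Z axis is non-negative; narrowing the
    Angle range when generating the array removes most downward poses upfront."""
    return [pose for pose in pick_pose_array if pose[2, 2] >= 0.0]
```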

Add Picking Configuration
Preview Picking Effect
If a tool has been configured in the path planning tool or Mech-Viz, you can enable it in the target object editor to preview the positional relationship between the pick point and the tool during actual picking. This helps determine whether the pick point settings are appropriate. The detailed instructions are as follows.
-
Path Planning Tool
-
Add an end tool.
Add an end tool and set the TCP in the path planning tool.
-
Preview and enable the tool.
Once the end tool is added, the tool information will be automatically updated in the tool list within the target object editor. You can select a tool from the tool list based on your actual needs and preview the positional relationship between the pick point and the tool in the visualization area during actual picking (as shown in the figure below).
If the tool is modified in the path planning tool, save the changes in the path planning tool to update the tool list in the target object editor.
-
Mech-Viz
-
Ensure the Mech-Viz project is within the current solution.
To ensure that the end tool information in Mech-Viz can be updated in the target object editor, refer to Export Project to Solution to move the Mech-Viz project into the current solution.
-
Add an end tool.
Add an end tool and set the TCP in Mech-Viz.
-
Preview and enable the tool.
Once the end tool is added, the tool information will be automatically updated in the tool list within the target object editor. You can select a tool from the tool list based on your actual needs and preview the positional relationship between the pick point and the tool in the visualization area during actual picking (as shown in the figure below).
If you have modified the tool configurations in Mech-Viz, save the changes in Mech-Viz to update the tool list in the target object editor.
Configure Translational and Rotational Relaxation for Tools
In practice, to ensure the tool can still pick the target object after translating or rotating along a certain axis of the pick point, you can configure the translational relaxation and rotational relaxation for the tool in the target object editor.
Taking the round tube as an example, the tool can be translated along the X-axis of the pick point while picking.

The corresponding configuration is shown below.
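Conceptually, this configuration means that any tool pose offset from the pick point purely along the pick point's X-axis, within the set range, still counts as a valid pick. The sketch below illustrates that check; the relaxation value and tolerances are assumptions, not Mech-Vision parameters.

```python
# Conceptual sketch only: does a tool pose stay within the X-axis translational
# relaxation of a pick point?
import numpy as np

def within_x_relaxation(pick_pose: np.ndarray, tool_pose: np.ndarray,
                        relaxation: float = 50.0) -> bool:
    """Both poses are 4x4 transforms in the same frame. The tool pose is valid if
    it differs from the pick pose only by a translation along the pick point's
    own X axis, within +/- relaxation (units follow the poses)."""
    delta = np.linalg.inv(pick_pose) @ tool_pose     # tool pose in the pick point frame
    same_orientation = np.allclose(delta[:3, :3], np.eye(3), atol=1e-3)
    x, y, z = delta[:3, 3]
    return same_orientation and abs(y) < 1e-3 and abs(z) < 1e-3 and abs(x) <= relaxation
```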

Click Save to save the configurations for the target object. To set the collision model, click Next.
Set Collision Model (Optional)
Set Collision Model
The collision model is a 3D virtual object used in collision detection for path planning. The tool automatically recommends a mode for generating the collision model based on the current configuration workflow. The recommended mode for this case is Use STL model to generate point cloud cube, in which the tool generates point cloud cubes from the selected STL model for collision detection. A collision model generated in this way is highly accurate, but collision detection with it is slower.
-
Select the STL model.
Click Select STL model and then select the STL model used to generate the point cloud cube.
-
Align models.
Aligning the collision model with the point cloud model of the target object ensures effective collision detection. You can click Auto-align point cloud model and collision model, or manually adjust the pose of the collision model, to align it with the point cloud model of the target object.
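As a rough picture of what Use STL model to generate point cloud cube produces, the sketch below voxelizes an STL mesh with the Open3D library. This is an external illustration, not the tool's own implementation; the file name, sample count, and voxel size are placeholders.

```python
# Conceptual sketch only: reduce an STL mesh to occupied cubes for collision checks.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("cast_iron_part.stl")          # placeholder file name
surface = mesh.sample_points_uniformly(number_of_points=20000)  # mesh surface -> point cloud
cubes = o3d.geometry.VoxelGrid.create_from_point_cloud(surface, voxel_size=5.0)
# Each occupied voxel acts as a small cube. Checking collisions against many small
# cubes tracks the part's shape closely (high accuracy) but is slower than using a
# coarser approximation, matching the trade-off described above.
```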
Configure Symmetry of Held Target Object
Rotational symmetry is the property of the target object that allows it to coincide with itself after rotating a certain angle around its axis of symmetry. When the “Waypoint type” is “Target object pose,” configuring the rotational symmetry can prevent the robot’s tool from unnecessary rotations while handling the target object. This increases the success rate of path planning and reduces the time required for path planning, allowing the robot to move more smoothly and swiftly.
Select the symmetry axis by referring to Rotational Symmetry of Target Objects, and then set the Order of symmetry and Angle range.
Now, the collision model settings are completed. Click Save to save the target object to Solution folder\resource\workobject_library. The target object can then be used in subsequent 3D matching Steps.