Use Camera to Acquire Point Cloud, Generate Point Cloud Model, and Configure Trajectory
In this workflow, you can use camera-acquired point cloud data to generate a point cloud model and create a target object.
|
Before selecting this workflow, make sure the current project contains a "Capture Images from Camera" Step, and that the camera is connected or virtual mode is enabled. |
Under the Get point cloud by camera workflow in the target object editor, click Select, and then set the target object name to enter the configuration process. The overall process is shown below.
-
Acquire point cloud: Use the current project to acquire point cloud data, then adjust parameters and set 3D ROI to generate a point cloud model.
-
Edit model: Edit the generated point cloud model, including calibrating the object center point and configuring the point cloud model, for better subsequent 3D matching.
-
Set trajectory: Create and adjust trajectories on the edited point cloud model.
-
Set collision model (optional): Generate a collision model for collision detection during path planning.
The following sections describe the configuration process.
Acquire Point Cloud
After entering the configuration process, you must first acquire point cloud data to generate a point cloud model.
Set Project Information
Select the “Capture Images from Camera” Step in the current project to acquire the point cloud. If the solution contains multiple projects, there may be multiple “Capture Images from Camera” Steps; select the appropriate one based on your actual needs. Then click Acquire point cloud, and view the result in the visualization area.
Note that when the camera’s field of view cannot cover the entire target object, priority should be given to ensuring that the key areas of the object are within the camera’s field of view.
The figure below shows an example of a long sheet metal part. Assuming that the region marked by the red frame on the right is the camera’s FOV, it is recommended to select the point cloud within the region marked by the green frame as the point cloud model to ensure matching stability. When acquiring data, make sure that the region marked by the green frame, especially its right edge, is within the camera’s FOV.
Record Robot Flange Pose at Image-Capturing Point
If the camera is mounted in the EIH mode, you can click the Obtain current pose button to obtain the robot flange pose when capturing the point cloud. Please note:
-
The pose refers to the robot flange pose, not the TCP.
-
The pose corresponds to the robot flange pose at the image-capturing point.
-
Carefully check the values to avoid errors.
Set Preprocessing Parameters
To remove interference points and speed up processing in subsequent Steps, you can preprocess the point cloud. For detailed parameter descriptions, refer to Preprocessing Parameters.
|
If the "3D Trajectory Recognition" Step is used in the project, you can enable Use parameters of Step "3D Trajectory Recognition" to automatically synchronize parameter values from that Step, improving 3D matching accuracy. |
Set ROI and Background Removal
To quickly remove irrelevant point clouds in the scene and extract the target object point cloud, you can set an ROI and remove the background.
To remove the background by capturing an image of the background, move the target object out of the camera’s view after acquiring the point cloud. Then click Acquire and remove background; the tool automatically captures an image of the background and removes the background point cloud.
Now point cloud acquisition is complete. Click Next to edit the generated point cloud model.
Edit Point Cloud Model
After the point cloud model is generated, it should be edited for better 3D matching.
Edit Point Cloud
When generating a point cloud model from camera-acquired point cloud data, ensure that the acquired point cloud accurately represents target object features, and remove interference points. For details, refer to Edit Point Cloud.
The figure below shows the point cloud model of a gearbox housing. The background point cloud below the housing and the cohesive point cloud (in orange) on the side should be removed.
Calibrate Object Center Point
After an object center point is automatically calculated, you can calibrate it based on the actual target object in use. Select a calculation method under Calibrate center point by application, and click Start calculating to calibrate the object center point.
| Method | Description | Operation | Application Scenario |
|---|---|---|---|
| Re-calculate by using original center point | The default calculation method. Calculates the object center point according to the features of the target object and the original object center point. | Select Re-calculate by using original center point, and click the Start calculating button. | In general, this method can be used to calculate the center point of all target objects. |
| Calibrate to center of symmetry | Calculates the object center point according to the target object’s symmetry. | Select Calibrate to center of symmetry, and click the Start calculating button. | This method can be used to calculate the object center point when filtering matching results by target object symmetry. |
| Calibrate to center of feature | Calculates the object center point according to the selected Feature type and the set 3D ROI. | Select Calibrate to center of feature, and click the Start calculating button. | Target objects with obvious geometric features. |
Reset to Original Point Cloud
During editing, if the current point cloud result is unsatisfactory, click the Reset button to undo all editing operations and restore the point cloud to its initial state when entering the "Edit model" step.
|
After resetting the point cloud, you need to recalculate the object center point and update the point cloud model configuration. |
Configure Point Cloud Model
To better use the point cloud model in the subsequent 3D matching and enhance matching accuracy, the tool provides the following two options for configuring the point cloud model. You can enable the Configure point cloud model feature as needed.
Calculate Poses to Filter Matching Result
Once Calculate poses to filter matching result is enabled, more matching attempts will be made based on the settings to obtain matching results with higher confidence. However, more matching attempts will lead to longer processing time.
Two methods are available: Auto-calculate unlikely poses and Configure symmetry manually. In general, Auto-calculate unlikely poses is recommended. See the following for details.
| Method | Description | Operation |
|---|---|---|
| Auto-calculate unlikely poses | Poses that may cause false matches will be calculated automatically. During the calculation process, a set of candidate poses is automatically generated based on equivalent or ambiguous poses that may arise due to the target object’s rotational symmetry about the Z-axis. In subsequent matches, poses that successfully match these poses will be considered unqualified and filtered out. | Click Calculate unlikely poses. Note that the calculation results will not be automatically updated when the point cloud model is modified. If there are any modifications, click Calculate unlikely poses again to update the results. |
| Configure symmetry manually | Calculates potentially mismatched poses based on the manually set parameters such as the Order of symmetry and Angle range. In subsequent matches, poses that successfully match these poses will be considered unqualified and filtered out. | Select the symmetry axis by referring to Rotational Symmetry of Target Objects, and then set the Order of symmetry and Angle range. |
|
After the symmetry is set manually, the symmetry setting of the target object takes effect in the Coarse Matching, Fine Matching, and Extra Fine Matching (if enabled) processes in the 3D Matching Step. |
|
After enabling the features above, if you want them to take effect in subsequent matching, you must configure the corresponding parameters in the subsequent matching Steps. |
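Conceptually, symmetry-based pose filtering enumerates the poses that are indistinguishable from a given pose under the object's rotational symmetry. The sketch below illustrates the idea in Python, assuming an order-N symmetry about the object's own Z-axis; the function names are illustrative, not the tool's actual API.

```python
import numpy as np

def rotz(angle_deg: float) -> np.ndarray:
    """Rotation matrix for a rotation of angle_deg about the Z-axis."""
    a = np.radians(angle_deg)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def equivalent_poses(pose: np.ndarray, order: int, angle_range: float = 360.0):
    """Enumerate poses equivalent to `pose` under an order-`order` rotational
    symmetry about the object's Z-axis, within `angle_range` degrees.
    `pose` is a 4x4 homogeneous transform."""
    step = 360.0 / order
    poses = []
    angle = step
    while angle < angle_range + 1e-9 and angle < 360.0:
        t = pose.copy()
        t[:3, :3] = pose[:3, :3] @ rotz(angle)  # rotate about the object's own Z-axis
        poses.append(t)
        angle += step
    return poses

# A 4-fold symmetric object has 3 additional equivalent poses (90°, 180°, 270°).
identity = np.eye(4)
print(len(equivalent_poses(identity, order=4)))  # 3
```

Matching results whose poses coincide with one of these equivalents can then be treated as duplicates or filtered out, which is the intuition behind both the automatic and the manual symmetry settings above.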
Set Weight Template
During target object recognition, setting a weight template highlights key features of the target object, improving the accuracy of matching results. The weight template is typically used to distinguish target object orientation. The procedures to set a weight template are as follows.
|
A weight template can only be set when Point cloud display settings is set to Display surface point cloud only. |
-
Click Edit template.
-
In the visualization area, press and hold the right mouse button to select part of the features on the target object. The selected part, i.e., the weight template, will be assigned more weight in the matching process.
By holding Shift and the right mouse button together, you can set multiple weighted areas in a single point cloud model.
-
Click Apply to complete setting the weight template.
|
For the configured weight template to take effect in the subsequent matching, go to the “Model Settings” parameter of the “3D Matching” Step, and select the model with properly set weight template. Then, go to “Pose Filtering” and enable Consider Weight in Result Verification. The “Consider Weight in Result Verification” parameter will appear after the “Parameter Tuning Level” is set to Expert. |
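To illustrate why weighting helps, the sketch below shows a hypothetical weighted residual score: residuals on points inside the weight template contribute more, so candidate orientations that misalign the key features score noticeably worse. This demonstrates the principle only; it is not the software's actual scoring function.

```python
import numpy as np

def weighted_match_score(distances, weights):
    """Score a match as a weighted average of per-point residual distances.
    Points inside the weight template carry larger weights, so errors on
    key features penalize the score more (lower is better)."""
    distances = np.asarray(distances, float)
    weights = np.asarray(weights, float)
    return float((distances * weights).sum() / weights.sum())

# Same residuals, but weighting the second (key-feature) point shifts the score.
d = [1.0, 3.0]
print(weighted_match_score(d, [1.0, 1.0]))  # 2.0 (unweighted average)
print(weighted_match_score(d, [1.0, 3.0]))  # 2.5 (key-feature error counts more)
```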
Now point cloud model editing is complete. Click Next to set trajectories for the point cloud model.
Set Trajectory
Create Trajectory
The tool provides two ways to create trajectories: Manual creation and Automatic creation.
In this workflow, Manual creation is recommended for creating trajectories on the point cloud model.
Create Trajectory Manually
In the "Set trajectory" process, click Manual creation to enter the "Manual creation" interface. The detailed procedure is as follows.
-
Pick trajectory points.
Hold Shift, and right-click on the target object to pick trajectory points. The tool automatically connects picked points into a trajectory.
-
Adjust trajectory points.
Created trajectory points are displayed in the list on the right side of the visualization area. If trajectory points do not meet requirements, you can adjust them as follows.
| Operation | Description |
|---|---|
| Adjust trajectory point position and orientation | Select the trajectory point, and adjust related values in "Trajectory point settings" to change its position and orientation. |
| Adjust trajectory point order | Drag trajectory points in the list to reorder them. |
| Add trajectory point | Click Create. The tool adds a new trajectory point after the last trajectory point. |
| Align trajectory points | Select at least three trajectory points in the list, and click Align. The Z-axis of the selected points will be perpendicular to the fitted plane, and the X-axis will point to the next trajectory point. |
| Interpolate trajectory points | If picked trajectory points are unevenly distributed, you can interpolate points for a more uniform distribution. Select two trajectory points, set Maximum distance, and click Interpolate. When the distance between the two points exceeds this value, the tool automatically interpolates trajectory points between them and replaces other points between the two selected points. |
-
Save trajectory.
After trajectory creation is complete, click Save and apply to save the trajectory.
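The Interpolate operation described above can be pictured as evenly spaced linear interpolation between two picked points whenever their distance exceeds Maximum distance. A minimal sketch under that assumption (the function name is hypothetical):

```python
import numpy as np

def interpolate_segment(p0, p1, max_distance):
    """Insert evenly spaced points between p0 and p1 so that no gap exceeds
    max_distance; returns only the intermediate points (endpoints excluded)."""
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    dist = np.linalg.norm(p1 - p0)
    if dist <= max_distance:
        return []  # points are already close enough; nothing to insert
    n_segments = int(np.ceil(dist / max_distance))
    return [p0 + (p1 - p0) * i / n_segments for i in range(1, n_segments)]

# A 100 mm gap with a 30 mm maximum distance needs 3 intermediate points
# (at 25, 50, and 75 mm), keeping every gap at 25 mm.
pts = interpolate_segment([0, 0, 0], [100, 0, 0], max_distance=30)
print(len(pts))  # 3
```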
Create Trajectory Automatically
In the "Set trajectory" process, click Automatic creation to enter the "Automatic creation" interface. The detailed procedure is as follows.
-
Set target area.
Click in sequence, and set the target area in the visualization area by boxing the points where trajectories should be generated.
-
Generate trajectory automatically.
Click Generate trajectory, and the tool will automatically generate trajectories.
-
Adjust trajectory points.
The trajectory list shows the auto-generated trajectories. If needed, you can simplify and smooth trajectories by adjusting the following parameters.
-
Trajectory simplification
You can simplify trajectories by adjusting the following parameters. This reduces the number of trajectory points and simplifies trajectory shape while preserving overall geometry as much as possible. It is suitable for scenarios that require lower trajectory complexity, such as reducing subsequent processing time and improving robot motion efficiency.
| Parameter | Description |
|---|---|
| Maximum deviation | The maximum allowed deviation when simplifying trajectories. The larger the deviation, the fewer points remain, but the trajectory shape may be distorted. |
| Mandatory retention spacing | If the distance between two points in the original trajectory is greater than this value, both points are retained. |
-
Trajectory smoothing
You can smooth trajectories by adjusting the following parameters. Smoothing trajectory points can reduce noise impact and generate smoother trajectories. It is suitable for scenarios with noisy trajectory points, improving smoothness and continuity.
| Parameter | Description |
|---|---|
| Gaussian sigma | Controls smoothing strength. The larger the value, the smoother the trajectory, but details may be lost. |
| Smoothing radius | Defines the window size for smoothing, determining the range of neighboring points involved in the smoothing calculation. |
| Distance threshold | Determines whether neighboring points around the point to be smoothed participate in the smoothing calculation. If the distance from a neighboring point to the point being smoothed exceeds this value, that neighboring point is excluded. The default value is recommended. |
-
Save trajectory.
After trajectory creation is complete, click Save and apply to save trajectories.
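The two adjustments above can be sketched in Python: simplification as a Douglas-Peucker-style reduction bounded by Maximum deviation, and smoothing as a truncated Gaussian filter parameterized by sigma and radius. This is an illustrative approximation of the behavior, not the tool's implementation:

```python
import numpy as np

def simplify(points, max_deviation):
    """Douglas-Peucker-style simplification: recursively keep the point that
    deviates most from the chord between kept endpoints, until every dropped
    point lies within max_deviation of the simplified trajectory."""
    points = np.asarray(points, float)
    if len(points) < 3:
        return points
    start, end = points[0], points[-1]
    chord = end - start
    chord_len = np.linalg.norm(chord)
    # Perpendicular distance from each point to the start-end chord.
    if chord_len == 0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        dists = np.linalg.norm(np.cross(points - start, chord), axis=1) / chord_len
    idx = int(np.argmax(dists))
    if dists[idx] <= max_deviation:
        return np.array([start, end])  # all points close enough to the chord
    left = simplify(points[:idx + 1], max_deviation)
    right = simplify(points[idx:], max_deviation)
    return np.vstack([left[:-1], right])  # merge without duplicating the split point

def gaussian_smooth(points, sigma, radius):
    """Smooth each coordinate with a truncated Gaussian kernel spanning a
    window of 2*radius+1 points, renormalized near the trajectory ends."""
    points = np.asarray(points, float)
    offsets = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (offsets / sigma) ** 2)
    kernel /= kernel.sum()
    smoothed = points.copy()
    for i in range(len(points)):
        window = offsets + i
        valid = (window >= 0) & (window < len(points))
        w = kernel[valid] / kernel[valid].sum()
        smoothed[i] = (points[window[valid]] * w[:, None]).sum(axis=0)
    return smoothed

# Collinear points collapse to their two endpoints under simplification.
line = [[x, 0.0, 0.0] for x in range(10)]
print(len(simplify(line, max_deviation=0.1)))  # 2
```

A larger `max_deviation` removes more points at the cost of shape fidelity, and a larger `sigma` smooths more aggressively, mirroring the trade-offs described for the Maximum deviation and Gaussian sigma parameters.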
Adjust Trajectory
After trajectories are created, you can further adjust them.
-
Adjust trajectory line
After creating a trajectory, if you need to offset the trajectory by a certain distance along the Z-axis to better meet practical operation requirements, select a trajectory line in the trajectory list and set Z-axis offset distance to offset that trajectory line along the Z-axis.
-
Adjust trajectory points individually
Select a trajectory point in the trajectory list, and then adjust related values in the parameter settings area to change its position and orientation.
|
For more trajectory adjustment operations, refer to Adjust Trajectory. |
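The Z-axis offset can be pictured as translating each waypoint along its own Z-axis rather than the world Z-axis. A minimal sketch, assuming waypoints are represented as 4×4 homogeneous transforms (the helper name is illustrative):

```python
import numpy as np

def offset_along_z(poses, distance):
    """Translate each waypoint along its own Z-axis by `distance`.
    Each pose is a 4x4 homogeneous transform; its local Z direction is the
    third column of the rotation part."""
    shifted = []
    for pose in poses:
        t = np.array(pose, float)          # copy so the input stays unchanged
        t[:3, 3] += t[:3, 2] * distance    # move along the local Z direction
        shifted.append(t)
    return shifted

# With an identity orientation, a 5 mm offset moves the point straight up in Z.
p = np.eye(4)
print(offset_along_z([p], 5.0)[0][:3, 3])  # [0. 0. 5.]
```

Because the offset follows each point's own orientation, a trajectory on a tilted surface shifts normal to that surface rather than vertically.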
Preview End Tool
After trajectories are created, you can preview the positional relationship between the end tool and trajectories as follows.
-
Ensure the Mech-Viz project is in the current solution.
To ensure target object editor can obtain end tool information from Mech-Viz, refer to Export Project to Solution and move the Mech-Viz project to the current solution.
-
Add an end tool.
In Mech-Viz, add an end tool and set TCP.
-
Preview and enable tool.
After an end tool is added, the tool information is automatically updated to the tool list in the target object editor. According to actual needs, select a tool in the tool list to preview, in the visualization area, the positional relationship between the trajectory and the tool during actual trajectory operation (as shown below), or check the end tool to be used for actual trajectory operation.
If you modified the tool in Mech-Viz, save the changes in Mech-Viz to update the tool list in the target object editor. In addition, the corresponding end tool must be enabled for the trajectory in the target object editor for a successful simulation in Mech-Viz.
Set Collision Model (Optional)
The collision model is a 3D virtual object used in collision detection for path planning. You can configure the following settings on the collision model according to the actual situation.
Set Collision Model
The current workflow supports two modes for generating the collision model. Select the appropriate mode based on actual requirements; after selection, the tool automatically generates point cloud cubes for collision detection.
| Generating mode | Description | Operation |
|---|---|---|
| Use STL model to generate point cloud cube | Low model accuracy, fast collision detection. | |
| Use point cloud model to generate point cloud cube | High model accuracy, slow collision detection. | After selecting this mode, the tool automatically generates a collision model from the current target object’s point cloud model. You can use the "Display collision model" feature to preview the generated collision model. |
Configure Symmetry of Held Target Object
Rotational symmetry is the property of a target object that allows it to coincide with itself after rotating a certain angle around its axis of symmetry. When the “Waypoint type” is “Target object pose”, configuring the rotational symmetry prevents the robot’s tool from making unnecessary rotations while handling the target object. This increases the success rate of path planning and reduces the planning time, allowing the robot to move more smoothly and swiftly.
Select the symmetry axis by referring to Rotational Symmetry of Target Objects, and then set the Order of symmetry and Angle range.
Now collision model setup is complete. Click Save to save the target object to Solution folder\resource\workobject_library, and then use it in subsequent 3D matching Steps.