Target Object Selection and Recognition
After point cloud preprocessing, select one or more objects from the target object editor for recognition.
The visualization window of the “Target object selection and recognition” process displays the object center points. If you want to view the pick points, click Next to view them in the visualization window of the “General settings” process.
If an external service is used to trigger the Mech-Vision project, it is recommended to close the “3D Target Object Recognition” tool before triggering the project. If the project is triggered while the tool is open, the visualization window and the recognition result below it will not be updated when you switch visualization options. In this case, close the “3D Target Object Recognition” tool and enable Debug Output. Then reopen the tool, switch visualization options, and view the updated recognition result and visualized output.
Select Target Object
Follow the tips below to update the target objects from the target object editor to the “3D Target Object Recognition” tool, and then select the target object to be recognized according to your actual needs.
- If there are no target objects in the target object editor, select the appropriate operation workflow to create a target object based on your actual situation. After the target object is configured, click Update target object to load the target object created in the target object editor into the “3D Target Object Recognition” tool.
- If the target object editor already contains configured target objects, simply click Update target object to load them into the “3D Target Object Recognition” tool.
Use Deep Learning (Optional)
In practical projects, the point cloud of a target object made of highly reflective material may be incomplete, or the point cloud quality may be poor if the camera is mounted far from the object. In such cases, enabling Assist recognition with deep learning is a good choice for recognizing objects with the help of deep learning.
- Import a deep learning model package. Click Model package management tool to import a deep learning model package. For detailed instructions, see Import the Deep Learning Model Package.
- Select the deep learning model package. After the model package is imported, you can select it from the drop-down menu below the button.
- Set the ROI (2D). Click Set ROI, set the ROI in the pop-up window, and enter the ROI name for deep learning inference.
- Configure inference. Click Configure inference and set a confidence threshold in the pop-up window. Results with a confidence level above this threshold will be retained during deep learning–assisted recognition.
- Set the font size. This parameter sets the font size of the text displayed in the deep learning result on the left. Set it according to the actual requirement.
- Set the dilation parameter (optional). This parameter enlarges the mask used by the deep learning algorithm. If the mask is smaller than the target object, the extracted point cloud will be incomplete, especially at the edges. It is therefore recommended to enable Dilation to expand the mask and avoid missing data in the extracted point cloud. After enabling Dilation, set the Kernel size according to the actual requirement; the larger the kernel size, the stronger the dilation effect (see the sketch after this list).
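The options above are all configured in the tool’s interface. Purely as a conceptual sketch, and not the tool’s actual implementation, the Python snippet below illustrates how a 2D ROI crop, a confidence threshold, and mask dilation with a configurable kernel size typically interact; the image, the detection results, and all parameter values are hypothetical stand-ins.

```python
import cv2
import numpy as np

# Placeholder for a real 2D image from the camera.
image = np.zeros((1200, 1600, 3), dtype=np.uint8)

# 1. Crop the image to the 2D ROI so that inference only sees the relevant region.
x, y, w, h = 400, 200, 800, 600                    # example ROI in pixels
roi_image = image[y:y + h, x:x + w]

# 2. Suppose the model returns one mask and one confidence value per detected
#    object. These detections are hard-coded stand-ins for real inference output.
detections = [
    {"confidence": 0.92, "mask": np.zeros((h, w), dtype=np.uint8)},
    {"confidence": 0.41, "mask": np.zeros((h, w), dtype=np.uint8)},
]

# Keep only the results whose confidence is at or above the configured threshold.
confidence_threshold = 0.7
kept = [d for d in detections if d["confidence"] >= confidence_threshold]

# 3. Dilate each remaining mask. A larger kernel produces a stronger dilation,
#    which helps avoid clipping edge points when the mask is slightly smaller
#    than the target object.
kernel_size = 5                                    # plays the role of the Kernel size parameter
kernel = np.ones((kernel_size, kernel_size), dtype=np.uint8)
for d in kept:
    d["mask"] = cv2.dilate(d["mask"], kernel, iterations=1)

# The dilated masks would then be used to extract the corresponding points from
# the scene point cloud before matching.
```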
Recognize Target Object
- For more information about the parameters in the Basic mode, refer to the parameter description in Basic Tuning Level of the “3D Matching” Step.
- For more information about the parameters in the Advanced mode, refer to the parameter description in Advanced Tuning Level of the “3D Matching” Step. A conceptual sketch of this kind of matching is provided after this list.
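The “3D Matching” Step matches the point cloud model of the target object against the scene point cloud. Purely for illustration, and not as the Step’s implementation, the sketch below uses the open-source Open3D library to show the general idea behind such matching: a coarse, feature-based alignment followed by ICP refinement. The file names, voxel size, and thresholds are assumptions that would need to be adapted to real data.

```python
import open3d as o3d

# Hypothetical inputs: the target object's point cloud model and the
# preprocessed scene point cloud.
model = o3d.io.read_point_cloud("target_object_model.ply")
scene = o3d.io.read_point_cloud("preprocessed_scene.ply")

voxel = 2.0  # downsampling voxel size in the cloud's units (e.g. mm); tune to your data

def preprocess(pcd):
    """Downsample, estimate normals, and compute FPFH features for coarse matching."""
    down = pcd.voxel_down_sample(voxel)
    down.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
    fpfh = o3d.pipelines.registration.compute_fpfh_feature(
        down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
    return down, fpfh

model_down, model_fpfh = preprocess(model)
scene_down, scene_fpfh = preprocess(scene)

# Coarse matching: feature-based RANSAC registration gives a rough object pose.
coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
    model_down, scene_down, model_fpfh, scene_fpfh,
    mutual_filter=True,
    max_correspondence_distance=voxel * 1.5,
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
    ransac_n=3,
    checkers=[o3d.pipelines.registration.CorrespondenceCheckerBasedOnDistance(voxel * 1.5)],
    criteria=o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

# Fine matching: point-to-plane ICP refines the coarse pose.
fine = o3d.pipelines.registration.registration_icp(
    model_down, scene_down, voxel * 0.8, coarse.transformation,
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("Estimated object pose (model-to-scene transform):")
print(fine.transformation)
```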
View Running Result
After setting the above parameters, click Run Step or Run project to view the running result.
After target object recognition, click Next to enter the “General settings” process.