Target Object Selection and Recognition

After point cloud preprocessing, select one or more objects from the target object editor for recognition.

The visualization window of the “Target object selection and recognition” process displays the object center points. To view the pick points, click Next and check them in the visualization window of the “General settings” process.

If an external service is used to trigger the Mech-Vision project to run, it is recommended to close the “3D Target Object Recognition” tool before triggering the project.

If a project is triggered while the “3D Target Object Recognition” tool is open, the content in the visualization window and the recognition result below it will not be updated when you switch visualization options. In this case, close the “3D Target Object Recognition” tool and enable Debug Output. Then, reopen the “3D Target Object Recognition” tool to switch visualization options and view the updated recognition result and visualized output.

Select Target Object

Follow these tips to update the target objects from the target object editor to the “3D Target Object Recognition” tool, and then select the target object to recognize according to your actual needs.

  • If there are no target objects in the target object editor, select the appropriate operation workflow to create a target object according to the actual situation. After configuring the target object, click Update target object to update the target object created in the target object editor to the “3D Target Object Recognition” tool.

  • If there are configured target objects in the target object editor, you can simply click Update target object to update the target object to the “3D Target Object Recognition” tool.

Use Deep Learning (Optional)

In practical projects, the point cloud of the target object may be incomplete when the object is made of highly reflective material, or of poor quality when the camera is mounted too far away. In such cases, enabling Assist recognition with deep learning is a good choice, as it performs object recognition with the help of deep learning.

  • Assist recognition with deep learning can only work for instance segmentation and object detection.

  • When configuring model efficiency for instance segmentation and object detection models, sending multiple images to the neural network at once is not supported. In other words, the “batch size” can only be set to 1.

  1. Import a deep learning model package.

    Click Model package management tool to import a deep learning model package. For detailed instructions, see Import the Deep Learning Model Package.

  2. Select the deep learning model package.

    After the model package is imported, you can select it in the drop-down menu below the button.

  3. Set the ROI (2D).

    Click Set ROI, set the ROI in the pop-up window, and enter the ROI name for deep learning inference.

  4. Configure inference.

    Click Configure inference and set a confidence threshold in the pop-up window. The results with a confidence level above this threshold will be retained during deep learning–assisted recognition.

  5. Set the font size.

This parameter sets the font size of the text displayed in the deep learning result on the left. Set it according to the actual requirement.

  6. Set the dilation parameter (optional).

This parameter is used to increase the mask area for the deep learning algorithm. When the mask is smaller than the target object, the extracted point cloud will have defects, especially in the edge point cloud. Therefore, it is recommended to enable Dilation to expand the mask and avoid missing data in the extracted point cloud.

    After enabling Dilation, set the Kernel size according to the actual requirement. The larger the kernel size, the stronger the dilation effect.
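The dilation step works like standard morphological dilation of a binary mask. The sketch below is illustrative only (the tool's internal implementation is not exposed); it shows why a larger kernel size produces a stronger dilation effect:

```python
def dilate_mask(mask, kernel_size):
    """Dilate a binary mask (nested lists of 0/1) with a square kernel.

    A pixel becomes 1 if any pixel inside the kernel neighborhood is 1,
    so a larger kernel size gives a stronger dilation effect.
    """
    h, w = len(mask), len(mask[0])
    r = kernel_size // 2  # neighborhood radius
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        out[y][x] = 1
    return out

mask = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
print(dilate_mask(mask, 3))  # the single mask pixel grows into a 3x3 block
```

A point cloud extracted with the dilated mask then covers the object's edges even when the original mask undershoots the object boundary.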

Recognize Target Object

Basic Mode

Matching mode

Auto-set matching mode

Description: Once this option is enabled, the “Coarse matching mode” and “Fine matching mode” will be automatically set.

Default setting: Enabled

Coarse/Fine matching mode

Description: The two parameters are used to set the matching mode. You only need to set them when Auto-set matching mode is not enabled.

Value list: Surface matching, Edge matching

  • Surface matching: Use the object’s surface point cloud for point cloud model matching.

  • Edge matching: Use the object’s edge point cloud for point cloud model matching.

Default value: Surface matching

Tuning recommendation: Consider the target object's features and the quality of the obtained point cloud when adjusting this parameter. When the object's surface has obvious recognizable features (such as crankshafts and rotors), surface matching is recommended, and you should create a point cloud model that represents the surface features of the object. When the object is relatively flat and shows clear, regular edge features under the camera (such as panels, track shoes, robot links, and brake discs), edge matching is recommended, and you should create a point cloud model that represents the edge features of the object. In addition, if the quality of the object's point cloud is mediocre, surface matching is recommended.

Execution method

Performance mode

Description: This parameter is used to set the tradeoff between accuracy and speed of matching. The higher the accuracy, the longer the time consumed.

Value list: High speed, Standard, and High accuracy

Default value: Standard

Confidence settings

Result verification degree

Description: This parameter is used to select the degree of strictness applied when verifying the matching results.

Value list: Low, Standard, High, and Ultra-high

Default value: Standard

Tuning recommendation: In general, Standard is recommended. When it is difficult to distinguish the point cloud model from the scene point cloud, a higher result verification degree can be selected.

Confidence threshold

Description: If the confidence of the matching result is above the threshold, the matching result is valid. The higher the confidence value is, the more accurate the matching result is.

Default value: 0.3000

Tuning recommendation: It is recommended to set this parameter to the default value and check the running result first. If false recognition occurs, it is recommended to increase this parameter; if a false negative occurs, it is recommended to decrease this parameter.

Output

Max outputs

Description: This parameter specifies the maximum number of target objects output for successful matches. The larger the value, the longer the Step execution time.

Default value: 10

Tuning recommendation: It is recommended to appropriately set the maximum number of output results. Do not set this value too large.

The actual number of results output from 3D matching does not necessarily equal the set Max outputs. For example, if Max outputs is set to 5 but 3D matching produces only 3 recognition results, the final number of output results is 3.
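The interplay between the Confidence threshold and Max outputs can be sketched as follows. This is a simplified illustration, not the tool's actual code; the `select_results` helper and the dict fields are hypothetical:

```python
def select_results(matches, confidence_threshold=0.3, max_outputs=10):
    """Keep matches whose confidence is above the threshold, then return
    at most max_outputs of them, highest confidence first.

    If fewer matches pass the threshold than max_outputs, all of them
    are returned -- the actual output count can be below the setting.
    """
    valid = [m for m in matches if m["confidence"] > confidence_threshold]
    valid.sort(key=lambda m: m["confidence"], reverse=True)
    return valid[:max_outputs]

matches = [{"id": i, "confidence": c}
           for i, c in enumerate([0.9, 0.25, 0.6, 0.4])]
print([m["id"] for m in select_results(matches, 0.3, 5)])  # → [0, 2, 3]
```

Here one match falls below the 0.3 threshold, so only 3 results are output even though Max outputs is 5.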

Advanced Mode

Matching mode

Auto-set matching mode

Description: Once this option is enabled, the “Coarse matching mode” and “Fine matching mode” will be automatically set.

Default setting: Enabled

Coarse/Fine matching mode

Description: The two parameters are used to set the matching mode. You only need to set them when Auto-set matching mode is not enabled.

Value list: Surface matching, Edge matching

  • Surface matching: Use the object’s surface point cloud for point cloud model matching.

  • Edge matching: Use the object’s edge point cloud for point cloud model matching.

Default value: Surface matching

Tuning recommendation: Consider the target object's features and the quality of the obtained point cloud when adjusting this parameter. When the object's surface has obvious recognizable features (such as crankshafts and rotors), surface matching is recommended, and you should create a point cloud model that represents the surface features of the object. When the object is relatively flat and shows clear, regular edge features under the camera (such as panels, track shoes, robot links, and brake discs), edge matching is recommended, and you should create a point cloud model that represents the edge features of the object. In addition, if the quality of the object's point cloud is mediocre, surface matching is recommended.

Execution method

Performance mode

Description: This parameter is used to set the tradeoff between accuracy and speed of matching. The higher the accuracy, the longer the time consumed.

Value list: High speed, Standard, and High accuracy

Default value: Standard

Coarse matching settings

Performance mode

Description: This parameter is used to set the tradeoff between accuracy and speed of matching. The higher the accuracy, the longer the time consumed.

Value list: High speed, Standard, High accuracy, Custom

Default value: Standard

Expected point count of model

Description: This parameter is used to specify the expected number of points in the point cloud model. Set this parameter when Performance mode is Custom.

Default value: 300

Fine matching settings

Performance mode

Description: This parameter is used to set the tradeoff between accuracy and speed of matching. The higher the accuracy, the longer the time consumed.

Value list: High speed, Standard, High accuracy, Extra high accuracy, and Custom

Default value: Standard

Sampling interval

Description: This parameter sets the downsampling interval for the point cloud used in fine matching. The larger the value, the fewer points in the sampled point cloud and the sparser the point cloud, which reduces matching accuracy; the smaller the value, the longer the running time.

Default value: 5.000 mm
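The effect of the sampling interval can be illustrated with a voxel-grid downsampling sketch. This is an assumption about the general technique, not the tool's documented sampling method:

```python
def voxel_downsample(points, sampling_interval):
    """Voxel-grid downsampling: keep one point per cubic cell whose edge
    length equals the sampling interval, so a larger interval yields a
    sparser point cloud."""
    cells = {}
    for x, y, z in points:
        key = (int(x // sampling_interval),
               int(y // sampling_interval),
               int(z // sampling_interval))
        cells.setdefault(key, (x, y, z))  # first point wins in each cell
    return list(cells.values())

points = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0), (7.0, 0.0, 0.0)]
print(voxel_downsample(points, 5.0))  # two of the three points remain
```

With a 5 mm interval, the first two points fall into the same cell and collapse into one, while the third point survives in its own cell.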

Max number of iterations

Description: This parameter sets the maximum number of iterations for fine matching. The larger the value, the higher the matching accuracy and the slower the processing speed.

Default value: 40

Standard deviation update step number

Description: This parameter is used to fine-tune the standard deviation.

Default value: 3

Deviation correction capacity

Description: This parameter is used to set the intensity of the deviation correction to the matching result from 3D Coarse Matching. The greater the deviation correction capacity is, the more likely the coarsely matched poses can be corrected to the accurately matched poses. Note that an excessive deviation correction capability may lead to a loss of matching accuracy.

Value list: Small, Medium, and Large

Default value: Small

Extra fine matching

Enable extra fine matching

Description: Once enabled, the final matching accuracy may be improved, but the running time will be slightly increased. Enable this option according to the actual situation.

Default value: Disabled

Pose filtering

Use distance-based NMS

Description: After this option is enabled, candidate poses whose distances to the selected poses are less than one-tenth of the object diameter will be filtered out.

Default setting: Enabled
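Distance-based NMS of this kind can be sketched as follows. The one-tenth-of-diameter rule comes from the description above, while the greedy, confidence-ordered selection strategy is an assumption for illustration:

```python
import math

def distance_nms(poses, object_diameter):
    """Greedy distance-based NMS over candidate poses.

    Poses are (x, y, z, confidence) tuples. Candidates are visited in
    descending confidence order; a candidate closer than one-tenth of
    the object diameter to an already selected pose is filtered out.
    """
    min_dist = object_diameter / 10.0
    selected = []
    for pose in sorted(poses, key=lambda p: p[3], reverse=True):
        if all(math.dist(pose[:3], kept[:3]) >= min_dist for kept in selected):
            selected.append(pose)
    return selected

poses = [(0, 0, 0, 0.9), (2, 0, 0, 0.8), (50, 0, 0, 0.7)]
# With a 100 mm diameter the minimum distance is 10 mm, so the second
# pose (2 mm from the first) is filtered out.
print(distance_nms(poses, 100.0))
```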

Auto-set max model rotation

Description: Once this parameter is enabled, the Max model rotation angle will be automatically set. This feature is mainly used for filtering the poses that are wrongly matched with the front or back sides of the target object.

Default setting: Enabled

Max model rotation angle

Description: When the point cloud model matches with the scene point cloud, the poses will be filtered by the point cloud model’s rotation angle about its X-axis or Y-axis. When the model’s rotation angle exceeds the Max model rotation angle, the pose will be filtered out.

Default value: 135.00°
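One common way to measure such a rotation is the angle between the rotated model Z-axis and the original Z-axis. The sketch below uses that interpretation, which is an assumption since the exact computation is not documented here:

```python
import math

def tilt_angle_deg(rotation):
    """Angle (degrees) between the rotated model Z-axis and the original
    Z-axis, i.e. the combined rotation about the X- and Y-axes.

    `rotation` is a 3x3 rotation matrix as nested lists; its third
    column is the model Z-axis after rotation, so the angle follows
    from the dot product with (0, 0, 1), which is simply rotation[2][2].
    """
    cos_angle = max(-1.0, min(1.0, rotation[2][2]))
    return math.degrees(math.acos(cos_angle))

def within_max_rotation(rotation, max_angle_deg=135.0):
    """A pose passes the filter only if its tilt does not exceed the limit."""
    return tilt_angle_deg(rotation) <= max_angle_deg

# A 180° flip about X turns the model upside down (tilt = 180°),
# exceeding the 135° limit, so the pose would be filtered out.
flip_x = [[1, 0, 0], [0, -1, 0], [0, 0, -1]]
print(within_max_rotation(flip_x))  # → False
```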

Augment long thin objects

Enable augmentation for long thin objects

Description: For long, thin target objects, the matched pose and point cloud are prone to misalignment along the object's long axis, with the ends failing to align accurately. Enabling this feature can improve the matching accuracy for long, thin target objects.

Default value: Disabled

Avoid false matches

Adjust poses

Description: When Adjust X-axis orientation is selected, the Z-axes of the poses obtained by coarse matching will be fixed, and the X-axes will be rotated to the specified direction. When Filter out unlikely poses is selected, the unlikely poses calculated in the target object editor will be used to assist in matching, thus avoiding false matches.

Value list: None, Adjust X-axis orientation, Filter out unlikely poses

Default setting: None

Tuning recommendation: If you need to use the Filter out unlikely poses parameter, enable the Configure point cloud model function in the Point cloud model configuration area in the target object editor, and then select and configure Auto-calculate unlikely poses. After that, click Update target object in the “3D Target Object Recognition” tool.

Confidence settings

Confidence strategy

Description: This parameter determines how the confidence settings are configured.

Value list: Manual, Auto

  • Auto: Set the joint scoring strategy automatically.

  • Manual: Set the joint scoring strategy manually.

Tuning recommendation: Set this parameter to Auto. If the recognition result under “Auto” cannot meet the on-site requirements, then set this parameter to Manual and adjust relevant parameters. After selecting Manual, you can set the Result verification degree and Confidence threshold for surface matching and edge matching according to the actual situation.

Result verification degree

Description: This parameter is used to select the degree of strictness applied when verifying the matching results.

Value list: Low, Standard, High, Ultra-high, Custom

Default value: Standard

Tuning recommendation: In general, “Standard” is recommended. When it is difficult to distinguish the point cloud model from the scene point cloud, a higher result verification degree can be selected.

Search radius

Description: When the distance between a point in the scene point cloud and its counterpart in the point cloud model is less than this value, the two points are considered to coincide. The more coinciding points, the higher the verification score of the matching result. You need to set this parameter when Result verification degree is set to Custom.

Default value: 10.000 mm
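The role of the search radius in result verification can be sketched as a coincidence count. This is a simplified stand-in for the tool's scoring, which is not documented in detail:

```python
import math

def verification_score(model_points, scene_points, search_radius):
    """Fraction of (transformed) model points that have a scene point
    within the search radius -- a simplified stand-in for result
    verification: the more coinciding points, the higher the score
    assigned to the matching result."""
    coinciding = 0
    for mp in model_points:
        if any(math.dist(mp, sp) < search_radius for sp in scene_points):
            coinciding += 1
    return coinciding / len(model_points)

model = [(0, 0, 0), (5, 0, 0), (100, 0, 0)]
scene = [(1, 0, 0), (6, 0, 0)]
# Two of the three model points find a scene point within 10 mm.
print(verification_score(model, scene, 10.0))
```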

Sampling interval

Description: This parameter is used for downsampling the model and scene point clouds (only when verifying the matching results). The larger the value, the fewer points in the sampled point cloud. You need to set this parameter when Result verification degree is set to Custom.

Default value: 5.000 mm

Confidence threshold

Description: If the confidence of the matching result is above the threshold, the matching result is valid. The higher the confidence value is, the more accurate the matching result is.

Default value: 0.3000

Tuning recommendation: It is recommended to set this parameter to the default value and check the running result first. If false recognition occurs, it is recommended to increase this parameter; if a false negative occurs, it is recommended to decrease this parameter.

Consider normal deviation in surface matching

Description: When verifying the surface matching results, consider the angle deviations between the normals of the points in the scene point cloud and their counterparts in the point cloud model. Once this parameter is enabled, the number of output matching results will be fewer, but the accuracy of the matching results will be enhanced.

Default setting: Disabled

Output

Max outputs

Description: This parameter specifies the maximum number of target objects output for successful matches. The larger the value, the longer the Step execution time.

Default value: 10

Tuning recommendation: It is recommended to appropriately set the maximum number of output results. Do not set this value too large.

The actual number of results output from 3D matching does not necessarily equal the set Max outputs. For example, if Max outputs is set to 5 but 3D matching produces only 3 recognition results, the final number of output results is 3.

Remove Coinciding Poses

Remove poses of coinciding objects

Description: This parameter is used to determine whether to enable the feature of removing coinciding objects.

Default setting: Enabled

Coincidence ratio threshold

Description: This parameter sets the threshold of the coincidence ratio between two objects. If the coincidence ratio of two objects is above this value, the object with the lower pose confidence will be removed. Set this parameter when Remove poses of coinciding objects is enabled.

Default value: 30%
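The removal rule can be sketched as follows, using 1D intervals as a toy stand-in for object regions. The `interval_coincidence` helper is purely illustrative and not how the tool computes the ratio:

```python
def interval_coincidence(a, b):
    """Toy coincidence ratio: overlap of two 1D extents divided by the
    smaller extent (a stand-in for the tool's internal computation)."""
    overlap = max(0, min(a[1], b[1]) - max(a[0], b[0]))
    smaller = min(a[1] - a[0], b[1] - b[0])
    return overlap / smaller if smaller > 0 else 0.0

def remove_coinciding(objects, ratio_threshold=0.30):
    """Keep higher-confidence objects first; drop any object whose
    coincidence ratio with an already kept object exceeds the threshold.

    Each object is (extent, confidence), where extent is a (start, end)
    interval standing in for the object's region in the scene.
    """
    kept = []
    for extent, conf in sorted(objects, key=lambda o: o[1], reverse=True):
        if all(interval_coincidence(extent, k[0]) <= ratio_threshold
               for k in kept):
            kept.append((extent, conf))
    return kept

objects = [((0, 10), 0.9), ((2, 12), 0.5), ((20, 30), 0.8)]
# The second object coincides 80% with the first, so the one with the
# lower confidence (0.5) is removed.
print(remove_coinciding(objects))
```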

Remove Overlapped Poses

Remove poses of overlapped objects

Description: This parameter is used to determine whether to enable the feature of removing overlapped objects.

Default setting: Enabled

Overlap ratio threshold

Description: This parameter sets the threshold of the overlap ratio between one object and other objects. If an object's overlap ratio is above this value, the object is considered overlapped. Set this parameter when Remove poses of overlapped objects is enabled.

Default value: 30%

View running result

After setting the above parameters, click Run Step or Run project to view the running result.

After target object recognition, click Next to enter the “General settings” process.
