3D Target Object Recognition (Neatly Arranged Objects)

This tutorial will show you how to accurately recognize the poses of neatly arranged target objects. Taking the “3D Target Object Recognition (Neatly Arranged Objects)” project as an example, this section explains how to adjust parameters for the 3D Target Object Recognition Step and highlights key considerations for practical application.

(Figure: application scenario and recognition result for neatly arranged objects)

3D Target Object Recognition Tool Application: Neatly Arranged Objects

The following introduces application guidance for the example project and key considerations for practical application.

Application Guide

In Mech-Vision’s solution library, you can find the “3D Target Object Recognition (Neatly Arranged Objects)” solution under the “3D locating” category of “Hands-on examples” and create the solution with a “3D Target Object Recognition (Neatly Arranged Objects)” project. After that, select the 3D Target Object Recognition Step and then click the Config wizard button in the Step Parameters panel to open the “3D Target Object Recognition” tool and learn how to adjust parameters. The workflow includes three processes, i.e., point cloud preprocessing, target object selection and recognition, and general settings.

(Figure: overall process)
  1. Point cloud preprocessing: Use this process to convert the acquired image data to point clouds, set a valid recognition region, detect edge point clouds, and filter out point clouds that do not meet requirements. This process can help improve the recognition accuracy of the subsequent process.

  2. Target object selection and recognition: After creating the target object model and pick points, decide whether to configure the deep learning model package and adjust the parameters for target object recognition according to the visual recognition strategy in use. Ensure that the configured parameters can meet the operational accuracy requirements so that the object recognition solution can recognize target objects stably and accurately.

  3. General settings: Use this process to configure the output ports. You can choose to output data about pick points or object center points according to the needs of subsequent picking tasks.

The following introduces the key parameters to adjust in each process.

Point Cloud Preprocessing

  1. Set the recognition region.

    Set a recognition region (3D ROI). The region should fully cover the target object, with some extra space around the target object in the region.

  2. Adjust parameters.

    In most cases, keep the default values of these parameters. If noise is still prevalent in the scene point cloud, try adjusting the relevant parameters to filter out the noise.

No more parameters need to be adjusted in this example project. You can click the Next button to proceed to the “Target object selection and recognition” process after setting the recognition region.
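Conceptually, the preprocessing in this process crops the point cloud to the 3D ROI and removes sparse noise points. The following is an illustrative NumPy sketch of these two operations, not Mech-Vision's actual implementation; the function names and parameters are assumptions for demonstration only.

```python
import numpy as np

def crop_to_roi(points, roi_min, roi_max):
    """Keep only points inside the axis-aligned 3D ROI (recognition region)."""
    mask = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    return points[mask]

def remove_sparse_outliers(points, radius, min_neighbors):
    """Drop points with too few neighbors within `radius` (treated as noise).

    Uses a brute-force pairwise distance matrix, so it is only suitable
    for small clouds; real systems use spatial indexing instead.
    """
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    neighbor_counts = (dists < radius).sum(axis=1) - 1  # exclude the point itself
    return points[neighbor_counts >= min_neighbors]
```

For example, cropping first discards everything outside the ROI, and the noise filter then removes isolated points that survived the crop.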

Target Object Selection and Recognition

After point cloud preprocessing, you need to create a point cloud model of the target object in the target object editor and then set matching parameters for point cloud model matching.

  1. Create a target object model.

Click the Open target object editor button to open the editor and generate a point cloud model based on common 3D shapes. After that, click the Save button to return to the “3D Target Object Recognition” tool interface, click the Update target object button, select the created target object model, and apply it to recognize the poses of target objects.

    You need to measure the dimensions of the target object in advance. When generating a point cloud model based on common 3D shapes, fill in the Radius and Height of the target object under Set geometric parameters in the pop-up Add Target Object window.
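As a rough illustration of how a point cloud model can be generated from geometric parameters, the sketch below samples the lateral surface of a cylinder from a given radius and height. This is a hypothetical NumPy example with assumed function names and sampling densities, not the editor's actual algorithm.

```python
import numpy as np

def cylinder_model(radius, height, n_circ=36, n_height=10):
    """Sample the lateral surface of a cylinder as an (N, 3) point cloud,
    analogous to generating a model from Radius and Height parameters."""
    thetas = np.linspace(0.0, 2.0 * np.pi, n_circ, endpoint=False)
    zs = np.linspace(0.0, height, n_height)
    t, z = np.meshgrid(thetas, zs)
    return np.stack([radius * np.cos(t).ravel(),
                     radius * np.sin(t).ravel(),
                     z.ravel()], axis=1)
```

Every sampled point lies exactly at the given radius from the cylinder axis, and the z values span 0 to the given height.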
  2. Set parameters related to object recognition.

The following instructions on parameter adjustment are for reference only. Please adjust each parameter according to the on-site situation.
    • Enable Advanced mode on the right side of Recognize target object.

    • Set the matching mode: In the image captured by the camera, the rings present clear, regular edge features, and the scene point cloud is free of interference such as partitions and bins, so both edge matching and surface matching are suitable for this scene. For this project, when the Auto-set matching mode option is enabled, the recognition accuracy may not meet on-site requirements. Therefore, it is recommended to disable this option and manually adjust the relevant parameters.

      Since coarse matching usually allows for some matching errors, set the Coarse matching mode to Edge matching; if the matching accuracy does not meet expectations, such as in cases of angular deviation, set the Fine matching mode to Surface matching to improve the recognition accuracy.

      (Screenshots: “Auto-set matching mode” vs. “Manually set relevant parameters”)

    • Adjust fine matching settings: To improve recognition accuracy, set the Performance mode to High accuracy. Since the point cloud quality is high, the Deviation correction capacity can be set to Small.

    • Set confidence threshold: Set the Confidence threshold so that the uppermost placed objects can be recognized. In this project, when the Confidence strategy is set to Auto, the recognition accuracy can meet on-site requirements. Therefore, simply set this parameter to Auto and set the Confidence threshold, which defaults to 0.3000.

      In the recognition result section at the bottom of the left-side visualization window, select Output result from the first drop-down menu. Target objects whose Surface matching confidence and Edge matching confidence both exceed the set threshold will be retained. Check the recognition result against the actual situation: if there are false recognitions, raise the threshold; if there are false negatives, lower it.
    • Max outputs under “Output”: Set this parameter value to the number of target objects when fully stacked. In this project, the Max outputs is set to 25.
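The confidence filtering and output limit described above can be sketched as follows: keep only matches whose surface and edge confidences both exceed the threshold, then return at most the configured number of results. This is a hypothetical Python snippet; the dictionary keys and the combined-score ranking are illustrative assumptions, not Mech-Vision's actual data structures.

```python
def filter_matches(matches, threshold=0.3, max_outputs=25):
    """Keep matches whose surface AND edge confidences both exceed
    `threshold`, ranked by combined confidence, capped at `max_outputs`."""
    kept = [m for m in matches
            if m["surface_conf"] > threshold and m["edge_conf"] > threshold]
    kept.sort(key=lambda m: m["surface_conf"] + m["edge_conf"], reverse=True)
    return kept[:max_outputs]
```

With the defaults above, a match is dropped as soon as either of its two confidence values falls at or below 0.3, mirroring the "both values must exceed the threshold" rule.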

After completing the above settings, click the Next button to go to the general settings page and configure the output ports.

General Settings

After target object recognition, you can configure auxiliary functions other than visual recognition. Currently only configuring port outputs is supported, which can provide vision results and point clouds for subsequent Steps.

Since the subsequent Steps will process the pick points, select Port(s) related to pick point under Select port. Then, select the Original point cloud acquired by camera option, and the output point cloud data will be used for collision detection in path planning.

If there are other needs on site, configure the relevant output ports according to actual needs.

Now, you have adjusted the relevant parameters. Click the Save button to save the changes.

Key Considerations for Application

In actual applications, understand and consider the following before adding the 3D Target Object Recognition Step to your project and connecting the data ports, so that the poses of target objects can be recognized quickly and accurately.

  • The “3D Target Object Recognition” Step is generally used in conjunction with the Capture Images from Camera Step. The Step is suitable for workpiece loading scenarios. It is capable of recognizing workpieces of various shapes and stacking methods, including separate arrangements, orderly single-layer stacking, orderly multi-layer stacking, and random stacking.

  • The “3D Target Object Recognition” Step is usually followed by a Step for pose adjustment, such as the Adjust Poses V2 Step.

    This example project demonstrates how to accurately recognize the poses of target objects when they are neatly arranged, and therefore omits the pose adjustment process.
