Improve the Recognition Accuracy

Vision recognition errors reflect the recognition accuracy and repeatability of the vision project.

If the vision project uses “3D matching” algorithms for recognition, refer to Improve the Accuracy of 3D Matching to improve the recognition accuracy of the project.

If the vision project uses “deep learning” algorithms for recognition, refer to Improve the Effect of Deep Learning Inference to improve the recognition accuracy of the project.

Improve the Accuracy of 3D Matching

You can improve 3D matching accuracy in the following ways:

  • Ensure the camera point cloud quality.

  • Ensure the accuracy of the point cloud models and pick point settings.

  • Check the settings of the matching algorithm.

Ensure the Camera Point Cloud Quality

If you have checked the camera’s point cloud quality at the “Vision System Hardware Setup” stage, you can skip this section.

If the camera’s point cloud quality has not been checked, refer to Check the Camera Point Cloud Quality.

Ensure the Accuracy of Point Cloud Models and Pick Point Settings

Ensure the Quality of the Point Cloud Model

When generating the point cloud model from images captured by a camera, pay attention to the following requirements:

  • Choose the right model type: When the target object is flat but shows clear and fixed edge characteristics in the images (such as panels, track shoes, connecting rods, brake discs, etc.), it is recommended to use an edge model. When the surface of the target object has many undulations (such as crankshafts, rotors, steel rods, etc.), it is recommended to use a surface model.

  • Remove noise: Noise in the point cloud used to make the model can cause incorrect recognition. Therefore, keep only the point cloud of the target object and remove all other noise (see the denoising sketch after this list).
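
As an illustration of the denoising step, the following minimal sketch removes statistical outliers from a scanned point cloud using the open-source Open3D library. This is not how Mech-Vision performs the operation internally; the file names and filter parameters are assumptions chosen for the example.

```python
import open3d as o3d

pcd = o3d.io.read_point_cloud("scan.ply")  # raw scan containing the target object

# Remove points whose mean distance to their neighbors deviates strongly
# from the cloud-wide average; such stray points are usually sensor noise.
filtered, kept_indices = pcd.remove_statistical_outlier(
    nb_neighbors=20,  # neighbors examined per point (illustrative value)
    std_ratio=2.0,    # smaller value = more aggressive filtering
)

o3d.io.write_point_cloud("scan_denoised.ply", filtered)
print(f"Kept {len(kept_indices)} of {len(pcd.points)} points")
```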

When generating the point cloud model from imported CAD files, set the model’s units correctly. Otherwise, model matching will consistently fail.
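
To make the unit pitfall concrete, here is a minimal sketch that rescales a CAD mesh exported in millimeters to meters, again using Open3D. The file names, and the assumption that the matching pipeline expects meters, are illustrative.

```python
import numpy as np
import open3d as o3d

# Assume the CAD file was exported in millimeters while the matching
# pipeline expects meters; a uniform scale fixes the mismatch.
mesh = o3d.io.read_triangle_mesh("part.stl")

MM_TO_M = 0.001
mesh.scale(MM_TO_M, center=np.zeros(3))  # rescale about the origin

o3d.io.write_triangle_mesh("part_in_meters.stl", mesh)
```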

Ensure Pick Points Are Set Correctly

Mech-Vision allows you to add a pick point to the point cloud model using either the drag-and-drop method or the teaching method.

For scenarios where the accuracy requirement is relatively high, the workpiece orientations are relatively consistent, and the robot’s TCP error is difficult to evaluate, the teaching method is recommended.

When using the teaching method to add pick points, note the following issues:

  • Select the correct Euler angle convention when entering the pick point’s flange pose (see the pose-format sketch after this list).

  • Pay attention to the data length when entering the pick point’s flange pose. The format of the flange pose read from the teach pendant must be consistent with that on the pose editor interface. If the flange pose uses Euler angles, enter six values; if it uses quaternions, enter seven values.

  • For a 7-axis robot, the flange pose displayed on the teach pendant and the flange pose entered in the software must both include, or both exclude, the value of the seventh axis.
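
The following minimal sketch illustrates the first two pitfalls using the open-source SciPy library: the same three angles interpreted under different Euler conventions yield different rotations, and an Euler-angle pose carries six values while a quaternion pose carries seven. The “ZYX”/“XYZ” conventions, the numeric values, and SciPy’s x, y, z, w quaternion order are assumptions for the example and may differ from your robot’s or the pose editor’s conventions.

```python
from scipy.spatial.transform import Rotation as R

angles_deg = [30.0, 45.0, 60.0]  # three made-up Euler angles

# The same three numbers interpreted under two different conventions
# produce different rotations, so the convention must match the robot's.
q_zyx = R.from_euler("ZYX", angles_deg, degrees=True).as_quat()  # x, y, z, w
q_xyz = R.from_euler("XYZ", angles_deg, degrees=True).as_quat()
print("ZYX:", q_zyx)
print("XYZ:", q_xyz)  # differs from q_zyx

# An Euler-angle pose carries 6 values (x, y, z plus 3 angles);
# a quaternion pose carries 7 (x, y, z plus 4 quaternion components).
pose_euler = [500.0, 0.0, 300.0] + angles_deg    # 6 values
pose_quat = [500.0, 0.0, 300.0] + list(q_zyx)    # 7 values
assert len(pose_euler) == 6 and len(pose_quat) == 7
```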

Improve the Effect of Deep Learning Inference

You can improve the effect of deep learning inference in the following ways:

  • Improve the quality of 2D images. 2D images used for deep learning model training should meet the following requirements:

    • Ensure that the images are free from overexposure and underexposure (a simple exposure-screening sketch follows this list).

    • Ensure that the colors in the images closely resemble the real objects, without color distortion.

    • Capture a sufficient quantity of images and ensure diversity in the types of images.

      For more requirements for images, refer to Acquire Image Data for Deep Learning.

  • Iterate the model. After a period of use, you may find that the trained model does not apply well to certain scenarios. You can iterate the model by fine-tuning it to improve its accuracy.

  • Adjust deep learning parameters. By adjusting the deep learning parameters, you can obtain the best inference results. For details, refer to the parameter description for the “Deep Learning Model Package Inference” Step.
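
As a companion to the image-quality requirements above, here is a minimal sketch that screens a training image for over- or underexposure by checking the fraction of pixels at the extremes of the intensity histogram, using OpenCV and NumPy. The file name and the 5% / intensity thresholds are assumptions, not values taken from Mech-Vision.

```python
import cv2
import numpy as np

def exposure_flags(path, clip_fraction=0.05):
    """Flag an image as over-/underexposed if too many pixels sit at
    the extremes of the intensity histogram (thresholds are assumptions)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)
    total = gray.size
    dark = np.count_nonzero(gray <= 5) / total      # near-black pixels
    bright = np.count_nonzero(gray >= 250) / total  # near-white pixels
    return {"underexposed": dark > clip_fraction,
            "overexposed": bright > clip_fraction}

print(exposure_flags("sample.png"))  # hypothetical training image
```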
