3D Robot Guidance

How can I reduce missed recognition?
  1. Check whether sufficient training data has been acquired.

  2. Ensure all objects that need to be recognized are labeled, and avoid any missed labeling.
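
Before retraining, a quick script can surface images that were likely skipped during labeling. The sketch below is a minimal, hypothetical check — the data layout (a dict mapping image names to instance labels) is an assumption for illustration, not the Mech-DLK label format:

```python
# Hypothetical sanity check for missed labeling before training.
# The label structure here is an assumption, not the Mech-DLK format.

def find_suspect_images(labels, min_expected=1):
    """Return image names whose label count is below min_expected,
    which often indicates missed (forgotten) labeling."""
    return sorted(
        name for name, instances in labels.items()
        if len(instances) < min_expected
    )

labels = {
    "img_001.png": ["shaft", "shaft", "shaft"],
    "img_002.png": [],          # no labels at all: almost certainly missed
    "img_003.png": ["shaft"],
}
suspects = find_suspect_images(labels, min_expected=2)
```

Flagged images can then be reviewed and relabeled before the model is trained again.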

How can I reduce missed recognition of a shaft model?
  1. Improve image quality: Adjust the 2D exposure parameters of the camera.

  2. Improve label quality: Increase the number of images used for training, accurately label all shafts that need to be recognized, and avoid any missed labeling.

  3. Train the model again.

How can I use Mech-DLK to train a high-quality Instance Segmentation module for small objects?

To segment small objects in complex scenes, cascade multiple algorithm modules. Acquire and label the required data for each module to ensure that every model in the cascade performs well. For more information, see the crankshaft loading case.
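
The cascading idea can be sketched generically: a first-stage module proposes regions, and a second-stage module runs on each cropped region, where small objects occupy proportionally more pixels. Both stage functions below are placeholders standing in for trained Mech-DLK modules:

```python
# Minimal sketch of cascading two algorithm modules. The stage functions
# are dummies; in practice they would be trained models.

def cascade(image, stage1, stage2):
    """Run stage1 on the full image, then stage2 on each region it returns."""
    results = []
    for region in stage1(image):
        x0, y0, x1, y1 = region
        crop = [row[x0:x1] for row in image[y0:y1]]
        results.append((region, stage2(crop)))
    return results

# Dummy stages: stage1 proposes one region (x0, y0, x1, y1);
# stage2 simply sums the pixel values in the crop.
image = [[0, 1, 1, 0],
         [0, 1, 1, 0],
         [0, 0, 0, 0]]
stage1 = lambda img: [(1, 0, 3, 2)]
stage2 = lambda crop: sum(v for row in crop for v in row)
results = cascade(image, stage1, stage2)
```

Because each stage sees only its own inputs, each module needs its own acquired and labeled data, as noted above.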

How can I train a model to distinguish between the front and back of objects?
  1. Train an Instance Segmentation model: Segment all the objects to be recognized in the images.

  2. Train a Classification model: Label the images of segmented objects as "front" or "back".

If the distinguishing features between the front and back of the objects are clear in the original images and the number of images is sufficient, you can directly use the Instance Segmentation model to create "front" and "back" labels and label the objects.
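
The two-step workflow above can be sketched as follows. Both models are stand-in functions here (a real deployment would call the trained Instance Segmentation and Classification modules), and the brightness-based dummy classifier is purely illustrative:

```python
# Sketch of the two-stage front/back workflow: segment objects, crop each
# instance, then classify the crop. Both models are placeholders.

def classify_sides(image, segment, classify):
    """Segment objects, then classify each crop as 'front' or 'back'."""
    sides = []
    for x0, y0, x1, y1 in segment(image):
        crop = [row[x0:x1] for row in image[y0:y1]]
        sides.append(classify(crop))
    return sides

# Dummy models: crops with a bright average count as "front".
image = [[9, 9, 1, 1],
         [9, 9, 1, 1]]
segment = lambda img: [(0, 0, 2, 2), (2, 0, 4, 2)]

def classify(crop):
    flat = [v for row in crop for v in row]
    return "front" if sum(flat) / len(flat) > 5 else "back"

sides = classify_sides(image, segment, classify)
```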

How can I train a model to distinguish object orientation?
  1. Train an Instance Segmentation model: Segment all the objects to be recognized in the images.

  2. Train an Object Detection model: Label distinct patterns or text features on the objects, and adjust the picking pose of robots based on these features.
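
As an illustration of step 2, once a detection model locates a distinct feature (such as printed text) on an object, the vector from the object center to the feature center gives the in-plane orientation, which can then offset the robot's picking pose. The coordinates and axis convention below are illustrative only:

```python
import math

# Sketch: derive in-plane orientation from a detected feature's position
# relative to the object center. Coordinates are illustrative; real image
# pipelines often have y growing downward, which flips the sign of dy.

def orientation_deg(object_center, feature_center):
    """In-plane rotation of the object, from its center toward the feature."""
    dx = feature_center[0] - object_center[0]
    dy = feature_center[1] - object_center[1]
    return math.degrees(math.atan2(dy, dx))

# Feature detected directly "above" the object center on plain math axes.
angle = orientation_deg((50.0, 50.0), (50.0, 80.0))
```

The resulting angle can be added to the robot's picking pose as a rotation about the tool axis.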

The model cannot distinguish between a single large carton with a seam and two small cartons placed closely together. How can I solve this problem?

In most cases, you can acquire the images that yield poor recognition results and use them to iterate the model and improve its accuracy. However, when the target objects have extremely similar features, it can be difficult to distinguish them accurately, even manually. In such cases, multiple model iterations still may not guarantee accuracy. You can apply the following methods based on the actual application environment:

  • Obscure ambiguous features. For example, use opaque tape to cover the seam of the large cartons to prevent the cartons from being mistakenly recognized as two small cartons.

  • Train models independently. Acquire images of the two types of cartons separately, train two models, and ensure that both types of cartons do not appear together at the production site.

  • Standardize recognition criteria. For example, train the model to recognize all cartons as small cartons regardless of their actual type; however, this may cause the robot to fail to grasp the large cartons. Alternatively, train the model to recognize all cartons as large cartons, but this may result in the robot picking up two small cartons at once.

Is deep learning suitable for bin picking scenarios?

Yes. This is because deep learning can effectively recognize and locate objects in complex environments and adapt to variations in lighting, views, and backgrounds. However, if the training data differs significantly from the actual application scenario, the inference performance of the model may be affected. Additionally, deep learning requires a large amount of data for training, which may be challenging to achieve in small-scale industrial applications.

For scenarios with uniform lighting and fewer objects, template matching or feature-based machine vision methods can be used. These methods require little or no training data.

Regardless of the method used, thorough testing is required during production deployment. As environmental conditions change, it is also necessary to continuously monitor the model performance and iterate or retrain the model to ensure the system remains reliable under actual operating conditions.
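
One simple way to monitor model performance continuously, as described above, is to track recognition success over a rolling window and flag the model for retraining when the rate drops below a threshold. The window size and threshold below are illustrative assumptions, not recommended production values:

```python
from collections import deque

# Sketch of continuous performance monitoring: a rolling success-rate
# window that raises a retraining flag when recognition quality degrades.

class RecognitionMonitor:
    def __init__(self, window=100, min_rate=0.95):
        self.results = deque(maxlen=window)  # oldest results drop off
        self.min_rate = min_rate

    def record(self, success):
        self.results.append(bool(success))

    def needs_retraining(self):
        """True when the rolling success rate falls below the threshold."""
        if not self.results:
            return False
        rate = sum(self.results) / len(self.results)
        return rate < self.min_rate

monitor = RecognitionMonitor(window=10, min_rate=0.8)
for ok in [True] * 9 + [False]:      # 90% success: still acceptable
    monitor.record(ok)
ok_before = monitor.needs_retraining()
for _ in range(3):                   # further misses push the rate to 60%
    monitor.record(False)
needs_now = monitor.needs_retraining()
```

In practice the flag would feed into the data-acquisition loop: collect the failing images, relabel, and iterate the model.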
