Usage Scenarios for Deep Learning

2D Camera + Deep Learning

A 2D camera combined with deep learning can be used in the following scenarios. Different modules are suited to different scenarios.

Fast Positioning

This module is used to correct workpiece orientation.

  • Recognize workpiece orientations in images and rotate the images to a specified orientation.

    quick location 1

Defect Segmentation

This module can be used to detect all types of defects, including surface defects such as stains, bubbles, and scratches, as well as positional defects such as bending, abnormal shape, and absence. It can be applied under difficult conditions such as small defects, complicated backgrounds, and unstable workpiece positions.

  • Detect air bubbles and glue spill defects on lens surfaces.

    defect segmentation 1
  • Detect bending defects of workpieces.

    defect segmentation 2

Classification

This module is used to recognize the fronts and backs of workpieces, workpiece orientations, and defect types, and to determine whether objects are missing or neatly arranged.

  • Recognize whether workpieces are neatly arranged or scattered.

    classification 1
  • Recognize the fronts and backs of workpieces.

    classification 2

Object Detection

This module is used to detect the absence of workpieces at fixed positions, such as missing components on a PCB. It can also be used for object counting.

  • Count all rebars.

    detection 2

Instance Segmentation

This module is used to distinguish objects of a single type or of multiple types and segment their corresponding contours.

  • Segment blocks of various shapes.

    instance segmentation 1
  • Segment scattered and overlapping track links.

    instance segmentation 2
  • Segment cartons closely fitted together.

    instance segmentation 3

3D Camera + Deep Learning

In these scenarios, the information in the point clouds alone cannot achieve accurate recognition and positioning of workpieces. Therefore, 3D matching combined with deep learning is required to recognize and position the workpieces.

Points Missing from Point Cloud

The workpieces in the following figures are used as an example.

  1. 2D image: In the image below, large quantities of reflective workpieces are closely fitted together, but their edges and shape features are clear.

    application scenario 1
  2. Point cloud: As the workpieces are reflective, many points are missing from the point cloud, mostly along the workpiece axis.

    application scenario 2

If points along the workpiece axis are missing from the point cloud, 3D matching with the point cloud may be inaccurate, resulting in significantly deviated picking poses. If the workpieces are closely fitted together, the point cloud may not be segmented correctly, resulting in incorrect matching. In addition, if there are many workpieces, the vision cycle time will be long.

In such scenarios, you can use the Instance Segmentation module to train the corresponding model, and then use the Deep Learning Steps in Mech-Vision to recognize the workpieces. Next, extract the point cloud corresponding to each workpiece mask, and calculate workpiece pose A by matching. Finally, calculate workpiece pose B from the extracted point cloud and use it to correct the X and Y components of pose A.

application scenario 3
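
The pose-correction idea can be sketched in a few lines of Python. This is a minimal illustration under the assumptions of this example, not an actual Mech-Vision Step: pose A is taken as a 4x4 homogeneous transform from matching, and pose B is approximated here by the centroid of the mask-extracted point cloud.

    import numpy as np

    def correct_pose_xy(pose_a: np.ndarray, instance_cloud: np.ndarray) -> np.ndarray:
        """Correct the X and Y components of matched pose A using pose B,
        approximated by the centroid of the mask-extracted point cloud.

        pose_a         : 4x4 homogeneous transform from 3D matching.
        instance_cloud : (N, 3) points extracted with the instance mask.
        """
        pose_b_xy = instance_cloud[:, :2].mean(axis=0)  # X/Y of pose B
        corrected = pose_a.copy()
        corrected[0, 3] = pose_b_xy[0]  # replace X translation
        corrected[1, 3] = pose_b_xy[1]  # replace Y translation
        return corrected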

Key Features Missing from Point Cloud

The workpieces in the following figures are used as an example.

  1. 2D image: In the figure below, the workpieces in the red boxes have their fronts facing up, while the workpieces in the blue boxes have their backs facing up. The arrows point to the key features used to distinguish fronts from backs.

    application scenario 4
  2. Point cloud: The key features used to distinguish the fronts and backs of the workpieces are missing from the point cloud.

    application scenario 5

As the key features used to distinguish workpiece types are very small (and may even be missing from the point cloud), 3D matching may produce incorrect matching results, which leads to wrong classification of the workpieces.

In such scenarios, you can use the Instance Segmentation module to train the corresponding model and set corresponding labels for the different types of workpieces. When this model is used in the Deep Learning Steps in Mech-Vision, the Step not only extracts the mask of each workpiece but also outputs its label.

application scenario 6
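
As a rough illustration of how the per-instance labels can be consumed downstream, the sketch below groups segmentation results by label. The Instance class and its field names are assumptions made for this example, not the actual Step output format.

    from dataclasses import dataclass

    import numpy as np

    @dataclass
    class Instance:
        mask: np.ndarray  # (H, W) boolean mask from instance segmentation
        label: str        # label set during training, e.g. "front" or "back"

    def group_by_label(instances: list[Instance]) -> dict[str, list[Instance]]:
        """Group segmented workpieces by predicted label so that, e.g.,
        front-up and back-up workpieces can be handled separately."""
        groups: dict[str, list[Instance]] = {}
        for inst in instances:
            groups.setdefault(inst.label, []).append(inst)
        return groups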

Almost No Workpiece Point Cloud

The workpieces in the following figures are used as an example.

  1. 2D image: Wave washers are reflective and are placed close to each other in the bin.

    application scenario 7
  2. Point cloud: The quality of the workpiece point clouds is unstable, and the points corresponding to a workpiece are often completely missing from the point cloud.

    application scenario 8

As many object features are missing from the point clouds, it is impossible to use the point clouds to locate the workpieces and calculate their poses. 3D matching with the point clouds may also incorrectly match the bin bottom.

In such scenarios, although the workpieces are reflective, their edges are clear in the 2D images. Therefore, you can use the Instance Segmentation module to train the corresponding model and use this model in the Deep Learning Steps in Mech-Vision. The Steps output workpiece masks, from which the workpiece point clouds are extracted. The poses of these point clouds are then calculated and used as the picking poses.
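
One simple way to turn an extracted point cloud directly into a pose is a centroid-plus-PCA estimate, sketched below with NumPy. The actual Steps may compute the pose differently, so treat this as an assumption-laden illustration.

    import numpy as np

    def pose_from_cloud(cloud: np.ndarray) -> np.ndarray:
        """Estimate a picking pose from a mask-extracted point cloud:
        origin at the centroid, axes from the principal components."""
        centroid = cloud.mean(axis=0)
        cov = np.cov((cloud - centroid).T)
        _, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
        axes = eigvecs[:, ::-1]            # longest principal axis first
        if np.linalg.det(axes) < 0:        # enforce a right-handed frame
            axes[:, 2] = -axes[:, 2]
        pose = np.eye(4)
        pose[:3, :3] = axes
        pose[:3, 3] = centroid
        return pose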

Locate Patterns and Colored Regions on Workpiece Surfaces

The workpieces in the following figures are used as an example.

  1. 2D image: A piece of yellow tape is attached to one side of the aluminum bin to mark the orientation of the bin.

    application scenario 12
  2. Point cloud: The quality of the point cloud is good, but the yellow tape is not visible in the point cloud.

    application scenario 13

As the target feature is a colored region that is only visible in the color images, the orientation of the bin cannot be recognized from the point clouds.

In such scenarios, as long as the rough location of the yellow tape is determined, the orientation of the aluminum bin can also be determined. You can use the Object Detection module to train the corresponding model, and use this model in the Deep Learning Steps in Mech-Vision to locate the workpieces.

application scenario 14
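
To show how a detected tape region can yield the bin's orientation, here is a small sketch: given the centers of the bin and of the detected yellow-tape bounding box in the camera's X-Y plane (both hypothetical inputs for this example), the direction from one to the other gives the bin's yaw angle.

    import math

    def bin_yaw(bin_center_xy: tuple[float, float],
                tape_center_xy: tuple[float, float]) -> float:
        """Yaw angle (radians) of the bin: the direction from the bin
        center to the detected yellow-tape box marks the bin's taped side."""
        dx = tape_center_xy[0] - bin_center_xy[0]
        dy = tape_center_xy[1] - bin_center_xy[1]
        return math.atan2(dy, dx)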

Random Picking from Deep Bin

The workpieces in the following figures are used as an example.

  1. 2D image: Steel bars are randomly piled in the bin and overlap one another, and some regions of the steel bars reflect light.

    application scenario 15
  2. Point cloud: The point clouds of workpieces that are not overlapped are of good quality. For overlapped workpieces, it is difficult to cluster out the point cloud of each individual workpiece, and the clustering performance is unstable.

    application scenario 16

Point cloud clustering cannot stably separate each individual workpiece. Because the orientation of the workpieces varies greatly, it is difficult to create a suitable model for 3D matching, and 3D matching may also output incorrect results that lead to inaccurate pose calculation. In addition, global matching using only point cloud models results in a very long vision cycle time.

In such scenarios, you can use the Instance Segmentation module to train the corresponding model and use this model in the Deep Learning Steps in Mech-Vision to extract the masks of individual workpieces. Then extract the individual workpiece point clouds corresponding to the masks and use them to match each workpiece individually.

application scenario 17
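
A minimal sketch of the mask-to-point-cloud step follows. It assumes an organized point cloud that is pixel-aligned with the 2D image the masks come from; this layout is an assumption for the example, not a statement about the Steps' internals.

    import numpy as np

    def extract_instance_clouds(organized_cloud: np.ndarray,
                                masks: list[np.ndarray]) -> list[np.ndarray]:
        """Cut one point cloud per instance out of an organized cloud.

        organized_cloud : (H, W, 3) array, pixel-aligned with the 2D image.
        masks           : list of (H, W) boolean instance masks.
        """
        clouds = []
        for mask in masks:
            pts = organized_cloud[mask]                  # (N, 3) masked points
            pts = pts[np.isfinite(pts).all(axis=1)]      # drop missing points
            clouds.append(pts)
        return clouds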

Workpieces Closely Fitted Together and Not Separable by Point Cloud Clustering

The workpieces in the following figures are used as an example.

  1. 2D image: The obtained 2D images are too dark, and object edge information may not be discernible. After the image brightness is enhanced (see the sketch after this list), the workpiece edge, size, and grayscale information can be obtained.

    application scenario 18
  2. Point cloud: The quality of the point clouds is good, but the workpieces are closely fitted together, so the edges of different workpieces may not be correctly separated.

    application scenario 19
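
The brightness enhancement mentioned in item 1 can be as simple as a linear gain and offset. The OpenCV call below is one common way to do it; the file name and the alpha/beta values are illustrative assumptions, to be tuned per application.

    import cv2

    # Linear brightness/contrast enhancement so workpiece edges become
    # discernible before the image is fed to instance segmentation.
    img = cv2.imread("dark_image.png")                       # hypothetical file
    enhanced = cv2.convertScaleAbs(img, alpha=1.8, beta=40)  # gain, offset
    cv2.imwrite("enhanced.png", enhanced)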

Although the quality of the point clouds is good, the workpieces are not separated in the point clouds, so point cloud clustering cannot be used to segment the point clouds of individual workpieces. Global 3D matching might output incorrect matching results or even match to the bin.

In such scenarios, you can use the Instance Segmentation module to train the corresponding model. Use the model in the Deep Learning Steps in Mech-Vision to extract the workpiece point clouds, and then perform matching.

application scenario 20
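
For the extract-then-match flow, the sketch below uses Open3D's point-to-point ICP as a stand-in for the matching step. Open3D is an assumption made for illustration; Mech-Vision's own matching Steps are configured graphically rather than written in code.

    import numpy as np
    import open3d as o3d

    def match_instance(model: o3d.geometry.PointCloud,
                       instance_pts: np.ndarray) -> np.ndarray:
        """Match the workpiece model against one mask-extracted instance
        cloud with point-to-point ICP; returns a 4x4 pose."""
        target = o3d.geometry.PointCloud()
        target.points = o3d.utility.Vector3dVector(instance_pts)
        result = o3d.pipelines.registration.registration_icp(
            model, target,
            max_correspondence_distance=0.005,  # 5 mm; tune per workpiece
            estimation_method=o3d.pipelines.registration
                                 .TransformationEstimationPointToPoint())
        return result.transformation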

Recognize and Pick Workpieces of Different Types

The workpieces in the following figures are used as an example.

  1. 2D image: The first image is obtained at the station where the workpieces are randomly placed. The second image is obtained at the station where secondary judgment is performed. The quality of the 2D images at both stations is good, and the features of different types of workpieces are distinctive.

    application scenario 21
  2. Point cloud: The quality of the point clouds is good, and the shape features of the workpieces are well reflected in the point clouds.

    application scenario 22

For cylindrical workpieces, the point clouds reflect neither the orientation nor the type of the workpiece, so 3D matching alone cannot distinguish workpiece orientation and type.

In such scenarios, at the first station, you can use the Instance Segmentation module to train the corresponding model and use the model in the Deep Learning Steps in Mech-Vision to recognize and segment the workpieces. The Steps output the corresponding workpiece masks, which can be used for subsequent point cloud processing.

application scenario 23

At the second station, use Instance Segmentation to recognize individual workpieces, and then use Object Detection to determine the orientations of the workpieces based on their shape and surface features. (In the figure below, the left image shows the instance segmentation result, and the right image shows the object detection result used to determine workpiece orientation.)

application scenario 24
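
As a loose illustration of the second-station judgment, the sketch below maps the object detection output to an orientation. The detection result format (a label plus a confidence score) is an assumption for this example, not the actual Step output.

    def decide_orientation(detections: list[dict]) -> str:
        """Pick the highest-confidence detection of the key surface feature
        and read the workpiece orientation off its label."""
        # Each detection is assumed to look like {"label": "front", "score": 0.97}.
        if not detections:
            return "unknown"
        best = max(detections, key=lambda d: d["score"])
        return best["label"]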
