Usage Scenarios for Deep Learning
2D Camera + Deep Learning
A 2D camera combined with deep learning can be used in the following scenarios; different modules suit different scenarios.
Fast Positioning
This module is used to correct workpiece orientations.
- Recognize workpiece orientations in images and rotate the images to a specified orientation (see the sketch below).
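As a rough illustration of the correction step, the sketch below rotates an image by a detected angle with OpenCV. The angle value stands in for the module's output and is not an actual Mech-Vision interface; the sign convention also depends on how the angle is measured.

```python
import cv2

def rotate_to_orientation(image, detected_angle_deg, target_angle_deg=0.0):
    """Rotate an image so the workpiece lands at the target orientation.

    `detected_angle_deg` is a stand-in for the angle reported by the
    Fast Positioning model, not an actual Mech-Vision API value.
    """
    h, w = image.shape[:2]
    # Rotate about the image center by the angle difference
    # (positive angles rotate counterclockwise in OpenCV).
    rotation = cv2.getRotationMatrix2D(
        (w / 2, h / 2), target_angle_deg - detected_angle_deg, 1.0)
    return cv2.warpAffine(image, rotation, (w, h))
```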
Defect Segmentation
This module can be used to detect all types of defects, including surface defects such as stains, bubbles, and scratches, and positional defects such as bending, abnormal shape, and absence. It can be applied under difficult conditions such as small defects, complicated backgrounds, and unstable workpiece positions. A sketch of typical mask post-processing follows the examples below.
- Detect air bubbles and glue spill defects on lens surfaces.
- Detect bending defects of workpieces.
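In practice, the raw segmentation mask is often post-processed before a defect decision is made, for example by discarding blobs below a minimum area. The sketch below shows this with OpenCV; the binary-mask format and the area threshold are assumptions, not module behavior.

```python
import cv2
import numpy as np

def filter_defects(defect_mask, min_area_px=50):
    """Keep only defect blobs larger than a minimum pixel area.

    `defect_mask` is a binary image standing in for the Defect
    Segmentation module's output; the threshold is illustrative.
    """
    num, labels, stats, _ = cv2.connectedComponentsWithStats(
        defect_mask.astype(np.uint8), connectivity=8)
    kept = np.zeros_like(defect_mask, dtype=np.uint8)
    for i in range(1, num):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area_px:
            kept[labels == i] = 255
    return kept
```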
Classification
This module is used to recognize the fronts and backs of workpieces, workpiece orientations, and defect types, and to determine whether objects are missing or neatly arranged. A sketch of a typical classification call follows the examples below.
- Recognize whether workpieces are neatly arranged or scattered.
- Recognize the fronts and backs of workpieces.
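For illustration, the sketch below runs a generic PyTorch classifier over a cropped workpiece image; the labels, preprocessing, and model are assumptions standing in for the trained Classification model, not Mech-Vision's interface.

```python
import torch
import torchvision.transforms as T

# Illustrative labels and preprocessing; both are assumptions.
LABELS = ["front", "back"]
preprocess = T.Compose([T.ToTensor(), T.Resize((224, 224))])

def classify_side(model, image):
    """Return 'front' or 'back' for a cropped workpiece image."""
    model.eval()
    batch = preprocess(image).unsqueeze(0)  # shape (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return LABELS[int(logits.argmax(dim=1))]
```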
3D Camera + Deep Learning
In the following scenarios, the information in the point cloud alone is not enough to accurately recognize and position the workpieces, so 3D matching must be combined with deep learning.
Points Missing from Point Cloud
The workpieces in the following figures are used as an example.
- 2D image: Large quantities of reflective workpieces in the image below are closely fitted together, but the edges and shape features of the workpieces are clear.
- Point cloud: As the workpieces are reflective, many points are missing from the point cloud, mostly along the workpiece axes.
If points along the workpiece axis are missing from the point cloud, 3D matching on the point cloud may be inaccurate, producing picking poses that deviate greatly. If the workpieces are closely fitted together, the point cloud may not be segmented correctly, resulting in incorrect matching. If the quantity of workpieces is large, the vision cycle time will also be long.
In such scenarios, you can use the Instance Segmentation module to train a model and then use the Deep Learning Steps in Mech-Vision to recognize the workpieces. Extract the point cloud corresponding to each workpiece mask and calculate workpiece pose A by matching. Then calculate workpiece pose B from the extracted point cloud and use it to correct the X and Y components of pose A.
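The correction in the last step can be pictured with a minimal NumPy sketch, in which the position of pose B is approximated by the centroid of the mask-extracted cloud; the 4x4 pose format and the centroid approximation are assumptions, not the Steps' internals.

```python
import numpy as np

def correct_pose_xy(pose_a, masked_points):
    """Replace the X/Y translation of the matching pose with the
    centroid of the mask-extracted point cloud.

    `pose_a` is a 4x4 matrix from 3D matching; `masked_points` is an
    (N, 3) array of points inside the instance mask. Both are
    illustrative stand-ins for the Step outputs, not Mech-Vision types.
    """
    pose = pose_a.copy()
    centroid = masked_points.mean(axis=0)  # pose B position estimate
    pose[0, 3], pose[1, 3] = centroid[0], centroid[1]  # correct X and Y only
    return pose
```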
Key Features Missing from Point Cloud
The workpieces in the following figures are used as an example.
- 2D image: In the figure below, the workpieces in the red boxes have their fronts facing up, while those in the blue boxes have their backs facing up. The arrows point to the key features used to distinguish fronts from backs.
- Point cloud: The key features used to distinguish the fronts and backs of the workpieces are missing from the point cloud.
As the key features used to distinguish workpiece types are very small (and may even be missing from the point cloud), 3D matching may produce incorrect matching results and therefore classify the workpieces wrongly.
In such scenarios, you can use the Instance Segmentation module to train a model and set corresponding labels for the different workpiece types. When this model is used in the Deep Learning Steps in Mech-Vision, the Step extracts the mask of each workpiece and also outputs its label.
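Downstream logic can then branch on the label of each instance. A minimal sketch, assuming the Step output can be read as (mask, label) pairs; the pair format is an assumption, not the actual output schema.

```python
from collections import defaultdict

def group_by_type(instances):
    """Group instance masks by their predicted label.

    `instances`: iterable of (mask, label) pairs, e.g.
    [(mask0, "front"), (mask1, "back"), ...] -- an assumed format.
    """
    groups = defaultdict(list)
    for mask, label in instances:
        groups[label].append(mask)
    return groups  # e.g. {"front": [...], "back": [...]}
```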
Almost No Workpiece Point Cloud
The workpieces in the following figures are used as an example.
- 2D image: Wave washers are reflective and are placed close to each other in the bin.
- Point cloud: The point cloud quality is unstable, and there is a relatively high chance that the points corresponding to a workpiece are missing entirely.
As many object features are missing from the point clouds, the point clouds cannot be used to locate the workpieces and calculate their poses. 3D matching on such point clouds can also incorrectly match the bin bottom.
In such scenarios, although the workpieces are reflective, their edges are clear in the 2D images. You can therefore use the Instance Segmentation module to train a model and use it in the Deep Learning Steps in Mech-Vision. The Steps output workpiece masks, from which the workpiece point clouds are extracted; the poses of these point clouds are then calculated and used as picking poses.
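A simplified version of the pose calculation can be sketched with NumPy: the position is the centroid of the mask-extracted cloud, and the orientation comes from the principal axes of the points. This assumes a roughly planar part such as a wave washer and is not the Steps' actual algorithm.

```python
import numpy as np

def pose_from_masked_cloud(points):
    """Estimate a picking pose from a mask-extracted point cloud.

    `points` is an (N, 3) array; position = centroid, orientation from
    the principal axes of the points. A simplified stand-in for the
    Steps' pose calculation.
    """
    centroid = points.mean(axis=0)
    # Principal axes: right singular vectors of the centered points.
    _, _, vt = np.linalg.svd(points - centroid)
    rotation = vt.T                      # columns: major, minor, normal
    if np.linalg.det(rotation) < 0:      # keep a right-handed frame
        rotation[:, 2] *= -1
    pose = np.eye(4)
    pose[:3, :3], pose[:3, 3] = rotation, centroid
    return pose
```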
Locate Patterns and Colored Regions on Workpiece Surfaces
The workpieces in the following figures are used as an example.
- 2D image: A piece of yellow tape is attached to one side of the aluminum bin to mark the orientation of the bin.
- Point cloud: The point cloud quality is good, but the yellow tape does not appear in the point cloud.
As the target feature is a colored region that is only visible in the color images, the orientation of the bin cannot be recognized from the point cloud.
In such scenarios, once the rough location of the yellow tape is determined, the orientation of the aluminum bin can be determined as well. You can use the Object Detection module to train a model and use it in the Deep Learning Steps in Mech-Vision to locate the tape.
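To make the idea concrete, the sketch below roughly localizes a yellow region by HSV thresholding with OpenCV and derives an orientation angle from it. The HSV bounds and the pixel bin center are assumptions, and the actual solution uses the trained Object Detection model instead of color thresholding.

```python
import cv2
import numpy as np

def bin_orientation_from_tape(bgr_image, bin_center_px):
    """Infer bin orientation from the rough location of the yellow tape.

    A color-thresholding stand-in for the Object Detection module; the
    HSV range for "yellow" and the pixel bin center are assumptions.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))  # yellow band
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None  # tape not found
    tape_center = np.array([xs.mean(), ys.mean()])
    dx, dy = tape_center - np.asarray(bin_center_px, dtype=float)
    # Angle of the taped side relative to the bin center, in degrees.
    return float(np.degrees(np.arctan2(dy, dx)))
```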
Random Picking from Deep Bin
The workpieces in the following figures are used as an example.
- 2D image: Steel bars are randomly piled in the bin and overlap one another, and some regions of the steel bars reflect light.
- Point cloud: The point clouds of workpieces not overlapped by others are of good quality, but for overlapped workpieces it is difficult to cluster out each individual workpiece's point cloud, and clustering performance is unstable.
Point cloud clustering cannot stably separate the individual workpieces. Because the orientation of the workpieces varies greatly, it is difficult to create a suitable model for 3D matching, and 3D matching may output incorrect results, leading to inaccurate pose calculation. In addition, global matching with point cloud models alone results in a very long vision cycle time.
In such scenarios, you can use the Instance Segmentation module to train a model and use it in the Deep Learning Steps in Mech-Vision to extract the mask of each workpiece. Then extract the individual workpiece point clouds corresponding to the masks and match each workpiece separately.
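Matching each mask-extracted cloud separately can be sketched with Open3D's point-to-point ICP; the model cloud, correspondence distance, and initial pose are assumptions standing in for the Steps' matching inputs.

```python
import numpy as np
import open3d as o3d

def match_instance(instance_points, model_pcd, init_pose=np.eye(4)):
    """Refine the pose of one mask-extracted workpiece with ICP.

    `instance_points` is an (N, 3) array from one instance mask;
    `model_pcd` is the workpiece model cloud. Both are illustrative
    stand-ins for the Steps' matching inputs, not Mech-Vision types.
    """
    scene = o3d.geometry.PointCloud(
        o3d.utility.Vector3dVector(instance_points))
    # Align the model onto the instance, so the result is the
    # workpiece pose in the camera frame.
    result = o3d.pipelines.registration.registration_icp(
        model_pcd, scene, 0.005, init_pose,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation  # 4x4 homogeneous pose
```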
Closely Fitted Workpieces Whose Point Clouds Cannot Be Separated by Clustering
The workpieces in the following figures are used as an example.
- 2D image: The captured 2D images are too dark, and object edge information may not be discernible. After the image brightness is enhanced, workpiece edge, size, and grayscale information can be obtained.
- Point cloud: The point cloud quality is good, but the workpieces are closely fitted together, so the edges of different workpieces may not be correctly separated.
Although the point cloud quality is good, the workpieces are not separated in the point cloud, so point cloud clustering cannot segment the point clouds of individual workpieces. Global 3D matching may output incorrect results or even match the bin.
In such scenarios, you can use the Instance Segmentation module to train a model, use it in the Deep Learning Steps in Mech-Vision to extract the workpiece point clouds, and then perform matching.
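Extracting a per-workpiece cloud from its mask is straightforward when the camera outputs an organized point map aligned with the 2D image; the sketch below assumes that (H, W, 3) format, which is an assumption about the camera output.

```python
import numpy as np

def extract_instance_cloud(organized_cloud, mask):
    """Select the points of one workpiece using its instance mask.

    `organized_cloud` is an (H, W, 3) point map aligned with the 2D
    image; `mask` is the (H, W) boolean mask of one instance. Both
    formats are assumptions about the camera output.
    """
    points = organized_cloud[mask]                     # (N, 3) points
    points = points[np.isfinite(points).all(axis=1)]   # drop invalid pixels
    return points
```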
Recognize and Pick Workpieces of Different Types
The workpieces in the following figures are used as an example.
- 2D image: The first image is captured at the station where the workpieces are randomly placed; the second is captured at the station where a secondary judgment is performed. The 2D image quality at both stations is good, and the features of the different workpiece types are distinctive.
- Point cloud: The point cloud quality is good, and the shape features of the workpieces are well reflected in the point clouds.
For cylindrical workpieces, the point clouds reflect neither the orientation nor the type of the workpiece, so 3D matching alone cannot distinguish them.
In such scenarios, at the first station you can use the Instance Segmentation module to train a model and use it in the Deep Learning Steps in Mech-Vision to recognize and segment the workpieces. The Steps output the corresponding workpiece masks, which are used for subsequent point cloud processing.
At the second station, use Instance Segmentation to recognize individual workpieces, and then use Object Detection to determine their orientations based on shape and surface features. (The left figure below shows the instance segmentation result; the right figure shows the object detection result for determining workpiece orientations.)
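At the second station the two results then have to be combined: each segmented instance is paired with the orientation label of the best-overlapping detection box. A minimal sketch, assuming both Steps expose axis-aligned boxes (an assumed format, not the Steps' actual schema).

```python
def pair_orientations(instance_boxes, detections):
    """Pair each instance box with the label of the best-overlapping
    detection.

    Boxes are (x0, y0, x1, y1) tuples; `detections` are (box, label)
    pairs. Assumes at least one detection per image.
    """
    def iou(a, b):
        ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
        ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
        union = ((a[2] - a[0]) * (a[3] - a[1])
                 + (b[2] - b[0]) * (b[3] - b[1]) - inter)
        return inter / union if union else 0.0
    return [max(detections, key=lambda d: iou(box, d[0]))[1]
            for box in instance_boxes]
```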