Solution Deployment

This section introduces the deployment of the Long Sheet Metal Parts solution. The overall process is shown in the figure below.

solution configuration overview

Vision System Hardware Setup

Vision system hardware setup refers to integrating the hardware (camera and industrial PC) into the actual environment to support the normal operation of the vision system.

In this phase, you need to install and set up the hardware of the vision system. For details, refer to Vision System Hardware Setup.

Robot Communication Configuration

Before configuring robot communication, it is necessary to obtain the solution first. Click here to see how to obtain the solution.
  1. Open Mech-Vision.

  2. In the Welcome interface of Mech-Vision, click Create from solution library to open the Solution Library.

  3. Enter the Typical cases category in the Solution Library, click the resource icon in the upper right corner to get more resources, and then click the Confirm button in the pop-up window.

  4. After acquiring the solution resources, select the Long Sheet Metal Parts solution under the Neatly-arranged part picking category, fill in the Solution name and Path at the bottom, and finally click the Create button. Then, click the Confirm button in the pop-up window to download the Long Sheet Metal Parts solution.

    Once the solution is downloaded, it will be automatically opened in Mech-Vision.

Before deploying a vision project, you need to set up the communication between the Mech-Mind Vision System and the robot side (robot, PLC, or host computer).

The Long Sheet Metal Parts solution uses Standard Interface communication. For detailed instructions, please refer to Standard Interface Communication Configuration.
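
For orientation, the following minimal Python sketch shows what a Standard Interface exchange looks like at the transport level: an ASCII command sent to the vision system over TCP and a status reply read back. The host, port, and command string below are placeholders only; take the actual IP address, port, and command codes from the Standard Interface Communication Configuration documentation.

   import socket

   # Hypothetical IPC address and port; use the values configured on site.
   HOST, PORT = "192.168.1.10", 50000

   with socket.create_connection((HOST, PORT), timeout=5) as s:
       # Placeholder command string; the real command codes and fields are
       # defined in the Standard Interface documentation.
       s.sendall(b"101, 1\r")
       reply = s.recv(1024).decode()  # status reply from the vision system
       print(reply)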

Hand-Eye Calibration

Hand-eye calibration establishes the transformation relationship between the camera and robot reference frames. With this relationship, the object pose determined by the vision system can be transformed into that in the robot reference frame, which guides the robot to perform its tasks.
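
As a minimal illustration of what this relationship is used for, the Python sketch below (NumPy only; T_base_cam is a placeholder for the extrinsic transform produced by calibration) expresses an object pose, captured in the camera reference frame, in the robot reference frame:

   import numpy as np

   # Placeholder for the hand-eye calibration result: the camera pose in
   # the robot base frame, as a 4x4 homogeneous transform.
   T_base_cam = np.eye(4)

   def to_robot_frame(T_cam_obj):
       """Express an object pose given in the camera frame in the robot frame."""
       return T_base_cam @ T_cam_obj

   # Example: an object 0.8 m in front of the camera along its optical axis.
   T_cam_obj = np.eye(4)
   T_cam_obj[2, 3] = 0.8
   print(to_robot_frame(T_cam_obj))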

Please refer to Robot Hand-Eye Calibration Guide and complete the hand-eye calibration.

  • Whenever the camera is remounted, or the relative position of the camera and the robot changes after calibration, hand-eye calibration must be performed again.

  • In this solution, hand-eye calibration is required for both LSR XL and PRO S. Only one hand-eye calibration is required for Projects 1 and 2 since they both use LSR XL.

Vision Project Configuration

After completing the communication configuration and hand-eye calibration, you can use Mech-Vision to configure the vision project.

This solution consists of three projects: Vis-Bin Positioning, Vis-Camera Positioning, and Vis-Workpiece Positioning.

  • Vis-Bin Positioning project, which uses LSR XL to capture images, locates the bin in which long sheet metal parts are placed.

  • Vis-Camera Positioning project, which uses LSR XL to capture images, locates the image-capturing position for PRO S and outputs the image-capturing pose to guide the robot to move to the position.

  • Vis-Workpiece Positioning project, which uses PRO S to capture images, recognizes and outputs the target object poses used in path planning which guides the robot to pick target objects.

The following sections introduce the three projects respectively.

Vis-Bin Positioning

The process of configuring this vision project is shown in the figure below.

bin vision overall

Connect to the Camera and Capture Images

  1. Connect to the camera.

    Open Mech-Eye Viewer, find the camera to be connected, and click the Connect button.

    vision click connect camera
  2. Adjust camera parameters.

    To ensure that the captured 2D image is clear and the point cloud is intact, you need to adjust the camera parameters. For detailed instructions, please refer to LSR XL Parameter Reference Guide.

  3. Capture images.

    After the camera is successfully connected and the parameter group is set, you can start capturing images of the target object (bin). Click the capture button at the top to capture a single image. You can then view the captured 2D image and point cloud of the target object (bin). Ensure that the 2D image is clear, the point cloud is intact, and the edges are clear. A qualified 2D image and point cloud of the target object (bin) are shown on the left and right of the figure below, respectively.

    camera vision image and cloud
  4. Connect to the camera in Mech-Vision.

    Select the Capture Images from Camera Step, disable the Virtual Mode option in the Step Parameters area, and click the Select camera button.

    vision select camera

    In the pop-up window, click the connect icon to the right of the camera serial number. When the icon changes to the connected state, the camera is connected successfully. After the camera is connected, you can select the camera calibration parameter group in the drop-down list on the right.

    Now that you have connected to the real camera, you do not need to adjust other parameters. Click the run icon on the Capture Images from Camera Step to run the Step. If no error occurs, the camera is connected successfully and images can be captured properly.

3D Target Object Recognition

This solution uses the 3D Target Object Recognition Step to recognize the bin. Click the Config wizard button in the Step Parameters panel of the 3D Target Object Recognition Step to open the 3D Target Object Recognition tool to configure relevant settings. The overall configuration process is shown in the figure below.

vision 3d target object recognition overall
Point Cloud Preprocessing

Before recognizing the target object, you need to preprocess the point cloud by adjusting parameters so that the original point cloud becomes clearer, thus improving recognition accuracy and efficiency.

  1. Set recognition region.

    Set an effective recognition area to block out interference factors and improve recognition efficiency.

  2. Adjust parameters.

    Set the Edge extraction effect, Noise removal level, and Point filter parameters to remove noise.

After configuring preprocessing, click the Run Step button. The result is shown in the figure below, and a sketch of comparable preprocessing operations follows it.

vision bin point cloud preprocessing effect
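
The preprocessing above is configured entirely in the GUI. As a rough analogy of what these settings compute, here is a sketch using the open-source Open3D library ("scene.ply" and the ROI bounds are placeholders): crop the cloud to a recognition region, then remove sparse noise.

   import open3d as o3d

   pcd = o3d.io.read_point_cloud("scene.ply")  # placeholder input cloud

   # Keep only points inside a recognition region (axis-aligned 3D ROI).
   roi = o3d.geometry.AxisAlignedBoundingBox(
       min_bound=(-0.5, -0.5, 0.0), max_bound=(0.5, 0.5, 1.5))
   pcd = pcd.crop(roi)

   # Remove sparse outliers, a rough analogue of the noise removal level.
   pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
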
Recognize Target Object

After point cloud preprocessing, you need to create a point cloud model for the bin in the Target Object Editor, and then set matching parameters in the 3D Target Object Recognition tool for point cloud model matching.

  1. Create a target object model.

    Create a point cloud model and add the pick point. Click the Open target object editor button to open the editor, and generate a point cloud model and pick point by importing the processed point cloud. The pick point should be consistent with the object center point.

  2. Set parameters related to object recognition.

    • Matching mode: Disable Auto-set matching mode, set the Coarse matching mode to Edge matching, and set the Fine matching mode to Surface matching.

    • Output—Max outputs: Since the target object is a bin, set the Max outputs to 1.

After setting the above parameters, click the Run Step button. The matching result is shown in the figure below, and a conceptual matching sketch follows it.

vision bin recognition effect
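
Mech-Vision's coarse-plus-fine matching is configured in the GUI. As a conceptual stand-in, the sketch below refines a model-to-scene alignment with point-to-plane ICP in Open3D; "model.ply", "scene.ply", and the identity initial guess are placeholders, and in the real Step the coarse edge matching supplies the initial alignment.

   import numpy as np
   import open3d as o3d

   model = o3d.io.read_point_cloud("model.ply")  # point cloud model of the bin
   scene = o3d.io.read_point_cloud("scene.ply")  # preprocessed scene cloud
   scene.estimate_normals()                      # point-to-plane ICP needs normals

   init = np.eye(4)  # placeholder; coarse matching would supply this guess
   result = o3d.pipelines.registration.registration_icp(
       model, scene, max_correspondence_distance=0.01, init=init,
       estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())
   print(result.transformation)  # refined bin pose in the scene
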
Configure Step Ports

After target object recognition, Step ports should be configured to provide vision results and point clouds to Mech-Viz for path planning and collision detection.

To obtain the position information of the real bin, select the Port(s) related to object center point option under Select port, and then select the Point cloud after processing option and click the Save button. New output ports are added to the 3D Target Object Recognition Step, as shown below.

Adjust Poses

After obtaining the target object pose, adjust the pose using the Adjust Poses Procedure. Double-click the Procedure to view the Steps. Click the Config wizard button in the Step Parameters panel of the Adjust Poses V2 Step to open the Adjust Poses tool for pose adjustment configuration.

  1. Transform poses.

    To output the bin pose in the robot reference frame, please select the checkbox before Transform pose to robot reference frame to transform the pose from the camera reference frame to the robot reference frame.

  2. Pose adjustment.

    • Enable Custom mode for Pose adjustment.

    • Set the parameters in the Translate pose along specified direction category: In the Object reference frame, move the bin pose along the Z-axis. Then, set the Translation distance to Input from external Step to move the bin pose from the top surface of the bin down to the bin center, which will be used later to update the bin collision model in Mech-Viz.

      • In this Procedure, the Read Object Dimensions Step will read the dimensions of the bin, and the Z value of the bin output by the Decompose Object Dimensions Step will be used to calculate the Translation distance in the Adjust Poses V2 Step.

      • Translation distance = -1/2 × bin height

  3. Sort poses.

    Set the Sorting type to Sort by X/Y/Z value of pose, set Specified value of the pose to Z-coordinate, and sort the poses in Descending order.

  4. General settings.

    Set the parameter Set number of new ports to 0, as no new ports are needed. A sketch of the pose adjustments above follows this list.
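
A compact sketch of the adjustments above, assuming each pose is a 4x4 homogeneous matrix in NumPy and T_base_cam is a placeholder for the hand-eye calibration result:

   import numpy as np

   T_base_cam = np.eye(4)  # placeholder hand-eye calibration result

   def adjust_bin_pose(T_cam_bin, bin_height):
       T = T_base_cam @ T_cam_bin                 # 1. camera frame -> robot frame
       z_axis = T[:3, 2]                          # object-frame Z-axis, in robot frame
       T[:3, 3] += z_axis * (-0.5 * bin_height)   # 2. top surface -> bin center
       return T

   # 3. Sort the poses by Z-coordinate in descending order.
   poses = [adjust_bin_pose(np.eye(4), 0.4)]      # placeholder pose list
   poses.sort(key=lambda T: T[2, 3], reverse=True)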

Output Scene Point Cloud

Use the Output Step to send information such as bin details, the scene point cloud of the bin, and the preprocessed point cloud to Mech-Viz.

Vis-Camera Positioning

The process of configuring this vision project is shown in the figure below.

camera vision overall

Connect to the Camera and Capture Images

  1. Connect to the camera.

    Open Mech-Eye Viewer, find the camera to be connected, and click the Connect button.

    vision click connect camera
  2. Adjust camera parameters.

    To ensure that the captured 2D image is clear and the point cloud is intact, you need to adjust the camera parameters. For detailed instructions, please refer to LSR XL Parameter Reference Guide.

  3. Capture images.

    After the camera is successfully connected and the parameter group is set, you can start capturing images of the target object. Click the capture button at the top to capture a single image. You can then view the captured 2D image and point cloud of the target object. Ensure that the 2D image is clear, the point cloud is intact, and the edges are clear. A qualified 2D image and point cloud of the target object are shown on the left and right of the figure below, respectively.

    camera vision image and cloud
  4. Connect to the camera in Mech-Vision.

    Select the Capture Images from Camera Step, disable the Virtual Mode option in the Step Parameters area, and click the Select camera button.

    vision select camera

    In the pop-up window, click the connect icon to the right of the camera serial number. When the icon changes to the connected state, the camera is connected successfully. After the camera is connected, you can select the camera calibration parameter group in the drop-down list on the right.

    Now that you have connected to the real camera, you do not need to adjust other parameters. Click the run icon on the Capture Images from Camera Step to run the Step. If no error occurs, the camera is connected successfully and images can be captured properly.

Foreground Segmentation and Point Cloud Preprocessing

To reduce interference from the background point cloud and improve the stability and accuracy of target object recognition, the Foreground Segmentation and Point Cloud Preprocessing Procedure is used to recognize the target object mask, obtain the point cloud of the target objects on the highest layer, and extract the point cloud of the target object edges.
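
Conceptually, foreground segmentation produces a 2D mask that selects the corresponding 3D points from the organized point map. A minimal NumPy sketch (the array shapes are illustrative):

   import numpy as np

   points = np.zeros((1024, 1280, 3))         # organized point map (H x W x 3)
   mask = np.zeros((1024, 1280), dtype=bool)  # foreground mask from the model

   foreground = points[mask]                  # N x 3: target object points only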

Foreground Segmentation

The Foreground Segmentation part of this Procedure recognizes the target object mask and extracts the target object point cloud. Double-click the Foreground Segmentation and Point Cloud Preprocessing Procedure to view its Steps.

  1. In the Deep Learning Model Package Inference Step, import the foreground segmentation model package. Click here to download the package.

    Click the Config wizard button to open the Deep Learning Model Package Management window. Then, click the Import button to import the downloaded foreground segmentation model package.

  2. Label images. Prepare an image of a bin full of target objects. In the Step Parameters panel, go to the Prompts Settings category under Tips, and click the Open the editor button to the right of the Mask Prompts Editor to open the Prompt Tool window.

    On the Label Prompt tab, click the Obtain image button in the upper right corner. Then, click the polygon icon in the upper left corner to select the Polygon Tool, and use it to label the mask of the target object. Avoid including the bin or background in the target object mask.

    Click the Save button in the lower right corner and switch to the Run inference tab. Then, click the Run Step button in the lower right corner.

segmentation effect
Point Cloud Preprocessing

Go back to the main interface of the project and adjust the relevant parameters in the Step Parameters panel on the right to remove noise, obtain the point cloud of the target objects on the highest layer, and extract the point cloud of the target object edges based on the 2D image of the target objects.

  1. Set 3D ROI. Click the Open the editor button in the Extract 3D Points in 3D ROI category to set a 3D ROI. Remove the interfering point cloud, frame only the point cloud of the target objects and bin, and keep a certain margin around the bin.

  2. Set the value of the parameter Min Point Count per Cluster in the Point Cloud Clustering category according to the actual situation on site. Typically, when the value lies between the number of points in the noise point cloud and the number of points in the target object point cloud, the noise point cloud is removed while the target object point cloud is retained (see the clustering sketch after this list).

  3. Set the value of the Layer Height parameter in the Get Highest Layer Clouds category to only obtain the point cloud of the target objects on the highest layer.
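
The cluster-size threshold and layer height can be pictured as follows. The sketch uses Open3D's DBSCAN clustering and a simple top-layer cut; "preprocessed.ply" and the eps, min_points, cluster-size, and layer-height values are illustrative only.

   import numpy as np
   import open3d as o3d

   pcd = o3d.io.read_point_cloud("preprocessed.ply")  # placeholder input

   # Drop clusters smaller than a "Min Point Count per Cluster" threshold.
   labels = np.asarray(pcd.cluster_dbscan(eps=0.01, min_points=10))
   keep = np.zeros(len(labels), dtype=bool)
   for lbl in set(labels.tolist()) - {-1}:            # label -1 marks noise
       idx = labels == lbl
       if idx.sum() >= 500:                           # illustrative threshold
           keep |= idx
   pts = np.asarray(pcd.points)[keep]

   # Keep only the highest layer: points within layer_height of the top.
   layer_height = 0.02
   top = pts[pts[:, 2] >= pts[:, 2].max() - layer_height]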

After configuration, click the Single Step Execution button in the upper right corner of the Procedure. The result is shown in the figure below.

point cloud preprocessing effect

3D Matching

Use the 3D Matching Step to locate the long sheet metal parts and output the poses and labels of the pick points of the target objects. Adjust the following parameters in the Step Parameters panel on the right:

  1. Create a target object model. In the Model Settings category, click the Target object editor button to open the editor. Import the processed point cloud to generate the point cloud model and pick points based on the features of the target object on site. The pick point should be consistent with the object center point.

  2. In the Parameter Tuning Level category, set the Parameter Tuning Level to Expert.

  3. Adjust the parameters in the Coarse Matching Settings category: Keep the Performance Mode parameter at its default value, Standard. Switch off the Auto-Set Max Outputs per Point Cloud toggle and change Max Outputs per Point Cloud to 40. Keep the Max Point Count of Sampled Scene Point Cloud parameter at its default value of 1000000.

  4. Adjust the parameters in the Fine Matching Settings category: Keep the default values Standard and Small of the parameters Performance Mode and Deviation Correction Capacity respectively. Then, switch off the toggle Auto-Set Max Outputs per Point Cloud, and set the value of Max Outputs per Point Cloud to 20.

After setting the above parameters, click the Run Step button. The matching result is shown in the figure below.

3d matching effect

Adjust Poses

After obtaining the target object poses, use the Adjust Poses V2 Step to align the X-axis orientations of the target objects, and adjust the image-capturing position of PRO S. Click the Config wizard button in the Step Parameters panel of the Adjust Poses V2 Step to open the Adjust Poses tool for pose adjustment configuration. The overall configuration process is shown in the figure below.

vision adjust poses overall
  1. Transform poses.

    To output the target object poses in the robot reference frame, please select the checkbox before Transform pose to robot reference frame to transform the poses from the camera reference frame to the robot reference frame.

  2. Pose adjustment.

    • Enable Custom mode for Pose adjustment.

    • Set the parameters in the category Rotate pose and minimize the angle between the rotation axis and target direction: To facilitate subsequent pose adjustment, the orientations of the target objects’ poses need to be aligned. Set the Axis to be fixed to Z-axis, the Axis to be rotated to X-axis, and the Target direction to the positive X-direction of the robot reference frame.

    • The field of view of PRO S is small. To ensure that the camera captures satisfactory images at the optimal capture distance, you need to adjust the parameters in the category Translate pose along specified direction twice:

      • Set the parameters in the category Translate pose along specified direction: Since PRO S has a small field of view, it is necessary to move the target object pose from the center to one end of the target object. In the Object reference frame, move the pose along the X-axis. Set the Translation distance to Enter manually, and set the value to 501 mm.

      • Set the parameters in the category Translate pose along specified direction: To ensure that PRO S captures images at the optimal distance, move the image-capturing pose above the target object. In the Robot reference frame, move the pose along the positive Z-direction. Set the Translation distance to Enter manually, and set the value to 1000 mm.

        Set the above two values of Translation distance based on the actual situation so that PRO S can achieve the best imaging effect (see the pose adjustment sketch after this list). After configuration, the following two conditions for capturing images should be satisfied:

        • The bin does not obscure the target objects.

        • The captured part of the bin should be as small as possible to reduce the impact of reflection from the bin walls.

  3. Sort poses.

    Set the Sorting type to Sort by X/Y/Z value of pose, set Specified value of the pose to Z-coordinate, and sort the poses in Descending order.

  4. General settings.

    Set the parameter Set number of new ports to 1, and a new input and output port will be added to the Step. Connect the input port to the Pick Point Info output port of the 3D Matching Step and connect the output port to the Output Step.
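
A sketch of the rotation and the two translations above, in NumPy. R and p are a pose's rotation matrix and position in the robot reference frame; the 501 mm and 1000 mm offsets are the example values from the steps above.

   import numpy as np

   def align_x_about_z(R, target=np.array([1.0, 0.0, 0.0])):
       """Keep the pose Z-axis fixed; rotate about it so that the X-axis
       is as close as possible to the target direction."""
       z = R[:, 2]
       t = target - np.dot(target, z) * z   # project target into the plane normal to Z
       if np.linalg.norm(t) < 1e-9:
           return R                         # degenerate: target parallel to Z
       x = t / np.linalg.norm(t)
       return np.column_stack([x, np.cross(z, x), z])

   R, p = np.eye(3), np.zeros(3)            # placeholder pose
   R = align_x_about_z(R)                   # align X with robot +X, Z fixed
   p = p + R[:, 0] * 0.501                  # 501 mm along the object X-axis
   p = p + np.array([0.0, 0.0, 1.0])        # 1000 mm along the robot +Z-axis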

Output Image-Capturing Points

Use the Procedure Out Step to send the pick point, pick point information, and preprocessed point cloud to the robot, and then trigger PRO S to capture images when the robot moves to this position.

Vis-Workpiece Positioning

The process of configuring this vision project is shown in the figure below.

workpiece positioning vision overall

Connect to the Camera and Capture Images

  1. Connect to the camera.

    Open Mech-Eye Viewer, find the camera to be connected, and click the Connect button.

    vision click connect camera
  2. Adjust camera parameters.

    To ensure that the captured 2D image is clear and the point cloud is intact, you need to adjust the camera parameters. For detailed instructions, please refer to PRO S Camera Parameter Reference.

  3. Capture images.

    After the camera is successfully connected and the parameter group is set, you can start capturing images of the target object. Click the capture button at the top to capture a single image. You can then view the captured 2D image and point cloud of the target object. Ensure that the 2D image is clear, the point cloud is intact, and the edges are clear. A qualified 2D image and point cloud of the target object are shown on the left and right of the figure below, respectively.

    workpiece positioning vision image and cloud
  4. Connect to the camera in Mech-Vision.

    Select the Capture Images from Camera Step, disable the Virtual Mode option in the Step Parameters panel, and click the Select camera button. In the pop-up window, click the connect icon to the right of the camera serial number. When the icon changes to the connected state, the camera is connected successfully. After the camera is connected, you can select the camera calibration parameter group in the drop-down list on the right.

    Now that you have connected to the real camera, you do not need to adjust other parameters. Click the run icon on the Capture Images from Camera Step to run the Step. If no error occurs, the camera is connected successfully and images can be captured properly.

Point Cloud Preprocessing

To remove the interfering point cloud, obtain the point cloud of the highest layer, and extract the edge point cloud and surface point cloud of the target objects, the Point Cloud Preprocessing Procedure is used. Adjust relevant parameters in the Step Parameters panel on the right:

  1. Set 3D ROI. Click the Open the editor button in the Extract 3D Points in 3D ROI category to set a 3D ROI. Remove the interfering point cloud, keep only the point cloud of the target objects, and keep a certain margin around the bin.

  2. Set the value of the Layer Height parameter in the Get Highest Layer Clouds category to only obtain the point cloud of the target objects on the highest layer.

After configuration, click the Single Step Execution button in the upper right corner of the Procedure. The results of extracting the edge point cloud of the target objects and obtaining the point cloud of the target objects on the highest layer are respectively shown in the left and right figures below.

workpiece positioning point cloud preprocessing effect

3D Matching

Use the 3D Matching Step to perform edge matching for the long sheet metal parts. The Step locates the point cloud of one end of the part and outputs the pick point based on it. Adjust the following parameters in the Step Parameters panel on the right:

  1. Create a target object model. In the Model Settings category, click the Target object editor button to open the editor. Import the processed point cloud to generate the point cloud model and pick points based on the features of the target object on site. Set the object center point at the center of the point cloud, and set the pick point at one end of the target object.

    Unlike the previous two projects, the pick point is not at the object center point but at the end of the object.
  2. In the Parameter Tuning Level category, set the Parameter Tuning Level to Expert.

  3. Adjust the parameters in the Coarse Matching Settings category: Keep the Performance Mode parameter at its default value, Standard. Switch off the Auto-Set Max Outputs per Point Cloud toggle. Keep the Max Point Count of Sampled Scene Point Cloud parameter at its default value of 1000000.

  4. Adjust the parameters in the Fine Matching Settings category: Keep the default values Standard and Small of the parameters Performance Mode and Deviation Correction Capacity respectively. Then, switch on the toggle Auto-Set Max Outputs per Point Cloud.

  5. Adjust the parameters in the Extra Fine Matching category: Switch on the toggle Extra Fine Matching. Keep the default values Standard and Small of the parameters Performance Mode and Deviation Correction Capacity respectively.

After setting the above parameters, click the Run Step button. The matching result is shown in the figure below.

workpiece positioning 3d matching effect

Adjust Poses

After obtaining the target object poses, use the Adjust Poses V2 Step to adjust them: move the target object poses from the ends to the centers of the target objects, and then sort and filter them. Click the Config wizard button in the Step Parameters panel of the Adjust Poses V2 Step to open the Adjust Poses tool for pose adjustment configuration.

  1. Transform poses.

    To output the target object poses in the robot reference frame, please select the checkbox before Transform pose to robot reference frame to transform the poses from the camera reference frame to the robot reference frame.

  2. Adjust pose orientations.

    Set Orientation to Auto alignment and Application scenario to Align Z-axes (Machine tending) to ensure that the robot picks in a specified direction, thereby avoiding collisions.

  3. Translate poses along the specified direction.

    In the Object reference frame, move the target object pose along the X-axis and manually set the Translation distance to -659 mm to move the pose from one end to the center of the target object.

    Translation distance = -1/2 × target object length
  4. Sort poses.

    Set the Sorting type to Sort by X/Y/Z value of pose, set Specified value of the pose to Z-coordinate, and sort the poses in Descending order.

  5. Filter poses.

    To reduce the time required for subsequent path planning, target objects that cannot be easily picked need to be filtered out based on the angle between the Z-axis of the pose in the robot reference frame and the reference direction. In this tutorial, set the Max angle difference to 30° (see the filtering sketch after this list).

  6. General settings.

    Set the parameter Set number of new ports to 1, and a new input and output port will be added to the Step. Connect the input port to the Pick Point Info output port of the 3D Matching Step and connect the output port to the Output Step.
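
A sketch of the end-to-center translation and the angle filter above, in NumPy. Poses are (R, p) pairs in the robot reference frame; the -659 mm offset and 30° limit are the example values from the steps, and ref is an assumed reference direction.

   import numpy as np

   def adjust_and_filter(poses, offset=-0.659, max_angle_deg=30.0,
                         ref=np.array([0.0, 0.0, 1.0])):
       kept = []
       for R, p in poses:
           p = p + R[:, 0] * offset  # move pick point: end -> center, object X-axis
           cos_a = np.clip(np.dot(R[:, 2], ref), -1.0, 1.0)
           if np.degrees(np.arccos(cos_a)) <= max_angle_deg:  # drop hard picks
               kept.append((R, p))
       kept.sort(key=lambda rp: rp[1][2], reverse=True)       # Z descending
       return kept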

Output Target Object Information

Use the Output Step to output the pick point, pick point information, preprocessed point cloud, etc., to Mech-Viz for path planning.

Path Planning

Once the target object recognition is complete, you can use Mech-Viz to plan a path and then write a robot program for picking the target objects.

The process of path planning configuration is shown in the figure below.

viz overall

Configure Scene Objects

Scene objects are introduced to make the scene in the software closer to the real scenario, which facilitates the robot path planning. For detailed instructions, please refer to Configure Scene Objects.

To ensure effective picking, scene objects should be configured to accurately represent the real operating environment. The scene objects in this solution are configured as shown below:

viz scene objects configuration effect

Configure Robot Tool

The end tool should be configured so that its model can be displayed in the 3D simulation area and used for collision detection. For detailed instructions, please refer to Configure Tool.

  • To save time when creating a collision model for the end tool, it’s not always necessary for the convex hulls you create to replicate every detail of the original model. You can omit certain details based on the specific requirements of the model.

  • The gripper model should be carefully built to closely match the actual tool, which is essential for accurate collision detection. For mechanical structures farther away from the pick point (target object), the model can be simplified by using cuboid convex hulls instead of complex structural details to improve efficiency. The figure below shows the original model on the left and the simplified model on the right; a convex-hull sketch follows the figure.

    viz end tool configuration effect
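
One quick way to produce such a simplified collision shape offline is to compute the convex hull of the tool mesh. A sketch with Open3D ("gripper.stl" is a placeholder path; the actual collision model is still configured in Mech-Viz):

   import open3d as o3d

   mesh = o3d.io.read_triangle_mesh("gripper.stl")  # original tool model
   hull, _ = mesh.compute_convex_hull()             # simplified convex shape
   hull.compute_triangle_normals()                  # normals required for STL export
   o3d.io.write_triangle_mesh("gripper_hull.stl", hull)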

Adjust the Workflow

The workflow refers to the robot motion control program created in Mech-Viz in the form of a flowchart. After the scene objects and end tools are configured, you can adjust the project workflow according to the actual requirements. The workflow of this project is shown in the figure below.

viz workflow

The Mech-Viz project consists of the following three branches:

  • The first branch corresponds to the Vis-Bin Positioning project in Mech-Vision. Use the Visual Recognition Step to trigger the project, which recognizes the bin pose and updates the scene object information.

  • The second branch corresponds to the Vis-Camera Positioning project in Mech-Vision. Use the Visual Recognition Step to trigger the project, which calculates and outputs the image-capturing point of the camera where the robot is guided to reach.

  • The third branch corresponds to the Vis-Workpiece Positioning project in Mech-Vision. Use the Visual Recognition Step to trigger the project, which outputs the poses of the long sheet metal parts to Mech-Viz for path planning and guiding the robot to pick.

Simulate and Test

Click the Simulate button on the toolbar to test whether the vision system is set up successfully by simulating the Mech-Viz project.

Place the target objects neatly in the bin and click Simulate in the Mech-Viz toolbar to simulate picking. After each successful pick, rearrange the target objects; conduct 10 simulation tests in total. If all 10 simulations result in successful picks, the vision system is set up successfully.

If an exception occurs during simulation, refer to the Solution Deployment FAQs to resolve the problem.

Robot Picking and Placing

Write a Robot Program

If the simulation result meets expectations, you can write a pick-and-place program for the KUKA robot.

The example program MM_S2_Viz_Basic for the KUKA robot can basically satisfy the requirements of this typical case. You can modify this example program. For a detailed explanation of the program, please refer to the S2 Example Program Explanation.

Modification Instruction

Based on the example program, please modify the program files by following these steps:

  1. Set the tool reference frame and base reference frame.

    Before modification:

       ;set current tool no. to 1
       BAS(#TOOL,1)
       ;set current base no. to 0
       BAS(#BASE,0)

    After modification (example):

       ;set current tool no. to 2
       BAS(#TOOL,2)
       ;set current base no. to 0
       BAS(#BASE,0)

    Please replace the numbers with those of the tool and base actually being used; "2" and "0" above are examples only.
  2. Specify the IP address of the IPC. The XML_Kuka_MMIND.xml configuration file is loaded to the KUKA robot together with the Standard Interface program, and you can modify the IP address and port of the IPC in this file before loading. If they have not been modified yet, open the XML_Kuka_MMIND.xml file specified by the MM_Init_Socket command and update the IP address and port in it to those of the IPC.

  3. Add a branch command to select the branch to run and trigger the Mech-Viz project.

    Before modification:

       ;trigger Mech-Viz project
       MM_Start_Viz(2,init_jps)

    After modification (example):

       ;trigger Mech-Viz project
       MM_Start_Viz(2,init_jps)
       ;Set_Branch
       MM_Set_Branch(1,1)
    • Please refer to the instruction above to add commands that select branches 2 and 3 and execute the Mech-Viz project.

    • How the three branches of the Mech-Viz project work together:

      • The robot program first selects branch 1 and executes the Mech-Viz project to trigger the vision project Vis-Bin Positioning that outputs the bin pose.

      • If a bin pose is output, the robot program then selects branch 2 to execute the Mech-Viz project to trigger the vision project Vis-Camera Positioning to output the image-capturing position.

      • If a camera image-capturing position is output, the robot program will control the robot to move to the image-capturing position. Then, the robot program selects branch 3 and executes the Mech-Viz project to trigger the vision project Vis-Workpiece Positioning, thereby planning a collision-free picking path for the robot.

  4. Set the signal for the DO port to perform picking, i.e., to close the gripper and pick the target object. Note that the DO command should be set according to the actual DO port number used on site.

    Before modification:

       ;add object grasping logic here, such as "$OUT[1]=TRUE"
       halt

    After modification (example):

       ;add object grasping logic here, such as "$OUT[2]=TRUE"
       $OUT[2]=TRUE
       halt
  5. Set the DO port to perform placing. Note that the DO command should be set according to the actual DO port number used on site.

    Before modification:

       ;add object releasing logic here, such as "$OUT[1]=FALSE"
       halt

    After modification (example):

       ;add object releasing logic here, such as "$OUT[2]=FALSE"
       $OUT[2]=FALSE
       halt

Picking Test

To ensure stable production in the actual scenario, the modified example program should be run to perform the picking test with the robot. For detailed instructions, please refer to Test Standard Interface Communication.

Before performing the picking test, please teach the Home position, namely, the taught initial position. The initial position should be away from the objects to be picked and surrounding devices, and should not block the camera’s field of view.

After teaching, arrange the target objects as described below, and use the robot to conduct picking tests for all arrangements at a low speed.

The picking tests can be divided into three phases:

Phase 1: Picking Test for Normal Infeed Status

  • Target objects are neatly arranged, and the highest layer is full. (Illustration: picking test 1)

  • Target objects are neatly arranged, and the lowest layer is full. (Illustration: picking test 2)

Phase 2: Picking Test for Abnormal Infeed Status

  • Target objects are rotated in the plane. (Illustration: picking test 3)

Phase 3: Picking Test for Real Scenario

  • With the bin full of target objects in the normal infeed status, simulate the scenario where a target object drops out of place. (Illustration: picking test 4)

If the robot successfully picks the target object(s) in the test scenarios above, the vision system can be considered successfully deployed.
