Vision System Deployment

This section introduces how to deploy a square steel billet solution, including the vision system hardware setup and the vision solution deployment.

Vision System Hardware Setup

Vision system hardware setup refers to integrating the hardware (camera and industrial PC) into the actual environment to support the normal operation of the vision system.

In this phase, you need to install and set up the hardware of the vision system. For details, refer to Vision System Hardware Setup.

Vision Solution Deployment

This section introduces the deployment of a square steel billet vision solution. The overall process is shown in the figure below.

solution configuration overview

Robot Communication Configuration

Before configuring robot communication, you need to obtain the solution first by following the steps below.
  1. Open Mech-Vision.

  2. In the Welcome interface of Mech-Vision, click Create from solution library to open the Solution Library.

  3. Enter the Typical cases category in the Solution Library, click the get resource icon in the upper right corner for more resources, and then click the Confirm button in the pop-up window.

  4. After acquiring the solution resources, select the Square Steel Billets solution under the Randomly-stacked part picking category, fill in the Solution name and Path at the bottom, and finally click the Create button. Then, click the Confirm button in the pop-up window to download the Square Steel Billets solution.

    Once the solution is downloaded, it will be automatically opened in Mech-Vision.

Before deploying a Mech-Mind vision solution, you need to set up the communication between the Mech-Mind Vision System and the robot side (robot, PLC or host computer).

The Square Steel Billets solution uses Standard Interface communication. For detailed instructions, please refer to Standard Interface Communication Configuration.

Hand-Eye Calibration

Hand-eye calibration establishes the transformation relationship between the camera and robot reference frames. With this relationship, the object pose determined by the vision system can be transformed into that in the robot reference frame, which guides the robot to perform its tasks.

Please refer to Robot Hand-Eye Calibration Guide and complete the hand-eye calibration.

Every time the camera is mounted, or the relative position of the camera and the robot changes after calibration, it is necessary to perform hand-eye calibration again.
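Conceptually, the calibration result is a rigid transform that maps camera-frame coordinates into the robot base frame. The sketch below uses a made-up extrinsic matrix (not values from any real calibration) to show how that mapping works; Mech-Vision applies the real transform internally once the calibration parameter group is selected.

```python
import numpy as np

# Hypothetical hand-eye calibration result: a 4x4 homogeneous transform
# from the camera reference frame to the robot base frame (illustrative
# values only; a real extrinsic matrix comes from hand-eye calibration).
T_base_cam = np.array([
    [0.0, -1.0, 0.0, 0.50],
    [-1.0, 0.0, 0.0, 0.10],
    [0.0, 0.0, -1.0, 1.20],   # camera looking down at the workspace
    [0.0, 0.0, 0.0, 1.0],
])

def camera_to_robot(p_cam):
    """Transform a 3D point from the camera frame to the robot base frame."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coordinates
    return (T_base_cam @ p)[:3]

# An object detected 0.8 m in front of the camera
print(camera_to_robot([0.0, 0.0, 0.8]))
```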

Vision Project Configuration

After completing the communication configuration and hand-eye calibration, you can use Mech-Vision to configure the vision project.

The process of how to configure a vision project is shown in the figure below.

vision overall

Connect to the Camera and Capture Images

  1. Connect to the camera.

    Open Mech-Eye Viewer, find the camera to be connected, and click the Connect button.

    vision click connect camera
  2. Adjust camera parameters.

    To ensure that the captured 2D image is clear and the point cloud is intact, you need to adjust the camera parameters. For detailed instructions, please refer to LSR L Camera Parameter Reference.

  3. Capture images.

    After the camera is successfully connected and the parameter group is set, you can start capturing the target object images. Click the vision click capture icon button on the top to capture a single image. At this time, you can view the captured 2D image and point cloud of the target object. Ensure that the 2D image is clear, the point cloud is intact, and the edges are clear. The qualified 2D image and point cloud of the target object are shown on the left and right in the figure below respectively.

    vision image and cloud
  4. Connect to the camera in Mech-Vision.

    Select the Capture Images from Camera Step, disable the Virtual Mode option, and click the Select camera button.

    vision select camera

    In the pop-up window, click the vision connect camera before icon icon on the right of the camera serial number. When the icon turns into vision connect camera after icon, the camera is connected successfully. After the camera is connected successfully, you can select the camera calibration parameter group in the drop-down list on the right, as shown below.

    vision connect camera

    Now that you have connected to the real camera, you do not need to adjust other parameters. Click the vision run step camera icon icon on the Capture Images from Camera Step to run the Step. If there is no error, the camera is connected successfully and the images can be captured properly.

3D Target Object Recognition (to Recognize Target Object)

This solution uses the 3D Target Object Recognition Step to recognize target objects. Click the Config wizard button in the Step Parameters panel of the 3D Target Object Recognition Step to open the 3D Target Object Recognition tool to configure relevant settings. The overall configuration process is shown in the figure below.

vision 3d target object recognition overall
Point Cloud Preprocessing

During point cloud preprocessing, you adjust parameters to make the original point cloud clearer, thus improving recognition accuracy and efficiency.

  1. Set recognition region.

    Set an effective recognition region to exclude interference and improve recognition efficiency.

  2. Adjust parameters.

    Set the Edge extraction effect, Noise removal level, and Point filter parameters to remove noise.

After point cloud preprocessing, click the Run Step button.

vision point cloud preprocessing effect
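The two preprocessing ideas above, a recognition region and noise removal, can be sketched in a few lines of NumPy. This is an illustrative stand-in for the built-in Steps, not Mech-Vision's actual implementation; the box limits and filter parameters are made up.

```python
import numpy as np

def crop_to_region(points, lo, hi):
    """Keep only points inside an axis-aligned recognition region (3D box)."""
    mask = np.all((points >= lo) & (points <= hi), axis=1)
    return points[mask]

def remove_outliers(points, k=8, std_ratio=2.0):
    """Drop points whose mean distance to their k nearest neighbors is
    abnormally large (a simple statistical noise filter)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self-distance
    keep = knn_mean <= knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(200, 3))       # dense cluster of points
cloud = np.vstack([cloud, [[5.0, 5.0, 5.0]]])      # one far-away noise point
cloud = crop_to_region(cloud, lo=[-1.0, -1.0, -1.0], hi=[6.0, 6.0, 6.0])
clean = remove_outliers(cloud)
print(len(cloud), len(clean))
```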
Recognize Target Object

After point cloud preprocessing, you need to create a point cloud model for the target object in the Target Object Editor, and then set matching parameters in the 3D Target Object Recognition tool for point cloud model matching.

  1. Create a target object model.

    Click the Open target object editor button to open the editor, then import the STL file to generate a point cloud model for the target object.

  2. Set parameters related to object recognition.

    • Enable Advanced mode on the right side of Recognize target object.

    • Matching mode: Enable Auto-set matching mode. Once enabled, this Step will automatically adjust the parameters under Coarse matching settings and Fine matching settings.

    • Extra fine matching: Enable extra fine matching to perform a second fine matching with the surface model on the matching result, improving the picking accuracy in the Z-direction.

    • Confidence settings: Set Confidence strategy to Manual, Joint scoring strategy to Consider both surface and edge, and set Surface matching confidence threshold to a high value, such as 0.8, to remove incorrect matching results.

    • Output—Max outputs: Minimize the number of outputs to reduce matching time, while ensuring that path planning requirements are met. In this solution, the Max outputs parameter is set to 15.

    • Remove coinciding poses and remove overlapped poses: To remove coinciding and overlapping recognition results, enable the Remove poses of coinciding objects and Remove poses of overlapped objects options, and set their respective thresholds to 30% and 20%.

After setting the above parameters, click the Run Step button. The matching result is shown in the figure below.

vision target object recognition effect
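The effect of the Confidence settings and Max outputs parameters above can be illustrated with a short stand-alone sketch (hypothetical match data; the Step applies this filtering internally):

```python
# Illustrative post-filtering of matching results: keep matches above the
# surface matching confidence threshold, then cap the number of outputs,
# mirroring the Confidence settings and Max outputs parameters.
CONFIDENCE_THRESHOLD = 0.8
MAX_OUTPUTS = 15

# Hypothetical matching results (pose IDs and confidences are made up)
matches = [
    {"pose_id": i, "confidence": c}
    for i, c in enumerate([0.95, 0.91, 0.86, 0.79, 0.83, 0.60, 0.88])
]

kept = [m for m in matches if m["confidence"] >= CONFIDENCE_THRESHOLD]
kept.sort(key=lambda m: m["confidence"], reverse=True)  # best matches first
kept = kept[:MAX_OUTPUTS]
print([m["pose_id"] for m in kept])
```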
Configure Step Ports

After target object recognition, Step ports should be configured to provide vision results and point clouds for Mech-Viz for path planning and collision detection.

To ensure that objects can be successfully picked by the robot, you need to adjust the center point of the target object so that its Z-axis points upwards. Under Select port, select Port(s) related to object center point, and select the Preprocessed point cloud option. Then click the Save button. A new output port is added to the 3D Target Object Recognition Step, as shown below.

vision general settings effect
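Internally, pointing the center point's Z-axis upwards amounts to flipping a pose whose Z-axis faces down. A minimal sketch with rotation matrices (illustrative only; the Step performs this adjustment itself):

```python
import numpy as np

def flip_z_up(R):
    """If the pose's Z-axis (third column of rotation matrix R) points
    downward, rotate the pose 180 degrees about its own X-axis so the
    Z-axis points up while the X-axis is preserved."""
    if R[2, 2] < 0:  # world-Z component of the pose's Z-axis
        return R @ np.diag([1.0, -1.0, -1.0])  # 180 deg rotation about X
    return R

R_down = np.diag([1.0, -1.0, -1.0])  # a pose with its Z-axis pointing straight down
R_up = flip_z_up(R_down)
print(R_up[:, 2])  # the Z-axis now points up
```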

3D Target Object Recognition (to Recognize Bin)

This solution uses the 3D Target Object Recognition Step to recognize the bin. Click the Config wizard button in the Step Parameters panel of the 3D Target Object Recognition Step to open the 3D Target Object Recognition tool to configure relevant settings. The overall configuration process is shown in the figure below.

vision 3d target object recognition overall
Point Cloud Preprocessing

During point cloud preprocessing, you adjust parameters to make the original point cloud clearer, thus improving the recognition accuracy and efficiency.

  1. Set recognition region.

    Set an effective recognition region to exclude interference and improve recognition efficiency.

  2. Adjust parameters.

    Set the Edge extraction effect, Noise removal level, and Point filter parameters to remove noise.

After point cloud preprocessing, click the Run Step button.

vision bin point cloud preprocessing effect
Recognize Target Object

After point cloud preprocessing, you need to create a point cloud model for the bin in the Target Object Editor, and then set matching parameters in the 3D Target Object Recognition tool for point cloud model matching.

  1. Create a target object model.

    Click the Open target object editor button to open the editor, then generate a point cloud model based on the point cloud acquired by the camera and add the pick point.

  2. Set parameters related to object recognition.

    • Matching mode: Enable Auto-set matching mode.

    • Confidence settings: Set the Confidence threshold to 0.7 to remove incorrect matching results.

    • Output—Max outputs: Since the target object is a bin, set the Max outputs to 1.

After setting the above parameters, click the Run Step button. The matching result is shown in the figure below.

vision bin recognition effect
Configure Step Ports

After target object recognition, Step ports should be configured to provide vision results and point clouds for Mech-Viz for path planning and collision detection.

To obtain the position information of the real bin, select the Port(s) related to object center point option under Select port, and click the Save button. New output ports are added to the 3D Target Object Recognition Step, as shown below.

vision bin general settings effect

Adjust Poses (Target Object Poses)

After obtaining the target object poses, you need to use the Adjust Poses V2 Step to adjust the poses. Click the Config wizard button in the Step Parameters panel of the Adjust Poses V2 Step to open the pose adjustment tool for pose adjustment configuration. The overall configuration process is shown in the figure below.

vision adjust poses overall
  1. Transform poses.

    To output the target object poses in the robot reference frame, please select the checkbox before Transform pose to robot reference frame to transform the poses from the camera frame to the robot frame.

  2. Adjust pose orientations.

    Set Orientation to Point to reference point and Pointing axis to Z-axis, which enables the robot to pick target objects in the specified direction to avoid collisions.

  3. Sort poses.

    Set the Sorting type to Sort by X/Y/Z value of pose, set Specified value of the pose to Z-coordinate, and sort the poses in Descending order.

  4. Filter by angle.

    To reduce the time required for subsequent path planning, target objects that cannot be easily picked need to be filtered based on the angle between the Z-axis of the pose and the reference direction. In this tutorial, you need to set the Max angle difference to 90°.

  5. General settings.

    Set number of new ports to 1, and a new input and output port will be added to the Step. Connect the input port to the Target Object Names output port of the 3D Target Object Recognition Step and connect the output port to the Output Step.
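Steps 3 and 4 above, sorting by the Z value and filtering by angle, can be sketched as follows (hypothetical pose data; in practice the Adjust Poses V2 Step does this internally):

```python
import numpy as np

def sort_and_filter(poses, max_angle_deg=90.0, reference=(0.0, 0.0, 1.0)):
    """Sort poses by Z-coordinate in descending order (topmost objects
    first), then drop poses whose Z-axis deviates from the reference
    direction by more than the max angle difference."""
    ref = np.asarray(reference)
    ordered = sorted(poses, key=lambda p: p["position"][2], reverse=True)
    kept = []
    for p in ordered:
        cos_angle = float(np.clip(np.dot(p["z_axis"], ref), -1.0, 1.0))
        if np.degrees(np.arccos(cos_angle)) <= max_angle_deg:
            kept.append(p)
    return kept

# Made-up poses: position (x, y, z) plus the unit vector of the pose's Z-axis
poses = [
    {"position": (0.2, 0.1, 0.30), "z_axis": (0.0, 0.0, 1.0)},    # upright
    {"position": (0.1, 0.3, 0.45), "z_axis": (0.0, 0.0, -1.0)},   # upside down, filtered out
    {"position": (0.4, 0.2, 0.38), "z_axis": (0.866, 0.0, 0.5)},  # tilted about 60 degrees
]
result = sort_and_filter(poses)
print([p["position"][2] for p in result])
```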

Adjust Poses (Bin Poses)

After obtaining the bin pose, you need to use the Adjust Poses V2 Step to adjust the pose. Click the Config wizard button in the Step Parameters panel of the Adjust Poses V2 Step to open the pose adjustment tool for pose adjustment configuration. The overall configuration process is shown in the figure below.

vision adjust bin poses overall
  1. Select pose processing strategy.

    Since the recognized object is a deep bin that holds the target objects, select the Bin option.

  2. Transform poses.

    To output the bin pose in the robot reference frame, please select the checkbox before Transform pose to robot reference frame to transform the pose from the camera frame to the robot frame.

  3. Translate poses along specified direction.

    In the Robot reference frame, move the bin pose along the Positive Z-direction and manually adjust the Translation distance to -285 mm to move the bin pose from the top surface of the bin down to the bin center, which will be used to update the bin collision model in Mech-Viz later.

    Translation distance = -1 × 1/2 Bin height
  4. Sort poses.

    Set the Sorting type to Sort by X/Y/Z value of pose, set Specified value of the pose to Z-coordinate, and sort the poses in Descending order.

  5. Filter by angle.

    To reduce the time required for subsequent path planning, target objects that cannot be easily picked need to be filtered based on the angle between the Z-axis of the pose and the reference direction. In this tutorial, you need to set the Max angle difference to 90°.

  6. General settings.

    Set number of new ports to 1, and a new input and output port will be added to the Step. Connect the input port to the Target Object Names output port of the 3D Target Object Recognition Step and connect the output port to the Output Step.
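The translation distance in step 3 can be sanity-checked with a few lines of arithmetic. The 570 mm bin height below is inferred from the -285 mm value given above and is only illustrative:

```python
# Translation distance = -1 x 1/2 bin height (moves the bin pose from the
# top surface down to the bin center). Bin height here is hypothetical.
BIN_HEIGHT_MM = 570.0
translation_distance = -0.5 * BIN_HEIGHT_MM  # -285.0 mm, as in this solution

def translate_along_z(position, z_axis, distance):
    """Translate a pose position along its own Z-axis by `distance`."""
    return tuple(p + distance * a for p, a in zip(position, z_axis))

top_center = (1000.0, 200.0, 600.0)  # made-up bin pose on the top surface (mm)
bin_center = translate_along_z(top_center, (0.0, 0.0, 1.0), translation_distance)
print(bin_center)
```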

Output Object Information

Use the Output Step to output the information of the object center point, preprocessed point cloud, target object name, bin name, bin pose, etc., to Mech-Viz for path planning.

Path Planning

Once the target object recognition is complete, you can use Mech-Viz to plan a path and then write a robot program for picking the target objects.

The process of path planning configuration is shown in the figure below.

viz overall
Configure Scene Objects

Scene objects are introduced to make the scene in the software closer to the real scenario, which facilitates the robot path planning. For detailed instructions, please refer to Configure Scene Objects.

To ensure effective picking, scene objects should be configured to accurately represent the real operating environment. The scene objects in this solution are configured as shown below.

viz scene objects configuration effect
Configure Robot Tool

The end tool should be configured so that its model can be displayed in the 3D simulation area and used for collision detection. For detailed instructions, please refer to Configure Tool.

  • To save time when creating a collision model for the end tool, it’s not always necessary for the convex hulls you create to replicate every detail of the original model. You can omit certain details based on the specific requirements of the model.

  • For parts that make direct contact with the target object during picking, it is important to faithfully reproduce their shapes to guarantee the accuracy of collision detection. For mechanical structures that are farther away from the pick point (target object), the design can be simplified by using cuboid convex hulls instead of complex structural designs to improve efficiency. The figure below shows the original model on the left and the simplified model on the right.

    viz end tool configuration effect
Adjust the Workflow

The workflow refers to the robot motion control program created in Mech-Viz in the form of a flowchart. After the scene objects and end tools are configured, you can adjust the project workflow according to the actual requirements. The flowchart of the logical processing when picking the target object is shown below.

viz adjust workflow overall

Examples of the successful robot picking are shown below:

  1. Pick on the front of the target object with the front of the tool:

    viz normal picking 1
  2. Pick on the corner of the target object with the front of the tool:

    viz normal picking 2
  3. Pick on the front of the target object with one side of the tool:

    viz normal picking 3
  4. Pick on the side of the target object with one side of the tool:

    viz normal picking 4

When Standard Interface communication is used, the workflow of the project is shown below.

viz adjust workflow non master
Simulate and Test

Click the Simulate button on the toolbar to test whether the vision system is set up successfully by simulating the Mech-Viz project.

Place the target objects randomly in the bin and click Simulate in the Mech-Viz toolbar to simulate picking. After each successful pick, rearrange the target objects; conduct 10 simulation tests in total. If all 10 simulations result in successful picks, the vision system is set up successfully.

If an exception occurs during simulation, refer to the Solution Deployment FAQs to resolve the problem.

Robot Picking and Placing

Write a Robot Program

If the simulation result meets expectations, you can write a pick-and-place program for the Kawasaki robot.

The Kawasaki example programs for picking can basically meet the requirements of this typical case, and you can modify them as needed. For a detailed explanation of the Kawasaki example programs for picking, please refer to Example Program Explanation.

Modification Instruction

Based on the example program, please modify the program files by following these steps:

  1. Define the TCP.

    Before modification:

      TOOL gripper ;set TCP

    After modification (example):

      point tcp1 = trans(0,37.517,390.13,-15,0,0)
      TOOL tcp1 ;set TCP
  2. Set the DO port to add tool control logic to initialize the tool status.

    Before modification:

      (none)

    After modification (example):

      signal 10,-9;set do off
  3. Specify the IP address and port number of the IPC. Change the IP address and port number in the MM_Init_Socket command to those of the vision system (the first four arguments of mm_init_skt are the octets of the IP address, and the fifth is the port number).

    Before modification:

      ;Set ip address of IPC
      call mm_init_skt(127,0,0,1,50000)

    After modification (example):

      ;Set ip address of IPC
      call mm_init_skt(128,1,1,2,60000)
  4. Trigger the Mech-Viz project to run, switch to branch 3 to reset the palletizing records, and then switch to branch 1 to start visual recognition. Then, determine whether branch 2 needs to be used for visual recognition according to the status code indicating whether the planned path was successfully obtained from Mech-Viz.

    Before modification:

      ;Run Viz project
      call mm_start_viz(1,#start_viz) ;(2,#start_viz) used for ETH viz initial position
      twait 0.1
      ;set branch exitport
      ;call mm_set_branch(1,1)
      ;get planned path
      call mm_get_vizdata(2,pos_num,vispos_num,ret1)

    After modification:

      ;Init Palletizing
      CALL mm_start_viz(2,#start_viz);(2,#start_viz) used for ETH viz initial position
      TWAIT 0.1
      call mm_set_branch(7,2);init Palletizing
      TWAIT 0.1
      CALL mm_start_viz(2,#start_viz);(2,#start_viz) used for ETH viz initial position
      TWAIT 0.1
      call mm_set_branch(7,1)
      10 CALL mm_get_vizdata(1,pos_num,vispos_num,ret1)
  5. Move the robot along the planned path to the pick point, and set the DO port to add a signal to close the gripper to pick the target object.

    Before modification:

      ;follow the planned path to pick
      for count =1 to pos_num
        speed speed[count]
        LMOVE movepoint[count]
        if count == vispos_num then
            ;add object grasping logic here

    After modification (example):

      ;follow the planned path to pick
      JMOVE #movepoint[1]
      JMOVE #movepoint[2]
      JMOVE #movepoint[3]
      JMOVE #movepoint[4]
      LMOVE #movepoint[5]
      BREAK
      signal 9,-10;set do on
      TWAIT 0.2
      LMOVE #movepoint[6]
      JMOVE #movepoint[7]
      JMOVE #movepoint[8]
  6. Trigger the Mech-Viz project and switch to branch 1 to capture images in advance.

    Before modification:

      (none)

    After modification:

      CALL mm_start_viz(2,#start_viz);(2,#start_viz) used for ETH viz initial position
      TWAIT 0.1
      call mm_set_branch(7,1)
  7. Determine the next movement according to the result of “flag”, and finally move the robot to the placing waypoint.

    Before modification:

      (none)

    After modification:

      ;go to drop location
      JMOVE #movepoint[9]
      JMOVE #movepoint[10]
      break
      twait 0.2
      JMOVE #movepoint[11]
  8. Set the DO port to perform placing. Note that the DO command should be set according to the actual DO port number used on site.

    Before modification:

      ;add object releasing logic here

    After modification (example):

      ;signal 10,-9;set do on
Reference: Modified Example Program
.PROGRAM vision_sample_2()
;---------------------------------------------------------
;* FUNCTION:simple pick and place with Mech-Viz
;* mechmind
;---------------------------------------------------------
  accuracy 1 always
  speed 30 always
  point tcp1 = trans(0,37.517,390.13,-15,0,0)
  TOOL tcp1 ;set TCP
  signal 10,-9;set do off
  Home ;move robot home position
  JMOVE camera_capture ;move to camera_capture position
  break
  pos_num = 0
  ;Set ip address of IPC
  call mm_init_skt(128,1,1,2,60000)
  twait 0.1
  ;Set vision recipe
  ;call mm_switch_model(1,1)
  ;Init Palletizing
  CALL mm_start_viz(2,#start_viz);(2,#start_viz) used for ETH viz initial position
  TWAIT 0.1
  call mm_set_branch(7,2);init Palletizing
  TWAIT 0.1
  CALL mm_start_viz(2,#start_viz);(2,#start_viz) used for ETH viz initial position
  TWAIT 0.1
  call mm_set_branch(7,1)
  10 CALL mm_get_vizdata(1,pos_num,vispos_num,ret1)
  if ret1 <> 2100
    halt
  end
  for count=1 to pos_num
    call mm_get_pose(count,&movepoint[count],label[count],speed[count])
  end
  ;follow the planned path to pick
  JMOVE #movepoint[1]
  JMOVE #movepoint[2]
  JMOVE #movepoint[3]
  JMOVE #movepoint[4]
  LMOVE #movepoint[5]
  BREAK
  signal 9,-10;set do on
  TWAIT 0.2
  LMOVE #movepoint[6]
  JMOVE #movepoint[7]
  JMOVE #movepoint[8]
  CALL mm_start_viz(2,#start_viz);(2,#start_viz) used for ETH viz initial position
  TWAIT 0.1
  call mm_set_branch(7,1)

  ;go to drop location
  JMOVE #movepoint[9]
  JMOVE #movepoint[10]
  break
  twait 0.2
  JMOVE #movepoint[11]
  end
  ;signal 10,-9;set do on
  HOME
END

Picking Test

To ensure stable production in the actual scenario, the modified example program should be run to perform a picking test with the robot. For detailed instructions, please refer to Test Standard Interface Communication.

Before performing the picking test, please teach the following waypoints.

  • Tool Center Point (variable: TCP): Defined by the pose variable "gripper." Please use the teach pendant to teach it.

  • Home position (variable: home): The taught initial position. The initial position should be away from the objects to be picked and surrounding devices, and should not block the camera's field of view.

  • Image-capturing position (variable: camera_capture): The taught position of the robot at which the camera captures images. At this position, the robot arm should not block the camera's FOV.

  • Placing waypoint (variable: movepoint[11]): The position for placing the target object.

After teaching the waypoints, arrange the target objects as described below, and use the robot to conduct picking tests for all arrangements at a low speed.

The picking tests can be divided into three phases:

Phase 1: Test with Single Target Object

  • Target object placed in the left-right direction in the middle of the bin

    picking test 1

  • Target object placed in the top-down direction in the middle of the bin

    picking test 2

  • Target object placed vertically in the middle of the bin

    picking test 3

  • Target object placed in the corner of the bin

    picking test 4
    picking test 5
Phase 2: Interference Test with Neighboring Target Objects

  • Target objects fitted closely with each other's curved surfaces in the middle of the bin

    picking test 6

  • Target objects placed in the middle of the bin, with their ends closely aligned

    picking test 7
Phase 3: Test in Real Scenario

  • Target objects randomly stacked, similar to the real scenario

    picking test 8

If the robot successfully picks the target object(s) in the test scenarios above, the vision system can be considered successfully deployed.
