Solution Deployment

This section introduces the deployment of the Stators solution. The overall process is shown in the figure below.

[Figure: solution configuration overview]

Vision System Hardware Setup

Vision system hardware setup refers to integrating the hardware (camera and industrial PC) into the actual environment to support the normal operation of the vision system.

In this phase, you need to install and set up the hardware of the vision system. For details, refer to Vision System Hardware Setup.

Robot Communication Configuration

Before configuring robot communication, you need to obtain the solution first:
  1. Open Mech-Vision.

  2. In the Welcome interface of Mech-Vision, click Create from Solution Library to open the Solution Library.

  3. Enter the Typical cases category in the Solution Library, click the resource icon in the upper-right corner to get more resources, and then click the Confirm button in the pop-up window.

  4. After acquiring the solution resources, select the Stators solution under the Neatly-arranged part picking category, fill in the Solution name and Path at the bottom, and click the Create button. Then, click the OK button in the pop-up window to download the Stators solution.

    Once the solution is downloaded, it will be automatically opened in Mech-Vision.

Before deploying a vision project, you need to set up the communication between the Mech-Mind Vision System and the robot side (robot, PLC, or host computer).

The Stators solution uses Standard Interface communication. For detailed instructions, please refer to Standard Interface Communication Configuration.
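
On the robot side, Standard Interface communication reduces to a handful of commands. The following RAPID fragment, condensed from the full example program later in this section, sketches the basic handshake; the IP address and port are placeholders that must match your actual IPC settings.

    !initialize communication parameters (required only once)
    MM_Init_Socket "192.168.1.5",50000,400;
    !open the socket connection
    MM_Open_Socket;
    !trigger Mech-Vision project No. 1
    MM_Start_Vis 1,0,2,snap_jps;
    !get the planned path from Mech-Vision project No. 1
    MM_Get_VisPath 1,1,pose_num,vis_pose_num,status;
    !close the socket connection
    MM_Close_Socket;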

Hand-Eye Calibration

Hand-eye calibration establishes the transformation relationship between the camera and robot reference frames. With this relationship, the object pose determined by the vision system can be transformed into that in the robot reference frame, which guides the robot to perform its tasks.
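
In homogeneous-transform notation (ours, for illustration), hand-eye calibration determines the transform from the camera frame C to the robot frame R, so that an object pose measured by the camera can be expressed in the robot frame:

    {}^{R}T_{obj} = {}^{R}T_{C} \, {}^{C}T_{obj}

Here {}^{C}T_{obj} is the object pose output by the vision system, {}^{R}T_{C} is the result of hand-eye calibration, and {}^{R}T_{obj} is the pose that guides the robot.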

Please refer to Robot Hand-Eye Calibration Guide and complete the hand-eye calibration.

Hand-eye calibration must be performed again whenever the camera is remounted or the relative position of the camera and the robot changes after calibration.

Vision Project Configuration

After completing the communication configuration and hand-eye calibration, you can use Mech-Vision to configure the vision project.

The process of configuring a vision project is shown in the figure below.

[Figure: vision overall]

Connect to the Camera and Capture Images

  1. Connect to the camera.

    Open Mech-Eye Viewer, find the camera to be connected, and click the Connect button.

    [Figure: vision click connect camera]
  2. Adjust camera parameters.

    To ensure that the captured 2D image is clear and the point cloud is intact, you need to adjust the camera parameters. For detailed instructions, please refer to LSR L-GL Camera Parameter Reference.

  3. Capture images.

    After the camera is successfully connected and the parameter group is set, you can start capturing images of the target object. Click the single-capture button on the top to capture an image. You can then view the captured 2D image and point cloud of the target object. Ensure that the 2D image is clear and the point cloud is intact with clear edges. A qualified 2D image and point cloud of the target object are shown on the left and right of the figure below, respectively.

    [Figure: camera vision image and cloud]
  4. Connect to the camera in Mech-Vision.

    Select the Capture Images from Camera Step, disable the Virtual Mode option in the Step Parameters panel, and click the Select camera button.

    [Figure: vision select camera]

    In the pop-up window, click the connect icon to the right of the camera serial number. Once the icon changes to the connected state, the camera is connected successfully. You can then select the camera calibration parameter group in the drop-down list on the right, as shown below.

    [Figure: vision connect camera]

    Now that you have connected to the real camera, you do not need to adjust other parameters. Click the run icon on the Capture Images from Camera Step to run the Step. If no error occurs, the camera is connected successfully and images can be captured properly.

3D Target Object Recognition

The Stators solution uses the 3D Target Object Recognition Step to recognize target objects. Click the Config wizard button in the Step Parameters panel of the 3D Target Object Recognition Step to open the 3D Target Object Recognition tool to configure relevant settings. The overall configuration process is shown in the figure below.

[Figure: vision 3d target object recognition overall]

Point Cloud Preprocessing

Point cloud preprocessing adjusts parameters to make the original point cloud clearer, thus improving recognition accuracy and efficiency.

  1. Set recognition region.

    Set a valid recognition region (ROI) to exclude interference and improve recognition efficiency. The ROI should include the point clouds of the target objects and the tray and exclude the rest of the scene point cloud. To accommodate the positional deviation of incoming objects, the length and width of the ROI can each exceed the tray dimensions by 100 mm (see the note after these steps).

  2. Adjust parameters.

    In most cases, keep the default values of these parameters. If noise is still prevalent in the scene point cloud, try adjusting the relevant parameters to filter out the noise.

After point cloud preprocessing, click the Run Step button.
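
As a concrete sizing rule for the ROI in step 1 (the tray dimensions below are hypothetical, for illustration only):

    ROI_L = L_{tray} + 100\,\mathrm{mm}, \qquad ROI_W = W_{tray} + 100\,\mathrm{mm}

For example, a 600 × 400 mm tray would give a 700 × 500 mm ROI.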

Recognize Target Object

After point cloud preprocessing, you need to create a point cloud model for the target object in the Target Object Editor, and then set matching parameters in the 3D Target Object Recognition tool for point cloud model matching.

  1. Create target object model and configure the pick point.

    • Create a point cloud model. Click the Open target object editor button to open the editor, and generate a point cloud model based on the image acquired by the camera.

    • Adjust relevant parameters used to distinguish the orientations of target objects.

      Please skip this step if there is no need to distinguish the orientations of the target objects.

      Avoid false matches. The shape of the target object is similar to a ring, which makes false matches likely. To accurately recognize the orientation of the target object, adjust the parameters in the Avoid false matches parameter group. Since the target object is symmetrical, select the Configure symmetry manually option, turn on the Around Z-axis toggle switch, and set Order of symmetry and Angle range to 30 and ±180°, respectively (see the note after these steps).

      Set the weight template. Setting a weight template is recommended because the orientations of the target objects are otherwise difficult to recognize. In the upper-right corner of the target object editor, under the Point cloud display parameter group, select the Show surface point cloud only option, and then click the Edit model button under Set weight template to set the protrusions on the ring as the weight template, shown in red in the figure below:

      [Figure: set weight template]

    • Configure the object center point and pick point. Typically, the geometric center of the target object is configured as the object center point, and then an appropriate pick point is configured according to the gripper used on site.

  2. Set parameters related to object recognition.

    • To apply the configured target object symmetry, please enable the Advanced mode switch on the right side of Recognize target object.

    • Matching mode: Disable Auto-set matching mode and set both Coarse matching mode and Fine matching mode to Surface matching.

    • Avoid false matches: Set Adjust Poses to Filter out unlikely poses.

    • Output - Max outputs: Set this parameter to the number of target objects when the tray is fully loaded. In this solution, Max outputs is set to 6.

After setting the above parameters, click the Run Step button to view the matching result.
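
The symmetry settings in step 1 above fix the rotational step at which matching candidates about the Z-axis are treated as equivalent (a derivation from the stated parameters, for illustration):

    \Delta\theta = \frac{360^\circ}{30} = 12^\circ

That is, with an order of symmetry of 30, two matches whose orientations differ by a multiple of 12° about the Z-axis are considered the same.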

Configure Step Ports

After target object recognition, Step ports should be configured to provide vision results and point clouds to subsequent Steps for path planning and collision detection.

Since the pick points need to be processed in the subsequent Steps, select Port(s) related to pick point under Select port. Then, select the Original point cloud acquired by camera option, and click the Save button. New output ports will be added to the 3D Target Object Recognition Step, as shown below.

[Figure: vision general settings effect]

Adjust Poses

After obtaining the target object poses, you need to use the Adjust Poses V2 Step to adjust the poses. Click the Config wizard button in the Step Parameters panel of the Adjust Poses V2 Step to open the pose adjustment tool for pose adjustment configuration. The overall configuration process is shown in the figure below.

[Figure: vision adjust poses overall]
  1. Transform poses.

    To output the target object poses in the robot reference frame, please select the checkbox before Transform pose to robot reference frame to transform the poses from the camera reference frame to the robot reference frame.

  2. Adjust pose orientations.

    Set Orientation to Auto alignment and Application scenario to Align Z-axes (Machine tending) to ensure that the robot picks in a specified direction, thereby avoiding collisions.

  3. Sort poses.

    Set Sorting type to Sort by Z shape on plane. Since the incoming objects arrive in a fixed direction, set the Reference pose parameter by dragging the pose manipulator, making the X-axis of the reference pose parallel to the tray. Set Row direction to Positive X-axis of reference pose and Column direction to Positive Y-axis of reference pose to ensure the optimal picking sequence.

  4. Filter by angle.

    To reduce the time required for subsequent path planning, target objects that cannot be easily picked need to be filtered out based on the angle between the Z-axis of the pose and the reference direction. In this tutorial, set Max angle difference to 20° (see the sketch after this list).

  5. General settings.

    Set Number of new ports to 1; a new input port and a new output port will be added to the Step. Connect the input port to the Pick Point Info output port of the 3D Target Object Recognition Step, and connect the output port to the Path Planning Step.
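
The filter in step 4 can be written out explicitly (notation ours, for illustration): a pose is kept only if the angle between its Z-axis and the reference direction satisfies

    \theta = \arccos(\hat{z}_{pose} \cdot \hat{d}_{ref}) \le 20^\circ

where \hat{z}_{pose} is the unit Z-axis of the pose and \hat{d}_{ref} is the unit reference direction.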

Path Planning

Once the target object recognition is complete, you can use the Path Planning Step in Mech-Vision to plan a path and then write a robot program for picking the target objects.

Click the Path Planning Step, and then click the Config wizard button to open the Path Planning Tool window.

The process of path planning configuration is shown in the figure below.

[Figure: viz overall]

Configure Scene Objects

Scene objects are introduced to make the scene in the software closer to the real scenario, which facilitates the robot path planning. For detailed instructions, please refer to Configure Scene Objects.

To ensure effective picking, scene objects should be configured to accurately represent the real operating environment. The scene objects in this solution are configured as shown below.

[Figure: viz scene objects configuration effect]

Configure Robot Tool

The end tool should be configured so that its model can be displayed in the 3D simulation area and used for collision detection. For detailed instructions, please refer to Configure Tool.

  • To save time when creating a collision model for the end tool, it’s not always necessary for the convex hulls you create to replicate every detail of the original model. You can omit certain details based on the specific requirements of the model.

  • For parts that make direct contact with the target object during picking, it is important to faithfully reproduce their shapes to guarantee the accuracy of collision detection. For mechanical structures that are farther away from the pick point (target object), the design can be simplified by using cuboid convex hulls instead of complex structural designs to improve efficiency. The tool used in this solution is shown below:

    [Figure: viz end tool configuration effect]

Adjust the Workflow

After configuring the scene objects and tools, you can adjust the workflow in the path planning tool in the Path Planning Step according to the actual requirements. The workflow for picking target objects is shown in the figure below.

[Figure: viz adjust workflow overall]

In the workflow, the waypoints of the two Steps Above-Bin Fixed Waypoint 1 and Above-Bin Fixed Waypoint 2 are determined in the robot program by jogging the robot and will not be sent to external devices, while the other three move-type Steps will send their waypoints. In total, three waypoints will be sent.
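
These three waypoints map one-to-one to the MM_Get_Jps calls in the example robot program later in this section (the trailing comments are added here for orientation):

    !save waypoints of the planned path to local variables one by one
    MM_Get_Jps 1,jps{1},label{1},toolid{1};    !approach waypoint of picking
    MM_Get_Jps 2,jps{2},label{2},toolid{2};    !picking waypoint
    MM_Get_Jps 3,jps{3},label{3},toolid{3};    !departure waypoint of picking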

Simulate and Test

Click the Simulate button on the toolbar to test whether the vision system is set up successfully by simulating the project in the path planning tool.

Place the target objects neatly on the tray, and then click the Simulate button on the toolbar of the path planning tool to simulate the picking process. After each successful pick, rearrange the target objects and simulate again, for a total of 10 simulation tests. If all 10 simulations result in successful picks, the vision system is set up successfully.

If an exception occurs during simulation, refer to the Solution Deployment FAQs to resolve the problem.

Output the Vision Result

This Step sends the vision result of the current project to the communication component for subsequent picking.

Robot Picking and Placing

Write a Robot Program

If the simulation result meets expectations, you can write a pick-and-place program for the ABB robot.

The example program MM_S3_Vis_Path for the ABB robot largely satisfies the requirements of this typical case, and you can modify it as needed. For a detailed explanation of the MM_S3_Vis_Path program, please refer to the Example Program Explanation.

Modification Instructions

Based on the example program, please modify the program files by following these steps:

  1. Specify the IP address and port number of the IPC. Change the IP address and port number in the MM_Init_Socket command to the actual values used by the vision system.

    Before modification:
      MM_Init_Socket "127.0.0.1",50000,300;
    After modification (example):
      MM_Init_Socket "192.168.1.5",50000,400;
  2. Set the DO signal used for picking, i.e., to close the gripper and grasp the target object. Note that the DO command should be set according to the actual DO port number used on site.

    Before modification:
      !add object grasping logic here, such as "setdo DO_1, 1;"
      Stop;
    After modification (example):
      !add object grasping logic here, such as "setdo DO_2, 0;"
      setdo DO_2, 0;
  3. Set the DO signal used for placing. Note that the DO command should be set according to the actual DO port number used on site.

    Before modification:
      !add object releasing logic here, such as "setdo DO_1, 0;"
      Stop;
    After modification (example):
      !add object releasing logic here, such as "setdo DO_2, 1;"
      setdo DO_2, 1;

Reference: Modified Example Program

MODULE MM_S3_Vis_Path
!----------------------------------------------------------
! FUNCTION: trigger Mech-Vision project and get planned path
! Mech-Mind, 2023-12-25
!----------------------------------------------------------
!define local num variables
LOCAL VAR num pose_num:=0;
LOCAL VAR num status:=0;
LOCAL VAR num toolid{5}:=[0,0,0,0,0];
LOCAL VAR num vis_pose_num:=0;
LOCAL VAR num count:=0;
LOCAL VAR num label{5}:=[0,0,0,0,0];
!define local joint&pose variables
LOCAL CONST jointtarget home:=[[0,0,0,0,90,0],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]];
LOCAL CONST jointtarget snap_jps:=[[0,0,0,0,90,0],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]];
LOCAL PERS robtarget camera_capture:=[[302.00,0.00,558.00],[0,0,-1,0],[0,0,0,0],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]];
LOCAL PERS robtarget drop_waypoint:=[[302.00,0.00,558.00],[0,0,-1,0],[0,0,0,0],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]];
LOCAL PERS robtarget drop:=[[302.00,0.00,558.00],[0,0,-1,0],[0,0,0,0],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]];
LOCAL PERS jointtarget jps{5}:=
[
    [[-9.7932,85.483,6.0459,-20.5518,-3.0126,-169.245],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]],
    [[-9.653,95.4782,-4.3661,-23.6568,-2.6275,-165.996],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]],
    [[-9.653,95.4782,-4.3661,-23.6568,-2.6275,-165.996],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]],
    [[-9.653,95.4782,-4.3661,-23.6568,-2.6275,-165.996],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]],
    [[-9.7932,85.483,6.0459,-20.5518,-3.0126,-169.245],[9E+9,9E+9,9E+9,9E+9,9E+9,9E+9]]
];
!define local tooldata variables
LOCAL PERS tooldata gripper1:=[TRUE,[[0,0,0],[1,0,0,0]],[0.001,[0,0,0.001],[1,0,0,0],0,0,0]];

PROC Sample_3()
    !set the acceleration parameters
    AccSet 50, 50;
    !set the velocity parameters
    VelSet 50, 1000;
    !move to robot home position
    MoveAbsJ home\NoEOffs,v3000,fine,gripper1;
    !initialize communication parameters (initialization is required only once)
    MM_Init_Socket "192.168.1.5",50000,400;
    !move to image-capturing position
    MoveL camera_capture,v1000,fine,gripper1;
    !open socket connection
    MM_Open_Socket;
    !trigger NO.1 Mech-Vision project
    MM_Start_Vis 1,0,2,snap_jps;
    !get planned path from NO.1 Mech-Vision project; 2nd argument (1) means getting pose in JPs
    MM_Get_VisPath 1,1,pose_num,vis_pose_num,status;
    !check whether the planned path was obtained from Mech-Vision successfully
    IF status<>1103 THEN
        !add error handling logic here according to different error codes
        !e.g.: status=1003 means no point cloud in ROI
        !e.g.: status=1002 means no vision results
        Stop;
    ENDIF
    !close socket connection
    MM_Close_Socket;
    !save waypoints of the planned path to local variables one by one
    MM_Get_Jps 1,jps{1},label{1},toolid{1};
    MM_Get_Jps 2,jps{2},label{2},toolid{2};
    MM_Get_Jps 3,jps{3},label{3},toolid{3};
    !follow the planned path to pick
    !move to approach waypoint of picking
    MoveAbsJ jps{1},v1000,fine,gripper1;
    !move to picking waypoint
    MoveAbsJ jps{2},v1000,fine,gripper1;
    !add object grasping logic here, such as "setdo DO_2, 0;"
    setdo DO_2, 0;
    !move to departure waypoint of picking
    MoveAbsJ jps{3},v1000,fine,gripper1;
    !move to intermediate waypoint of placing
    MoveJ drop_waypoint,v1000,z50,gripper1;
    !move to approach waypoint of placing
    MoveL RelTool(drop,0,0,-100),v1000,fine,gripper1;
    !move to placing waypoint
    MoveL drop,v300,fine,gripper1;
    !add object releasing logic here, such as "setdo DO_2, 1;"
    setdo DO_2, 1;
    !move to departure waypoint of placing
    MoveL RelTool(drop,0,0,-100),v1000,fine,gripper1;
    !move back to robot home position
    MoveAbsJ home\NoEOffs,v3000,fine,gripper1;
ENDPROC
ENDMODULE

Picking Test

To ensure stable production in the actual scenario, the modified example program should be run to perform the picking test with the robot. For detailed instructions, please refer to Test Standard Interface Communication.

Before performing the picking test, please teach the following waypoints.

  • Home position (home): The taught initial position. The initial position should be away from the objects to be picked and surrounding devices, and should not block the camera's field of view.

  • Pose input to the Mech-Vision project (snap_jps): User-defined joint positions.

  • Image-capturing position (camera_capture): The taught image-capturing position, i.e., the robot position at which the camera captures images. At this position, the robot arm should not block the camera's FOV.

  • Intermediate waypoint (drop_waypoint): An intermediate waypoint added to ensure smooth robot motion and avoid unnecessary collisions.

  • Placing waypoint (drop): The position for placing the target object.

  • Tool data (gripper1): The tool used by the robot when it moves.

After teaching, place the target objects as illustrated in the table below, and use the robot to perform the picking test at a low speed. In this solution, if the target objects are neatly arranged and there are no abnormalities concerning the incoming objects, you can directly perform picking tests in the real scenario.

Picking Test in Real Scenario

Object placement status: Neatly arranged

Illustration: [Figure: picking test 1]

In the above testing scenario, if the robot successfully picks the target objects, the vision system is successfully deployed.
