UR CB-Series (Polyscope 3.9 or Above)

You are currently viewing the documentation for version 1.7.4. To access documentation for other versions, click the "Switch Version" button located in the upper-right corner of the page.

■ To use the latest version, visit the Mech-Mind Download Center to download it.

■ If you're unsure about the version of the product you are using, please contact Mech-Mind Technical Support for assistance.

Plugin Installation and Setup

This section introduces the installation and setup of Mech-Mind 3D Vision Interface (URCap plugin) for UR CB-series.

Prerequisites

Verify that the versions of the Mech-Mind software and Polyscope meet the minimum requirements.

To view the version of Polyscope, press About on the Polyscope home screen of the UR teach pendant.

setup robot cb
about cb

Install the URCap plugin

To install the URCap plugin, follow these steps:

  1. Find the URCap plugin file (a .urcap file; the file name contains the plugin version, e.g., 1.4.6) in the Mech-Center\Robot_Interface\Robot_Plugin\UR_URCAP directory of the Mech-Mind Software Suite installation path, and copy it to a USB drive.

  2. Insert the USB drive into the UR teach pendant.

  3. On the Polyscope home screen, press Setup Robot.

    setup robot cb
  4. In the Setup Robot window, select URCaps on the left panel.

    urcaps button cb
  5. In the URCaps window, press + to navigate to the USB drive to locate the URCap plugin (.urcap file).

    urcaps screen cb
  6. In the Select URCap to install window, select the URCap plugin and press Open. The URCap plugin will be automatically installed.

    install urcaps cb
  7. Press Restart for the change to take effect.

    restart urcaps cb

The URCap plugin is now installed on the UR teach pendant.

After installing the URCap plugin, you also need to set the robot’s IP address (select Setup Robot > Network). Note that the robot’s IP address and the IPC’s IP address must be on the same subnet.
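
Because the robot and the IPC must share a subnet, a quick way to sanity-check two addresses is a few lines of Python (a sketch; the example hosts and the /24 netmask are assumptions — substitute the mask actually configured on your network):

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, netmask: str = "255.255.255.0") -> bool:
    """Return True if both hosts fall within the same subnet.

    The /24 netmask is an assumption; use the mask configured
    on the robot and the IPC.
    """
    network = ipaddress.ip_network(f"{ip_a}/{netmask}", strict=False)
    return ipaddress.ip_address(ip_b) in network

# same_subnet("192.168.1.5", "192.168.1.200")  -> True  (same /24)
# same_subnet("192.168.1.5", "192.168.2.200")  -> False (different /24)
```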

Use Mech-Mind 3D Vision Interface

Before use, make sure that your Mech-Vision and Mech-Viz (if used) projects are ready to run, and the Mech-Mind IPC is connected to the robot’s network.

To use Mech-Mind 3D Vision Interface, you need to complete the following setup.

  1. Click Robot and Interface Configuration on the toolbar of Mech-Vision.

  2. Select Listed robot from the Select robot drop-down menu, and then click Select robot model. Select the robot model that you use, and then click Next.

  3. Select Standard Interface for Interface Type, TCP Server and ASCII for Protocol, and 50000 (Cannot be changed) for Port Number. Then click Apply.

    configure communication 1
  4. Make sure the Interface Service is started: on the Mech-Vision toolbar, the Interface Service switch on the far right should be toggled on (shown in blue).

    configure communication 2

After the TCP server interface has been started in Mech-Vision, you can connect to it on the UR teach pendant.

  1. On the Polyscope home screen, press Program Robot.

  2. Select the Installation tab, and press Network Settings on the left panel. The Network Settings window of the URCap plugin is displayed.

    network settings cb
  3. Set MechMind IPC IP Address and Port No. to the IP address and port number of the Mech-Mind IPC respectively. The port number must be 50000, matching the port set in Mech-Vision. Then press Apply.

  4. Press Connection Test.

    • When the connection to the TCP server interface is established successfully, the return status should look like this:

      connection pass
    • When the connection to the TCP server interface fails to be established, the return status should be like this:

      connection failed

    The connection test only verifies connectivity: once the connection is established, it is disconnected automatically. You will therefore see client online and offline logs on the Console tab of the Mech-Vision Log panel.
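
The pendant’s Connection Test can be approximated from any machine on the robot’s network with a short socket probe — connect, then close immediately, which also produces the online/offline log pair mentioned above. This is a sketch; the IPC address shown is an assumption:

```python
import socket

def check_interface(host: str, port: int = 50000, timeout: float = 2.0) -> bool:
    """Open a TCP connection to the Mech-Vision interface and close it
    immediately, mirroring what the pendant's Connection Test does."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True  # connected; closing triggers the "client offline" log
    except OSError:
        return False

# Hypothetical IPC address -- replace with your own:
# check_interface("192.168.1.100")  # True if the Interface Service is running
```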

Hand-Eye Calibration Using the Plugin

This section introduces the process of calibrating the camera extrinsic parameters using the URCap plugin.

Prerequisites

Before proceeding, please make sure that:

  • You have successfully connected to the TCP server interface with the URCap plugin.

  • You are familiar with the contents in Hand-Eye Calibration Guide of Mech-Vision.

Create a Calibration Program

  1. On the UR teach pendant, select Program Robot > Program, and press Empty Program.

  2. Select Structure > URCaps, and press Mech-Mind Calibrate under the URCaps tab. An example program node Calibrate is automatically created under the Robot Program program tree on the left panel.

    add calibrate node cb

    The created example program node is just a template. You need to further configure the calibration program and teach the calibration start point.

Teach the Calibration Start Point

  1. Select the Calibrate node in the program tree, press the Command tab on the right panel, and set the Receive Point Type from Mech-Vision parameter to “Joint Angle” or “Flange Pose” according to the actual needs.

    set calibrate node cb
  2. Press Next on the bottom bar to proceed to the MOVEJ node, set the motion type to “MoveJ”, “MoveL” or “MoveP”, and Set TCP to Use Tool Flange on the right Move panel to ensure that the waypoint will be recorded as flange pose.

    set movej cb
  3. Manually control the robot to move to the start point for the calibration.

  4. Go back to the UR teach pendant, select the start_pose node on the left panel, and press Set Waypoint on the right Waypoint panel. You will be switched to the Move tab.

    teach startpoint cb
  5. On the Move tab, confirm that the robot’s current flange pose is proper and press OK.

    confirm waypoint cb

Run the Calibration Program

  1. Select the Robot Program program tree on the left panel, and clear the Program Loops Forever checkbox on the right Program panel to ensure that the program is run just once.

    run calibrate cb
  2. On the bottom bar, lower the robot speed to a suitable value, such as 10%, for safety.

  3. Press the Run button on the bottom bar to run the program.

If the calibration program runs successfully, the message “Entering the calibration process, please start the calibration in Mech-Vision” will be displayed in the Console tab of Mech-Vision Log panel.

Complete Calibration in Mech-Vision

  1. In Mech-Vision, click Camera Calibration (Standard) on the toolbar, or select Camera > Camera Calibration > Standard from the menu bar.

  2. Follow the instructions based on different camera mounting methods to complete the configuration.

To save the calibration program for future use, select File > Save As.

save program cb

After you complete the hand-eye calibration, you can create pick and place programs to instruct UR robots to execute vision-guided pick and place tasks.

Create Pick and Place Programs

The URCap plugin provides an example Pick and Place program node for you to create pick and place programs with minimal programming efforts.

This Pick and Place program node provides two options:

  • Pick and Place with Mech-Vision: It suits scenarios where only a Mech-Vision project is used and the robot does not need Mech-Viz to plan path.

  • Pick and Place with Mech-Viz: It suits scenarios where a Mech-Viz project is used together with a Mech-Vision project to provide the collision-free motion path for the robot.

The plugin provides a program template for each option to facilitate the programming.

The following examples assume that the actually used gripper and its TCP have been set for the robot properly.

Create a Pick and Place with Mech-Vision Program

To create a Pick and Place with Mech-Vision program, follow these steps:

  1. Enable the Pick and Place with Mech-Vision option.

    1. On the UR teach pendant, select Program Robot > Program > Empty Program.

    2. Select Structure > URCaps, and press the Mech-Mind Pick and Place button under the URCaps tab to add a Pick and Place program node to the Robot Program program tree on the left panel.

      add pick place node cb
    3. Select the Command tab, and press the With Mech-Vision button on the right panel.

      select pick place option cb
    4. When you see that a program template is automatically created under the Pick and Place node in the program tree, press Next on the bottom bar.

      display vision option cb
  2. At the Mech-Mind Connect node, verify on the right Mech-Mind Connect panel that Host IP is set to the IP address of the Mech-Mind IPC.

    verify host ip vision cb
  3. Set the image-capturing pose.

    1. Manually move the robot to the proper location where Mech-Vision triggers to capture an image.

      • For Eye-In-Hand scenario, the robot should be placed above the workpiece.

      • For Eye-To-Hand scenario, the robot should not block the camera view.

    2. Go back to the teach pendant, press Next on the bottom bar to proceed to the MOVEJ node, set the motion type to “MoveJ”, “MoveL” or “MoveP”, and Set TCP to Use active TCP on the right Move panel, and press Next.

      set movej capture cb
    3. Press Set Waypoint on the right Waypoint panel. You will be switched to the Move tab.

      set waypoint capture cb
    4. On the Move tab, confirm that the robot’s current TCP is proper and press OK.

      confirm waypoint cb
    5. Once the image capturing pose is set, press Next.

  4. Trigger the Mech-Vision project to run.

    1. Set the parameters Send Current Robot Position to Mech-Vision, Mech-Vision Project ID, and Request Vision Point Number on the right Trigger Mech-Vision panel.

      trigger vision cb
      Parameter Description

      Mech-Vision Project ID

      The index of the Mech-Vision project to trigger. You can find the project ID in the Project List panel of Mech-Vision.

      Request Vision Point Number

      The number of vision points that you expect Mech-Vision to output.

      • If it is set to 0, all detected vision points, up to a maximum of 20, will be output.

      • If it is set to a number from 1 to 20, Mech-Vision will try to return exactly that number of vision points when more are detected than you requested.

    2. (Optional) Press Set Vision Recipe ID. A Set Recipe ID node is added under the Trigger Mech-Vision node in the program tree.

    3. Select the Set Recipe ID node in the program tree, set Mech-Vision Recipe ID on the right Set Recipe ID panel, and press Next.

      set recipe id
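
The Request Vision Point Number rule above can be summarized in a few lines (a sketch of the rule, not Mech-Vision’s actual code; `expected_count` is a hypothetical helper name):

```python
def expected_count(requested: int, detected: int, cap: int = 20) -> int:
    """Number of vision points output, per the rule above:
    0 means 'all detected, capped at 20'; 1-20 means 'at most that many'."""
    if requested == 0:
        return min(detected, cap)
    return min(requested, detected)

# expected_count(0, 35)  -> 20 (all detected, capped at 20)
# expected_count(5, 3)   -> 3  (fewer detected than requested)
# expected_count(5, 12)  -> 5  (exactly the requested number)
```
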
  5. Set how to receive Mech-Vision result.

    Select the Receive Mech-Vision Result node under the Pick and Place node in the program tree, set variable names for Pose, Label, Total Received, and Status Code, which are used to save the vision result, and then press Next.

    receive vision result cb
    Parameter Description

    Pose

    The poses of detected parts, in the XYZ format. The robot can move directly to these points with the active TCP. By default, the received poses are saved in the array variable “pose[]”, starting at array index 1.

    Label

    The object labels of detected parts. Each label is an integer. By default, the labels are saved in the array variable “label[]”, starting at array index 1. Labels and poses are one-to-one paired.

    Total Received

    The total number of received vision points.

    Status code

    The returned status code. See Status Codes and Troubleshooting for reference. 11xx codes indicate normal status, and 10xx codes indicate errors. By default, the status code is saved in the variable “status_code”.

    Starting at ID

    The start index of the array variables for poses and labels. By default, the index starts at 1.
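
The 1-based, paired pose[]/label[] layout described above can be modeled in a few lines (an illustrative sketch; the URCap manages these variables on the pendant itself, and `store_vision_result` is a hypothetical name):

```python
def store_vision_result(records, starting_at=1):
    """Store (pose, label) pairs into 1-based maps, mirroring the paired
    pose[]/label[] array variables the Receive Mech-Vision Result node fills."""
    pose, label = {}, {}
    for i, (p, lab) in enumerate(records, start=starting_at):
        pose[i] = p
        label[i] = lab
    return pose, label, len(records)  # poses, labels, total received

# pose, label, total = store_vision_result([((0.1, 0.2, 0.3, 0, 0, 0), 1)])
# pose[1] is the first vision point, paired with label[1]
```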

  6. Set the pick task.

    A pick task consists of three motions: Pick_above, which makes a linear approach to the pick point; Pick, which picks the object; and Pick_depart, which makes a linear departure after picking.

    1. Set the parameters Dist and Coordinates for the Pick_above and Pick_depart respectively on the right Mech-Vision Pick panel, and then press Next.

      set distance coordinates cb
    2. Keep the default settings for the MoveJ node on the right Move panel, and then press Next.

    3. Keep the default settings for the pick_above node on the right Waypoint panel, and then press Next.

    4. Keep the default settings for the MoveL node on the right Move panel, and then press Next.

    5. Keep the default settings for the pick node on the right Waypoint panel, and then press Next.

    6. Add gripper control logic for picking after the pick node according to your actual conditions.

    7. Keep the default settings for the MoveJ node on the right Move panel, and then press Next.

    8. Keep the default settings for the pick_depart node on the right Waypoint panel, and then press Next.
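
The Dist and Coordinates parameters offset the pick pose to produce the pick_above and pick_depart waypoints. A minimal sketch of that offset (the axis choice, pose layout, and units are assumptions; the URCap computes this on the robot):

```python
def offset_pose(pick_pose, dist, axis="z"):
    """Offset a pose (x, y, z, rx, ry, rz) by 'dist' along one axis to get
    a pick_above or pick_depart waypoint; orientation is unchanged."""
    x, y, z, rx, ry, rz = pick_pose
    step = {"x": (dist, 0, 0), "y": (0, dist, 0), "z": (0, 0, dist)}[axis]
    return (x + step[0], y + step[1], z + step[2], rx, ry, rz)

# offset_pose((100, 200, 300, 0, 0, 0), 50)  -> (100, 200, 350, 0, 0, 0)
```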

  7. Set the place task.

    1. Press Next to proceed to the next MoveJ node.

    2. Keep the default settings for the MoveJ node on the right Move panel, and then press Next.

    3. Manually move the robot to the proper location to place the picked object.

    4. Go back to the teach pendant, and press Set Waypoint on the right Waypoint panel. You will be switched to the Move tab.

      set waypoint place cb
    5. On the Move tab, confirm that the robot’s current flange pose is proper and press OK.

      confirm waypoint cb
    6. Once the place pose is set, press Next.

    7. Add gripper control logic for placing after the place node in the program tree according to your actual conditions.

A simple pick and place program collaborating with a Mech-Vision project is now complete. You can run it by pressing the Run button on the bottom bar.

Create a Pick and Place with Mech-Viz Program

To create a Pick and Place with Mech-Viz program, follow these steps:

  1. Enable the Pick and Place with Mech-Viz option.

    1. On the UR teach pendant, select Program Robot > Program > Empty Program.

    2. Select Structure > URCaps, and press Mech-Mind Pick and Place under the URCaps tab. An example program node Pick and Place is automatically created under the Robot Program program tree on the left panel.

      add pick place node cb
    3. Select the Command tab, and press the With Mech-Viz button on the right panel.

      select pick place option cb
    4. When you see that a program template is automatically created under the Pick and Place node in the program tree, press Next on the bottom bar.

      display viz option cb
  2. At the Mech-Mind Connect node, verify on the right Mech-Mind Connect panel that Host IP is set to the IP address of the Mech-Mind IPC.

    verify host ip viz cb
  3. Set the image capturing pose in reference to Step 3 in Pick and Place with Mech-Vision.

    This waypoint is where the Mech-Viz project is triggered.

  4. Trigger the Mech-Viz project to run.

    1. Set the Send Current Robot Position to Mech-Viz parameter to “JPs&TCP” or “Dedicated Point” on the right Trigger Mech-Viz panel.

      trigger viz cb
      • If you want Mech-Viz to plan the path before the robot arrives at the start position, set the Send Current Robot Position to Mech-Viz parameter to “Dedicated Point” and specify a dedicated joint position.

      • If you use the branch task in the Mech-Viz project and want the robot to select the branch out port, press Set Branch Out Port, and proceed to Step b to set the branch out port.

      • If you use a move-class task that has the index parameter, press Set Index Value, and proceed to Step c to set the index value.

    2. (Optional) Select the Set Branch Value node in the program tree, set Branch Task ID and Branch Out Port on the right Set Branch Value panel, and press Next.

      set branch value cb
    3. (Optional) Select the Set Index Value node in the program tree, set Index Task ID and Index Value on the right Set Index Value panel, and press Next.

      set index value cb
  5. Set how to receive Mech-Viz result.

    Select the Receive Mech-Viz Result node in the program tree, set variable names for Pose, Label, Speed (%), Total Received, Status Code, and Vision Point Index, which are used to save the planned path, and then press Next.

    receive viz result cb
    Parameter Description

    Pose

    The planned motion path, in the XYZ format. The robot can move directly to these points with the active TCP. By default, the received poses are saved in the array variable “pose[]”, starting at array index 1.

    Label

    The object labels of detected parts. Each label is an integer; for non-vision points, the label is 0. By default, the labels are saved in the array variable “label[]”, starting at array index 1. Labels and poses are one-to-one paired.

    Speed

    The motion speed, as a percentage, at which the robot moves to the point.

    Total Received

    The total number of received points.

    Status code

    The returned status code. See Status Codes and Troubleshooting for reference. 21xx codes indicate normal status, and 20xx codes indicate errors. By default, the status code is saved in the variable “status_code”.

    Vision Point Index

    The index of the Vision Move waypoint among the received points. For example, if Mech-Viz sends 3 points in the order relative_move_1, visual_move_1, and relative_move_2, the vision point index is 2. By default, the vision point index is saved in the variable “vision_point”.

    Starting at ID

    The start index of the array variables for poses and labels. By default, the index starts at 1.
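
Given the convention above that non-vision points carry label 0, the vision point index can be recovered from the labels alone (a sketch; the URCap reports this value directly in “vision_point”, and `find_vision_point` is a hypothetical name):

```python
def find_vision_point(labels, starting_at=1):
    """Return the 1-based index of the first Vision Move point (label != 0),
    or 0 if the planned path contains no vision point."""
    for i, lab in enumerate(labels, start=starting_at):
        if lab != 0:
            return i
    return 0

# relative_move_1, visual_move_1, relative_move_2 -> labels like [0, 1, 0]
# find_vision_point([0, 1, 0])  -> 2, matching the example above
```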

  6. Configure the motion loop, which drives the robot to follow the path planned by Mech-Viz: approach the object, pick it, and depart from the pick point (placing the object is not included). For how to set the MoveL and MoveJ nodes, refer to Step 6 in Pick and Place with Mech-Vision.

    • In actual applications, the motion loop may contain several pick_above MoveJ nodes, a pick MoveL node, and several pick_depart MoveJ nodes.

    • If you change the default variable names of poses, labels, etc. in the node of Receive Mech-Viz Result, you need to change the variables used in this step.
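
The motion loop described in this step amounts to iterating the received 1-based arrays and moving to each point at its planned speed. A sketch under those assumptions (the `move` callback stands in for the MoveJ/MoveL nodes; all names here are illustrative):

```python
def follow_planned_path(pose, speed, total, move, starting_at=1):
    """Drive the robot through every planned point, each at its own speed,
    delegating the actual motion to a 'move' primitive."""
    for i in range(starting_at, starting_at + total):
        move(pose[i], speed[i])

# Collecting the calls instead of moving a real robot:
# visited = []
# follow_planned_path({1: "p1", 2: "p2"}, {1: 50, 2: 100}, 2,
#                     lambda p, s: visited.append((p, s)))
# visited == [("p1", 50), ("p2", 100)]
```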

  7. Set the place task in reference to Step 7 in Pick and Place with Mech-Vision.

A simple pick and place program collaborating with a Mech-Viz project is now complete. You can run it by pressing the Run button on the bottom bar.
