Jog the Robot to Set Pick Points


When you want to set pick points by jogging the robot, the overall configuration workflow is as follows.

overview
  1. Teach the pick point: Add the pick point to the target object by jogging the robot.

  2. Import project information: Select the project and the source of the point cloud; the point cloud model will then be generated automatically.

  3. Edit model: Edit the generated point cloud model, including the calibration of the object center point and configuration of the point cloud model, to ensure better performance of the 3D matching.

  4. Set pick point: Add pick points or the pick point array on the edited point cloud model.

  5. Collect data for drift correction: For solutions deployed with the “Auto-correct accuracy drift in vision system” feature, additional drift correction data must be collected to ensure pick points remain accurate and feasible after correction.

  6. Set collision model (optional): Generate the collision model for collision detection during path planning.

The following sections provide detailed instructions on the configuration.

Teach the Pick Point

  1. Place the target object within the camera’s field of view and ensure that the robot can pick the target object properly.

  2. Use the teach pendant to control the robot to accurately reach the expected pick point of the target object.

  3. Move the TCP of the robot as close as possible to the center of the point cloud model, thus reducing the picking error.

  4. Record and enter the robot flange pose and TCP (tool pose relative to the robot flange) at the pick point in the parameter configuration section on the right.

  5. Use the teach pendant to control the robot to move away from the pick point, ensuring that the position of the target object remains unchanged during the departure process.

  6. Collect the point cloud of the target object and create a point cloud model.
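The flange pose and TCP recorded in step 4 combine by ordinary pose composition: the taught pick point in the robot base frame is the flange pose composed with the tool pose relative to the flange. The following is a minimal pure-Python sketch of that math with made-up poses; it is illustrative only and not part of the software.

```python
import math

def rot_z(deg):
    """3x3 rotation matrix for a rotation about Z (degrees)."""
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def compose(pose_a, pose_b):
    """Compose two poses given as (translation, rotation matrix)."""
    t_a, r_a = pose_a
    t_b, r_b = pose_b
    t = [ta + tb for ta, tb in zip(t_a, mat_vec(r_a, t_b))]
    return (t, mat_mul(r_a, r_b))

# Hypothetical recorded values: flange pose in the robot base frame,
# and the TCP (tool pose relative to the flange).
flange_in_base = ([0.5, 0.1, 0.3], rot_z(90.0))
tcp_in_flange = ([0.0, 0.0, 0.12], rot_z(0.0))  # tool 120 mm along flange Z

# The taught pick point in the base frame is the composition of the two.
pick_in_base = compose(flange_in_base, tcp_in_flange)
print([round(v, 3) for v in pick_in_base[0]])  # → [0.5, 0.1, 0.42]
```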

Import Project Information

Configure Project Settings

To obtain information from the project, select the corresponding project, the Step port that outputs the surface/edge point cloud, the reference frame used to define the point cloud, and the camera. Then click Preview to view the point cloud model in the visualization area on the left.

When selecting the reference frame for defining the point cloud, ensure that it matches the reference frame of the scene point cloud used in the subsequent 3D matching Step.

Now the project information has been imported. Click Next to edit the generated point cloud model.

Edit Point Cloud Model

The generated point cloud model should be edited for better performance in the subsequent 3D matching.

Edit Point Cloud

If there are interference points around the point cloud model, you can remove the interference points by editing the point cloud. Refer to Edit Point Cloud for detailed instructions.

Calibrate Object Center Point

After an object center point is automatically calculated, you can calibrate it based on the actual target object in use. Select a calculation method under Calibrate center point by application, and click Start calculating to calibrate the object center point.

Method: Re-calculate by using original center point

  • Description: The default calculation method. The object center point is calculated according to the features of the target object and the original object center point.

  • Operation: Select Re-calculate by using original center point, and click the Start calculating button.

Method: Calibrate to center of symmetry

  • Description: The object center point is calculated according to the target object’s symmetry. After the symmetry of the current model is calculated, the object center point will be set to the center of symmetry. The calculation may deviate; if deviations occur, adjust the object center point manually.

  • Operation: Select Calibrate to center of symmetry, and click the Start calculating button.

  • Applicable target objects: Symmetrical target objects

    calibrate to center of symmetry example

Method: Calibrate to center of feature

  • Description: The object center point is calculated according to the selected Feature type and the set 3D ROI.

  • Operation:

    1. Select the Feature type according to the geometric features of the object; the tool will calculate the object center point according to that feature type.

    2. (Optional) Enable Use 3D ROI and select the geometric features on the target object with the 3D ROI.

    3. Click the Start calculating button.

  • Applicable target objects: Target objects with obvious geometric features

    calibrate to center of feature example

Configure Point Cloud Model

To better use the point cloud model in the subsequent 3D matching and enhance matching accuracy, the tool provides the following two options for configuring the point cloud model. You can enable the Configure point cloud model feature as needed.

Calculate Poses to Filter Matching Result

Once Calculate poses to filter matching result is enabled, more matching attempts will be made based on the settings to obtain matching results with higher confidence. However, more matching attempts will lead to longer processing time.

Two methods are available: Auto-calculate unlikely poses and Configure symmetry manually. In general, Auto-calculate unlikely poses is recommended. See the following for details.

Method: Auto-calculate unlikely poses

  • Description: Poses that may cause false matches will be calculated automatically. In subsequent matches, poses that successfully match these poses will be considered unqualified and filtered out.

  • Operation:

    1. Click Calculate unlikely poses to calculate potentially mismatched poses.

    2. (Optional) If you think a pose in the list will not cause a false match, click the × icon to remove it from the list.

Method: Configure symmetry manually

  • Description: For rotationally symmetric target objects, configuring the rotational symmetry of the point cloud model prevents the robot’s end tool from rotating unnecessarily while holding the target object. This increases the success rate of path planning and reduces the time required for path planning, allowing the robot to move more smoothly and swiftly.

  • Operation: Select the symmetry axis by referring to Rotational Symmetry of Target Objects, and then set the Order of symmetry and Angle range.
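The Order of symmetry and Angle range settings translate directly into the set of rotations under which the model looks identical. The following sketch enumerates those rotation angles; it is illustrative math only, with hypothetical values, and not the editor's internal logic.

```python
def symmetric_angles(order, angle_range_deg):
    """Rotation angles (degrees) about the symmetry axis that leave an
    object with N-fold rotational symmetry looking identical, limited
    to the configured ±angle range."""
    step = 360.0 / order  # smallest symmetry-preserving rotation
    return [k * step
            for k in range(-order, order + 1)
            if -angle_range_deg <= k * step <= angle_range_deg]

# A square tube has 4-fold symmetry about its long axis (90° step).
print(symmetric_angles(4, 180.0))  # → [-180.0, -90.0, 0.0, 90.0, 180.0]
# Narrowing the angle range keeps only rotations near the taught pose.
print(symmetric_angles(4, 90.0))   # → [-90.0, 0.0, 90.0]
```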

When this feature is enabled, you should configure the relevant parameters in the subsequent matching Steps to activate the feature. See the following for details.

  • If the “3D Matching” Step is used, go to Adjust or Filter Poses from Coarse Matching › Select Strategy, and select Filter potentially false poses in the drop-down menu. This parameter will appear when the parameter tuning level is set to Advanced or Expert.

  • If the “3D Target Object Recognition” Step is used, navigate to the “Target object selection and recognition” process, locate Adjust or filter poses from coarse matching › Select strategy, and select Filter potentially false poses. This parameter will appear when the Advanced mode switch is turned on.

Set Weight Template

During target object recognition, setting a weight template highlights key features of the target object, improving the accuracy of matching results. The weight template is typically used to distinguish target object orientation. The procedures to set a weight template are as follows.

A weight template can only be set when Point cloud display settings is set to Display surface point cloud only.

  1. Click Edit template.

  2. In the visualization area, press and hold the right mouse button to select part of the features on the target object. The selected part, i.e., the weight template, will be assigned more weight in the matching process.

    By holding Shift and the right mouse button together, you can set multiple weighted areas in a single point cloud model.

    set weight template
  3. Click Apply to complete setting the weight template.

For the configured weight template to take effect in the subsequent matching, go to the “Model Settings” parameter of the “3D Matching” Step, and select the model with a properly set weight template. Then, go to “Pose Filtering” and enable Consider Weight in Result Verification. This parameter will appear after the “Parameter Tuning Level” is set to Expert.

Now the editing of the point cloud model is completed. Click Next to collect data for drift correction or directly set the pick point.

Collect Data for Drift Correction

The pick points added by jogging the robot are accurate and reliable, and they are not affected even when accuracy drift occurs in the vision system. However, if the drift correction feature in the vision system is enabled, the correction applied to vision results will reduce the accuracy of these pick points.

If the auto-correction feature has already been deployed, you need to collect data for drift correction after editing the model. The calculated drift compensation will be used for reverse compensation to ensure that the pick points added by jogging the robot remain accurate and feasible.

  1. Auto-correct accuracy drift in EIH vision system or auto-correct accuracy drift in ETH vision system.

    • If more than one day has passed since the last drift correction, rerun the robot program to obtain new drift correction data.

    • If the last drift correction was performed within one day, you may skip the correction process and directly click the Start check button.

  2. Check the program running result.

    After running the robot program, click the Start check button to check the running result of the robot program.

If the check passes, it indicates that valid drift correction data has been generated after running the robot auto-correction program.

If the check fails, it indicates that the robot auto-correction program did not generate valid drift correction data. In this case, it is recommended to rerun the robot auto-correction program to capture the calibration sphere poses.
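Conceptually, the reverse compensation pre-applies the inverse of the measured drift to the taught pick point, so that the vision system's subsequent drift correction restores it to exactly the taught pose. The following is a deliberately simplified, translation-only sketch with made-up numbers; the real correction also involves rotation and is handled internally by the software.

```python
def apply_drift(point, drift):
    """Apply a pure-translation drift (dx, dy, dz) to a point."""
    return tuple(p + d for p, d in zip(point, drift))

def reverse_compensate(point, drift):
    """Pre-apply the inverse drift so that applying the drift
    correction afterwards lands back on the taught pick point."""
    return tuple(p - d for p, d in zip(point, drift))

taught = (0.50, 0.10, 0.42)    # taught pick point position (hypothetical)
drift = (0.002, -0.001, 0.0)   # measured drift compensation (hypothetical)

compensated = reverse_compensate(taught, drift)
restored = apply_drift(compensated, drift)
# The correction restores the taught position (up to rounding).
print(all(abs(r - t) < 1e-9 for r, t in zip(restored, taught)))  # → True
```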

After collecting the drift correction data, click Next to set the pick point.

Set Pick Point

Adjust Pick Point

By default, the pick point list displays the added pick points, defined in the reference frame with the object center point as the origin. Changing the object center point will affect the pick points. You can adjust the default pick points or add new pick points.

  • Adjust default pick points

    If the automatically generated pick point does not meet the application requirements, you can customize the values in “Pick point settings” or manually drag the pick point in the visualization area.

  • Add new pick points

    If the target object has multiple pick points, click the Add button to add new pick points.

    Taking square tubes as an example, the magnetic gripper can pick from the sides, ends, and edges. Therefore, you can add pick points at these positions.

    configure multi pick point 1

    After adding pick points, you can drag the pick points in the pick point list to adjust the priority. The points higher in the list will be considered first during actual picking.
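Because pick points are stored in the reference frame whose origin is the object center point, the pose the robot actually targets is the recognized object pose composed with the stored pick point. A minimal planar (yaw-only) sketch with hypothetical coordinates, for illustration only:

```python
import math

def compose_2d(pose_a, pose_b):
    """Compose two planar poses (x, y, yaw_deg): result = a ∘ b."""
    xa, ya, ta = pose_a
    xb, yb, tb = pose_b
    c, s = math.cos(math.radians(ta)), math.sin(math.radians(ta))
    return (xa + c * xb - s * yb, ya + s * xb + c * yb, ta + tb)

# Object pose from 3D matching (hypothetical), in the robot base frame.
object_in_base = (1.0, 0.5, 90.0)
# Pick point stored in the object-center frame: 0.2 m ahead of the center.
pick_in_object = (0.2, 0.0, 0.0)

x, y, yaw = compose_2d(object_in_base, pick_in_object)
print(round(x, 3), round(y, 3), yaw)  # → 1.0 0.7 90.0
```

This is also why changing the object center point shifts every pick point: all stored offsets are interpreted relative to it.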

Set Pick Point Array

When the target object is symmetrical, you can set the pick point array based on the object center point according to actual requirements. Setting the pick point array can prevent the robot’s end tool from unnecessary rotations during picking. This increases the success rate of path planning and reduces the time required for path planning, allowing the robot to move more smoothly and swiftly. The procedures for setting are as follows.

  1. Under “Pick point settings,” click Generate next to Pick point array.

  2. Refer to Rotational Symmetry of Target Objects to select the axis of symmetry, and then set the Order of symmetry and Angle range.

  3. (Optional) Make vision result contain pick point arrays.

    If disabled, Mech-Viz or the path planning tool will generate pick point arrays based on the settings in the target object editor and plan the path according to the pick points in the array. If enabled, Mech-Vision will output pick point arrays based on the settings in the target object editor, and Mech-Viz or the path planning tool will use the pick points in the array to plan the path.

    • If you want pick point arrays to be generated and output before path planning, enable the option.

    • If you want pick point arrays to be generated after path planning, disable the option.

    In real situations, decide whether to enable this option based on project requirements and system performance. For instance, in complex scenarios, it is generally recommended to enable this option to filter out invalid pick points before path planning and output optimized pick point arrays, improving the overall efficiency.

Taking a round tube as an example, the settings of the pick point array are as follows.

configure pick point array 1

In practice, pick points with a downward Z-axis are often invalid and will affect path planning. Therefore, you should narrow down the Angle range. It is generally recommended to keep the range within ±90°. For example, when configuring a pick point array for randomly placed round tubes, the angle range value is set to ±30° in the figure below.

configure pick point array 2
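Narrowing the Angle range amounts to discarding array rotations outside the configured band, which removes the downward-facing pick points before planning. A toy sketch of that filtering, assuming a hypothetical 15° array step around the tube's axis:

```python
def limit_angle_range(candidate_angles, angle_range_deg):
    """Keep only array rotation angles within ±angle_range_deg of the
    taught pick point; rotations beyond the band would tilt the tool's
    Z-axis too far (e.g. downward) and are discarded. Illustrative
    helper only — not the target object editor's API."""
    return [a for a in candidate_angles if abs(a) <= angle_range_deg]

# Candidate array angles every 15° around a round tube's axis.
candidates = list(range(-180, 181, 15))
print(limit_angle_range(candidates, 30))  # → [-30, -15, 0, 15, 30]
```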

Add Picking Configuration

Preview Picking Effect

If a tool has been configured in the path planning tool or Mech-Viz, you can enable it in the target object editor to preview the positional relationship between the pick point and the tool during actual picking. This helps determine whether the pick point settings are appropriate. The detailed instructions are as follows.

  1. Add an end tool.

    Add an end tool and set the TCP in the path planning tool.

  2. Preview and enable the tool.

    Once the end tool is added, the tool information will be automatically updated in the tool list within the target object editor. You can select a tool from the tool list based on your actual needs and preview the positional relationship between the pick point and the tool in the visualization area during actual picking (as shown in the figure below).

    If the tool is modified in the path planning tool, please save the changes in the path planning tool to update the tool list in the target object editor.
    configure picking example

Configure Translational and Rotational Relaxation for Tools

In practice, to ensure the tool can still pick the target object after translating or rotating along a certain axis of the pick point, you can configure the translational relaxation and rotational relaxation for the tool in the target object editor.

Taking the round tube as an example, the tool can be translated along the X-axis of the pick point while picking.

configure picking relaxation 1

The corresponding configuration is shown below.

configure picking relaxation 2
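A relaxation check reduces to verifying that a candidate tool pose stays within the configured translational and rotational bands around the pick point. A toy sketch with hypothetical limits (the actual parameter names and units are set in the target object editor):

```python
def within_relaxation(offset_m, rot_deg, trans_limit_m, rot_limit_deg):
    """True if a candidate tool pose stays within the configured
    translational relaxation (along one pick-point axis, meters) and
    rotational relaxation (degrees). Illustrative only."""
    return abs(offset_m) <= trans_limit_m and abs(rot_deg) <= rot_limit_deg

# Round-tube example: allow ±50 mm of slide along the pick point's
# X-axis and ±10° of rotation (hypothetical limits).
print(within_relaxation(0.03, 5.0, 0.05, 10.0))  # → True
print(within_relaxation(0.08, 5.0, 0.05, 10.0))  # → False
```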

Set the Pick Point Selection Strategy

Minimum tool rotation is used by default; you can select a different pick point selection strategy based on actual requirements.

  • Minimum tool rotation: When this strategy is selected, the pick point that results in the smallest rotation of the tool’s Z-axis during the entire pick-and-place process will be selected with the highest priority. This strategy prevents the tool from rotating unnecessarily after picking the target object and avoids dropping the picked target object.

  • Minimum difference between tool and vision pose: When this strategy is selected, the pick point with the smallest angle difference from the target object pose will be selected with the highest priority.

  • Minimum collision between tool and point cloud: When this strategy is selected, the pick point that causes the least collision between the tool and the target object point clouds will be selected with the highest priority.
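The "Minimum tool rotation" strategy can be pictured as choosing the candidate pick point whose tool Z-axis requires the smallest rotation from the tool's current orientation. A sketch with made-up candidate data, for intuition only:

```python
import math

def angle_between(v1, v2):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (n1 * n2)))))

def minimum_tool_rotation(candidates, current_z):
    """Select the candidate whose tool Z-axis is closest to the tool's
    current Z-axis, i.e. needing the smallest rotation. Candidates are
    (name, z_axis) pairs — hypothetical data, not the software's API."""
    return min(candidates, key=lambda c: angle_between(c[1], current_z))

candidates = [
    ("side pick", (0.0, 0.0, -1.0)),  # approach straight down
    ("end pick", (1.0, 0.0, 0.0)),    # horizontal approach
]
current_z = (0.0, 0.1, -1.0)          # tool currently pointing almost down
print(minimum_tool_rotation(candidates, current_z)[0])  # → side pick
```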

Click Save to save the configurations for the target object. To set the collision model, click Next.

Set Collision Model (Optional)

Select Collision Model Generating Mode

Set Collision Model

The collision model is a 3D virtual object used in collision detection for path planning. The tool automatically recommends the collision model generating mode based on the current configuration workflow. The recommended mode for this case is Use STL model to generate point cloud cube, which generates point cloud cubes from the selected STL model for collision detection. A collision model generated this way features high accuracy, but collision detection is slower.

  1. Select the STL model.

    Click Select STL model and then select the STL model used to generate the point cloud cube.

  2. Align models.

    Aligning the collision model with the point cloud model of the target object ensures effective collision detection. You can click Auto-align point cloud model and collision model or manually adjust the pose of the collision model to achieve the alignment with the point cloud model of the target object.

Configure Symmetry of Held Target Object

Rotational symmetry is the property of the target object that allows it to coincide with itself after rotating a certain angle around its axis of symmetry. When the “Waypoint type” is “Target object pose,” configuring the rotational symmetry can prevent the robot’s tool from unnecessary rotations while handling the target object. This increases the success rate of path planning and reduces the time required for path planning, allowing the robot to move more smoothly and swiftly.

Select the symmetry axis by referring to Rotational Symmetry of Target Objects, and then set the Order of symmetry and Angle range.

Now, the collision model settings are completed. Click Save to save the target object to Solution folder\resource\workobject_library. Then the target object can be used in subsequent 3D matching Steps.
