Getting Started Tutorial: Vision-Guided Robotic Picking of Metal Parts (Master-Control Communication)

You are currently viewing the documentation for version 1.8.2.

In this tutorial, you will learn how to deploy a simple 3D vision-guided robotic application that picks small metal parts, using the Master-Control communication mode.

Application Overview

  • Camera: Mech-Eye PRO M camera, mounted in the eye-to-hand (ETH) mode

  • Calibration board: For a working distance of 1000–1500 mm, the CGB-035 calibration board is recommended; for a working distance of 1500–2000 mm, the CGB-050 is recommended

  • Robot: ABB_IRB_1300_11_0_9

  • Workpiece: track links (made of metal)

    • For this application, you need to prepare a model file in CAD format for the workpiece, which is used to generate the point cloud matching model. You can download it by clicking here.

    • This application uses a real camera to capture images of the track links for recognition. If you want to use a virtual camera, please click here to download image data of the track links.

  • End tool: gripper

    For this application, you need to prepare a model file in OBJ format for the gripper, which is used for collision detection during path planning. You can download it by clicking here.

  • Scene object: scene object model

    This application requires a scene model file in STL format, which simulates the real scene and is used for collision detection during path planning. You can download it by clicking here.

  • IPC: Mech-Mind IPC STD

  • Used software: Mech-Vision 1.8.2, Mech-Viz 1.8.2, Mech-Eye Viewer 2.3.1

  • Communication mode: Master-Control communication

If you are using a different camera model, robot brand, or workpiece than in this example, refer to the reference information provided in the corresponding steps to make adjustments.

Deploy a Vision-Guided Robotic Application

The deployment of the vision-guided robotic application can be divided into six phases, as shown in the figure below:

[Figure: the six phases of deploying a vision-guided robotic application]

The following list describes the six phases of deploying a vision-guided robotic application.

1. Vision solution design: Select the hardware models according to the project requirements, and determine the camera mounting mode, vision processing mode, etc. (This tutorial provides a ready-made vision solution; you do not need to design one yourself.)

2. Vision system hardware setup: Install and connect the hardware of the Mech-Mind Vision System.

3. Robot communication setup: Load the robot master-control program and the configuration files to the robot system, and set up communication between the vision system and the robot, so that the Mech-Mind Vision System can obtain control over the robot.

4. Hand-eye calibration: Perform the automatic hand-eye calibration in the eye-to-hand setup to establish the transformation relationship between the camera reference frame and the robot reference frame.

5. Workpiece locating: Use the "General Workpiece Recognition" case project to calculate the workpiece poses and output the vision result.

6. Pick and place: Use Mech-Viz to create a workflow that guides the robot to repeatedly pick and place workpieces.
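To make the purpose of the hand-eye calibration phase concrete: in an eye-to-hand setup, calibration produces an extrinsic transform that maps points from the camera reference frame into the robot base reference frame, which is what lets a workpiece pose detected by the camera be used as a robot picking target. The sketch below illustrates this with a homogeneous 4×4 matrix; the matrix values are made-up placeholders for illustration, not output from any real calibration.

```python
import numpy as np

# Hypothetical eye-to-hand extrinsics (placeholder values, NOT real
# calibration output): T_base_cam maps points expressed in the camera
# frame into the robot base frame.
T_base_cam = np.array([
    [ 0.0, -1.0,  0.0, 0.50],
    [-1.0,  0.0,  0.0, 0.10],
    [ 0.0,  0.0, -1.0, 1.20],  # camera mounted above, looking down
    [ 0.0,  0.0,  0.0, 1.00],
])

def camera_to_base(p_cam):
    """Transform a 3D point from the camera frame to the robot base frame."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coords
    return (T_base_cam @ p)[:3]

# Position (in meters) of a detected part in the camera frame.
part_in_cam = [0.05, -0.02, 1.00]
print(camera_to_base(part_in_cam))  # → [0.52 0.05 0.2]
```

In the real deployment, Mech-Vision performs this calibration and applies the transform internally; the example only shows the geometric relationship that phase 4 establishes and that phase 5 relies on when outputting poses for the robot.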

Next, follow subsequent sections to complete the application deployment.
