Add Pick Point by Teaching under ETH Mode + Generate Point Cloud Model from Images Captured by the Camera


Add Pick Point by Teaching under ETH Mode

Preparation

  1. Start the robot, read the TCP data from the teach pendant, and record it. If you are using Mech-Viz, you can also check the TCP by double-clicking the corresponding tool model under Tools in the Mech-Viz project resource tree.

  2. Open Matching Model and Pick Point Editor.

    1. Start Mech-Vision.

    2. Open an existing project or create a new blank project, ensuring it includes the “Capture Images from Camera” Step. If this step is not present, please add it.

    3. Click Matching Model and Pick Point Editor on the toolbar.

  3. Place the target object within the camera’s field of view.

Workflow

  1. In the initial interface of the Matching Model and Pick Point Editor, click Capture point cloud.

  2. In the pop-up window, click Teach the Pick Point and capture point cloud.

  3. Click Teach the Pick Point.

  4. In the TCP field, enter the TCP data recorded during preparation.

  5. Set the picking pose.

    1. Move the robot close to the target object using the teach pendant. Operate the end effector to perform a picking test, ensuring that it can securely pick the object at the pick point.

    2. In the Picking pose section, click Fetch current pose, or input the pose displayed on the teach pendant manually.

  6. Click Confirm to generate a pick point.

  7. Move the robot outside the camera’s field of view. Be careful not to touch the target object in this process to avoid altering its pose.
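Conceptually, the pick point taught this way can be thought of as the taught TCP pose expressed relative to the object (point cloud model) frame: once the vision system later locates the object, combining the new object pose with that relative pose yields the pose the robot should move to. The following Python sketch illustrates this frame arithmetic with numpy and SciPy; the pose values and the pose_to_matrix helper are illustrative assumptions, not part of Mech-Vision.

    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def pose_to_matrix(xyz_mm, quat_xyzw):
        """Build a 4x4 homogeneous transform from a position (mm) and a quaternion."""
        T = np.eye(4)
        T[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
        T[:3, 3] = xyz_mm
        return T

    # Hypothetical example values:
    # T_base_obj: object pose in the robot base frame (from vision matching)
    # T_base_tcp: TCP pose in the robot base frame (read from the teach pendant)
    T_base_obj = pose_to_matrix([600.0, 50.0, 120.0], [0.0, 0.0, 0.0, 1.0])
    T_base_tcp = pose_to_matrix([600.0, 50.0, 180.0], [1.0, 0.0, 0.0, 0.0])

    # The taught pick point is the TCP pose expressed in the object frame.
    T_obj_pick = np.linalg.inv(T_base_obj) @ T_base_tcp

    # At runtime, a newly detected object pose yields the pick pose to send to the robot.
    T_base_obj_new = pose_to_matrix([420.0, -80.0, 120.0], [0.0, 0.0, 0.3826834, 0.9238795])
    T_base_pick = T_base_obj_new @ T_obj_pick
    print(np.round(T_base_pick, 3))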

Generate Point Cloud Model from Images Captured by the Camera

Adjust Camera Parameters

Preparation

  1. Lighting conditions: Ensure that the lighting around the target object is even and neither too dark nor too bright.

  2. Target object placement: Place the target object within the camera’s field of view and ensure that its positioning highlights the key features of the target object.

  3. Background selection: Ensure that the target object is easily distinguishable from the background.

Adjust Camera Parameters

  1. Open Mech-Eye Viewer and connect to the camera.

  2. Acquire data once by clicking the capture button.

  3. Click point cloud to check the result.

    If the target object is outside the camera’s field of view, place the target object within the camera’s field of view.

    If the exposure time is set too low, increase the exposure time in the 3D parameters.

    If the exposure time is set too high, decrease the exposure time in the 3D parameters.

    If the point cloud of the target object is incomplete, slightly adjust the exposure time in the 3D parameters.

  4. Keep capturing until you acquire point clouds with relatively high quality.
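To judge whether the current exposure settings produce a usable point cloud, you can also inspect a saved capture in a script. The sketch below assumes the point cloud has been exported to a PLY file (the file name, the region bounds, and the 5000-point threshold are hypothetical) and uses the open-source Open3D library, which is not part of the Mech-Eye software.

    import numpy as np
    import open3d as o3d

    # Hypothetical point cloud exported from Mech-Eye Viewer.
    pcd = o3d.io.read_point_cloud("capture_0001.ply")
    points = np.asarray(pcd.points)
    print(f"Total points: {len(points)}")

    # Rough completeness check: count points inside the region where the
    # target object is expected to be (bounds in meters; adjust as needed).
    roi_min = np.array([-0.15, -0.15, 0.30])
    roi_max = np.array([0.15, 0.15, 0.80])
    in_roi = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    print(f"Points in the expected object region: {in_roi.sum()}")

    # A very sparse cloud usually means the exposure time is too low or too
    # high; recapture after adjusting the 3D exposure parameters.
    if in_roi.sum() < 5000:
        print("Point cloud looks sparse; consider adjusting the exposure time.")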

Acquire Point Cloud

Preparation

  1. Start Mech-Vision.

  2. Open an existing project or create a new blank project, ensuring it includes the “Capture Images from Camera” Step. If this step is not present, please add it.

  3. Click Matching Model and Pick Point Editor on the toolbar.

Acquire Point Cloud

  1. Click the Capture point cloud button in the start screen, and select Capture point cloud in the pop-up window.


    When the target object is flat but shows clear and fixed edge features in the images (such as panels, track shoes, connecting rods, and brake discs), it is recommended to use an edge model, i.e., select the Use edge point cloud checkbox. When the surface of the target object has many undulations (such as crankshafts, rotors, and steel rods), it is recommended to use a surface model.

    Because the track link used in this example has complex surface features, a surface point cloud model should be created for it. Therefore, clear the Use edge point cloud checkbox, and then click the Capture object button to capture the depth map of the target object.


  2. The captured depth map contains both the target object and the background.


  3. In the upper-right corner, click the Remove background button.


  4. Remove the target object from the camera’s field of view, and then click Capture background to capture a depth map of the background.


  5. After the depth map of the background has been captured, click the Next button in the upper-right corner.


  6. Click Remove background to generate a point cloud model of the target object; the sketch after this procedure illustrates the idea behind background removal.


  7. Click Finish to import the point cloud model of the target object to Matching Model and Pick Point Editor.

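Under the hood, removing the background amounts to comparing the two depth maps pixel by pixel and keeping only the pixels whose depth changed when the object was present. The numpy sketch below illustrates the idea; the file names and the 2 mm tolerance are hypothetical assumptions, and the editor’s actual algorithm may differ.

    import numpy as np

    # Hypothetical depth maps (in millimeters): one captured with the target
    # object present and one of the empty background.
    depth_object = np.load("depth_with_object.npy")      # shape (H, W)
    depth_background = np.load("depth_background.npy")   # shape (H, W)

    # A pixel belongs to the object if it is valid (non-zero) in both maps and
    # its depth differs from the background by more than a small tolerance.
    tolerance_mm = 2.0
    valid = (depth_object > 0) & (depth_background > 0)
    object_mask = valid & (np.abs(depth_object - depth_background) > tolerance_mm)

    # Keep only the object pixels; everything else is discarded as background.
    depth_model = np.where(object_mask, depth_object, 0.0)
    print(f"Object pixels kept: {object_mask.sum()} of {object_mask.size}")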

Edit Point Cloud

The model point cloud captured in the previous step may not meet the actual requirements. In that case, you need to edit the model, which includes removing outliers and downsampling the point cloud; a script-based sketch of both operations is given after this procedure.

  1. Remove outliers

    Select the outliers with the selection tool, and then click the delete button to remove the selected points. Stray points that are clearly separated from the target object’s point cloud are outliers and can be removed this way.

  2. Point cloud downsampling

    Point cloud downsampling reduces the number of points in the point cloud model, thus improving model matching efficiency. Click the downsampling button and set the sampling interval in the pop-up window.

    For example, after downsampling with a sampling interval of 3 mm, the point cloud model contains far fewer points than before while preserving the object’s overall shape.
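If you prefer to preprocess the model point cloud in a script rather than in the editor, both operations can be reproduced with the open-source Open3D library (not part of Mech-Vision). The file name and parameter values below are illustrative assumptions, and the editor’s own sampling method may differ from voxel downsampling.

    import open3d as o3d

    # Hypothetical model point cloud exported from the editor.
    pcd = o3d.io.read_point_cloud("track_link_model.ply")

    # Remove outliers: points whose mean distance to their 20 nearest neighbors
    # deviates by more than 2 standard deviations are treated as noise.
    pcd_clean, kept_indices = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)

    # Downsample with a 3 mm voxel grid (coordinates assumed to be in meters),
    # matching the 3 mm sampling interval used in the example above.
    pcd_down = pcd_clean.voxel_down_sample(voxel_size=0.003)

    print(f"Original: {len(pcd.points)} points, "
          f"cleaned: {len(pcd_clean.points)}, "
          f"downsampled: {len(pcd_down.points)}")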
