Use the Object Detection Module

You are currently viewing the documentation for the latest version (2.6.0). To access a different version, click the "Switch version" button located in the upper-right corner of the page.

Note: If you are not sure which version of the product you are currently using, please contact Mech-Mind Technical Support.

Please click here to download an image dataset of rotors. In this topic, we will use the Object Detection module to train a model that detects the positions of rotors in the images and outputs their quantity.

You can also use your own data. The overall workflow is the same, but the labeling step will differ.

Workflow

  1. Create a new project and add the Object Detection module: Click New Project in the interface, name the project, and select a directory to save it. Then click the create icon in the upper-right corner of the Modules panel and add the Object Detection module.

    [Screenshot: adding the Object Detection module]
  2. Import the image data of rotors: Unzip the downloaded data file. Click the Import/Export button in the upper-left corner, select Import Folder, and import the image data.

    [Screenshot: importing images]
    If you select Import Dataset instead, you can import datasets in the DLKDB (.dlkdb) or COCO format. Click here to download the example dataset.
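For reference, a COCO detection dataset is a single JSON file with three top-level arrays: images, annotations, and categories. The sketch below builds a minimal example; the file name, image size, and box coordinates are invented for illustration and are not taken from the rotor dataset.

```python
import json

# Minimal COCO-style detection annotation (illustrative values only):
# one image containing one labeled rotor.
coco = {
    "images": [
        {"id": 1, "file_name": "rotor_001.png", "width": 1280, "height": 1024}
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            # COCO bounding boxes are [x, y, width, height] in pixels,
            # with (x, y) at the top-left corner of the box.
            "bbox": [412.0, 300.5, 96.0, 88.0],
            "area": 96.0 * 88.0,
            "iscrowd": 0,
        }
    ],
    "categories": [{"id": 1, "name": "rotor"}],
}

# Write the annotation file that sits alongside the image folder.
with open("annotations.json", "w") as f:
    json.dump(coco, f, indent=2)
```

Each annotation links an image to a category by id, which is why the ids must be consistent across the three arrays.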
  3. Select an ROI: Click the ROI Tool button and adjust the frame to select the bin containing the rotors as the ROI. Then click the OK button in the lower-right corner of the ROI to save the settings. Setting an ROI avoids interference from the background and reduces processing time.

    [Screenshot: setting the ROI]
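The benefit of an ROI can be illustrated outside the tool as well: restricting processing to the region of interest shrinks the pixel data the model must handle. A minimal sketch with NumPy slicing (the coordinates are made up):

```python
import numpy as np

def crop_roi(image, x, y, w, h):
    """Return the ROI sub-image; basic NumPy slicing returns a view,
    so no pixel data is copied."""
    return image[y:y + h, x:x + w]

# A fake 1024x1280 grayscale image; the ROI covers only the bin area.
image = np.zeros((1024, 1280), dtype=np.uint8)
roi = crop_roi(image, x=200, y=150, w=600, h=500)
print(roi.shape)  # (500, 600)
```

The cropped array is less than a quarter of the original here, which is the same saving the ROI setting gives the detector.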
  4. Split the dataset into a training set and a validation set: By default, 80% of the images in the dataset are assigned to the training set and the remaining 20% to the validation set. You can drag the slider to adjust the proportion. Make sure that both the training set and the validation set include objects of every class to be detected. If the default split does not meet this requirement, right-click the name of an image and select Switch to training set or Switch to validation set to move the image to the other set.

    [Screenshot: splitting the dataset]
  5. Create labels: Create labels based on the type or feature of different objects. In this example, the labels are named after the rotors.

    You can right-click a label and select Merge Into to merge it into another label. If you perform a Merge Into operation after training the model, it is recommended that you retrain the model.
    [Screenshot: creating labels]
  6. Label images: Draw rectangular selections on the images to label all the rotors. Select the rotors as precisely as possible and avoid including irrelevant regions; inaccurate labeling will degrade the training result. Click here to learn how to use the labeling tools.

    [Screenshot: labeling images]
  7. Train the model: Keep the default training parameter settings and click Train to start training the model.

    [Screenshot: training]
  8. Validate the model: After the training is completed, click Validate to validate the model and check the results.

    After you validate a model, you can import new image data to the current module and use the pre-trained labeling feature to perform auto-labeling based on this model. For more information, see Pre-trained labeling.

    [Screenshot: validation results]
  9. Export the model: Click Export, set Max num of inference objects in the pop-up dialog box, then click Export again and select a directory to save the exported model.

    Max num of inference objects is the maximum number of objects detected in one round of inference; the default is 100.
    [Screenshot: exporting the model]

    The exported model can be used in Mech-Vision and Mech-DLK SDK. Click here to view the details.
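Downstream code typically receives a list of detections from the deployed model and derives the rotor count from it; the Max num of inference objects setting caps how many detections one round of inference can return. The post-processing sketch below is generic: the detection tuples and threshold are hypothetical, and this is not the Mech-DLK SDK output format or API.

```python
from collections import Counter

# Hypothetical detection results: (label, confidence, [x, y, w, h]).
# Invented for illustration; NOT the Mech-DLK SDK output structure.
detections = [
    ("rotor", 0.97, [410, 300, 96, 88]),
    ("rotor", 0.91, [620, 340, 94, 90]),
    ("rotor", 0.42, [100, 100, 50, 50]),  # low-confidence detection
]

MAX_INFERENCE_OBJECTS = 100  # mirrors the export dialog's default
CONFIDENCE_THRESHOLD = 0.5   # assumed filtering threshold

# Keep confident detections, capped at the per-inference maximum.
kept = [d for d in detections if d[1] >= CONFIDENCE_THRESHOLD]
kept = kept[:MAX_INFERENCE_OBJECTS]

# Count detected objects per class to get the output quantity.
counts = Counter(label for label, _, _ in kept)
print(counts["rotor"])  # 2
```

Whatever the actual SDK returns, the same two steps apply: filter the detections you trust, then count per class to obtain the quantity the module outputs.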
