Getting Started


This chapter introduces how to use Mech-DLK SDK to run inference with the Defect Segmentation model exported from Mech-DLK.

Prerequisites

Inference Flow

[Figure: inference flow diagram]

Function Description

In this section, we take the Defect Segmentation model exported from Mech-DLK as an example to walk through the functions needed for model inference with Mech-DLK SDK.

Create an Input Image

MMindImage input;
createImage("path/to/image.png", &input);

Call the function createImage to create an input image.

Create an Inference Engine

Engine engine;
createPackInferEngine(&engine, "path/to/xxx.dlkpack", GpuDefault, 0);

Call the function createPackInferEngine to create an inference engine.

  • If an NVIDIA discrete graphics card is available on your device, you can set the inference backend, i.e., the third parameter in the function, to GpuDefault or GpuOptimization.

    • When the inference backend is set to GpuOptimization, model optimization takes about 1 to 5 minutes to complete.

  • If no NVIDIA discrete graphics card is available on your device, you can only set the inference backend to CPU.

  • The fourth parameter in this function is the ID of the NVIDIA graphics card to use, which is 0 when there is only one graphics card. When the inference backend is set to CPU, this parameter is ignored. A backend-selection sketch follows this list.
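The SDK calls shown in this chapter do not include a GPU query, so the sketch below assumes you determine GPU availability yourself; the haveNvidiaGpu flag is hypothetical and not part of the SDK:

Engine engine;

/* Hypothetical flag: set it from your own device check. */
int haveNvidiaGpu = 1;

/* Use a GPU backend when an NVIDIA discrete graphics card is present,
   otherwise fall back to CPU. The fourth parameter (graphics card ID)
   is ignored for the CPU backend. */
createPackInferEngine(&engine, "path/to/xxx.dlkpack",
                      haveNvidiaGpu ? GpuDefault : CPU, 0);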

Deep Learning Engine Inference

infer(&engine, &input, 1);

Call the function infer for deep learning engine inference.

In this function, the third parameter 1 denotes the number of images for inference, which must equal the number of images in input.
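If you need to run inference on several images at once, a plausible pattern is shown below. It assumes multiple MMindImage instances are passed as a contiguous array, which the count parameter suggests but this chapter does not confirm:

/* Sketch: batch inference on two images (the contiguous-array
   layout of the inputs is an assumption). */
MMindImage inputs[2];
createImage("path/to/image1.png", &inputs[0]);
createImage("path/to/image2.png", &inputs[1]);
infer(&engine, inputs, 2); /* the count must match the number of images */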

Obtain the Defect Segmentation Result

DefectAndEdgeResult* defectAndEdgeResult = NULL;
unsigned int resultNum = 0;
getDefectSegmentataionResult(&engine, 0, &defectAndEdgeResult, &resultNum);

Call the function getDefectSegmentataionResult to obtain the result of the defect segmentation model.

In this function, the second parameter 0 denotes the model index in the deep learning model inference package.

  • If the inference package contains a single model, the parameter can only be set to 0.

  • If the inference package consists of cascaded models, set the parameter according to the order of the modules in the model inference package, as shown in the sketch after this list.
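For illustration only, suppose a cascaded package whose second model (index 1) is a Defect Segmentation module; this two-model layout is hypothetical:

/* Hypothetical cascaded package: the Defect Segmentation module is
   assumed to be the second model, so the model index is 1. */
DefectAndEdgeResult* defectAndEdgeResult = NULL;
unsigned int resultNum = 0;
getDefectSegmentataionResult(&engine, 1, &defectAndEdgeResult, &resultNum);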

Result Visualization

resultVisualization(&engine, &input, 1);
showImage(&input, "result");

Call the function resultVisualization to draw the model inference results on the image(s) in input, and the function showImage to display the visualized image.

In resultVisualization, the third parameter 1 denotes the number of images to visualize, which must equal the number of images in input.

Release Memory

releaseDefectSegmentationResult(&defectAndEdgeResult, resultNum);
releaseImage(&input);
releasePackInferEngine(&engine);

Release the memory of the model inference results, the input image(s), and the inference engine to prevent memory leaks.
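Putting the steps together, a minimal end-to-end sketch looks as follows. It uses only the calls shown in this chapter; the include line is a placeholder for the actual Mech-DLK SDK header, and return-code checks are omitted for brevity:

#include "mech_dlk_sdk.h" /* placeholder; include the actual Mech-DLK SDK header */

int main(void)
{
    /* 1. Create the input image. */
    MMindImage input;
    createImage("path/to/image.png", &input);

    /* 2. Create the inference engine from the model package. */
    Engine engine;
    createPackInferEngine(&engine, "path/to/xxx.dlkpack", GpuDefault, 0);

    /* 3. Run inference on one image. */
    infer(&engine, &input, 1);

    /* 4. Obtain the Defect Segmentation result (model index 0). */
    DefectAndEdgeResult* defectAndEdgeResult = NULL;
    unsigned int resultNum = 0;
    getDefectSegmentataionResult(&engine, 0, &defectAndEdgeResult, &resultNum);

    /* 5. Draw the results on the image and display it. */
    resultVisualization(&engine, &input, 1);
    showImage(&input, "result");

    /* 6. Release results, image, and engine to prevent memory leaks. */
    releaseDefectSegmentationResult(&defectAndEdgeResult, resultNum);
    releaseImage(&input);
    releasePackInferEngine(&engine);
    return 0;
}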
