Get Started

This chapter introduces how to use Mech-DLK SDK to perform inference with a defect segmentation model exported from Mech-DLK.

Prerequisites

  • Install Mech-DLK SDK.

  • Download and install the Sentinel LDK encryption driver. After installation, ensure that the encryption driver appears in Settings > Apps > Apps & features on the IPC.

    If you have installed Mech-DLK on your device, you do not need to install the encryption driver again because it is already in place.
  • Obtain and manage the software license.

Inference Flow

[Figure: Mech-DLK SDK inference flow]

Function Description

In this section, we take the defect segmentation model exported from Mech-DLK as an example to show the functions you need to call when using Mech-DLK SDK for model inference.

Create an Input Image

Call the following function to create an input image.

  • C#

MMindImage image = new MMindImage();
image.CreateFromPath("path/to/image.png");
List<MMindImage> images = new List<MMindImage> { image };

  • C++

mmind::dl::MMindImage image;
image.createFromPath("path/to/image.png");
std::vector<mmind::dl::MMindImage> images = {image};

  • C

MMindImage input;
createImage("path/to/image.png", &input);
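
If you want to run inference on several images in one call, you can create one MMindImage per image and collect them in the same container. A minimal C++ sketch (the image paths are placeholders):

mmind::dl::MMindImage first;
first.createFromPath("path/to/image1.png");

mmind::dl::MMindImage second;
second.createFromPath("path/to/image2.png");

// All images in this vector are passed to the inference engine together.
std::vector<mmind::dl::MMindImage> images = {first, second};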

Create an Inference Engine

Call the following function to create an inference engine.

  • C#

InferEngine inferEngine = new InferEngine();
inferEngine.Create("path/to/xxx.dlkpack", BackendType.GpuDefault, 0);
  • If NVIDIA discrete graphics cards are available on your device, you can set the inference backend, i.e., the second parameter in the function, to GpuDefault or GpuOptimization.

    • When the parameter is set to GpuOptimization, you need to wait for one to five minutes for model optimization. FP16 is valid only under this setting.

  • If NVIDIA discrete graphics cards are unavailable on your device, you can only set the inference backend to CPU.

  • In this function, the third parameter represents the ID of the NVIDIA graphics card to use, which is 0 when there is only one graphics card. When the inference backend is set to CPU, this parameter is ignored.

  • C++

mmind::dl::MMindInferEngine engine;
engine.create(kPackPath);
// engine.setInferDeviceType(mmind::dl::InferDeviceType::GpuDefault);
// engine.setBatchSize(1);
// engine.setFloatPrecision(mmind::dl::FloatPrecisionType::FP32);
// engine.setDeviceId(0);
engine.load();

In the C++ interface, the model parameters can be set according to the actual situation:

  • When the corresponding setXXX function is not called, the defaults are used: BatchSize is 1, FloatPrecision is FP32, and DeviceId is 0.

  • If NVIDIA discrete graphics cards are available on your device, InferDeviceType defaults to GpuDefault; otherwise, it defaults to CPU.

  • If you need to change the parameters of the inference engine, the setXXX functions must be called before load().

  • When InferDeviceType is set to GpuOptimization, you need to wait for one to five minutes for model optimization. FP16 is valid only under this setting (see the sketch below).
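
For example, to run the engine on the GPU in optimized mode with FP16 precision, set the parameters before load(). This is a minimal sketch: the enumeration values InferDeviceType::GpuOptimization and FloatPrecisionType::FP16 are assumed to correspond to the GpuOptimization backend and FP16 precision described above, so verify the names against the headers shipped with Mech-DLK SDK.

mmind::dl::MMindInferEngine engine;
engine.create("path/to/xxx.dlkpack");
// Assumed enumeration value names; verify them against the SDK headers.
engine.setInferDeviceType(mmind::dl::InferDeviceType::GpuOptimization);
engine.setFloatPrecision(mmind::dl::FloatPrecisionType::FP16);
engine.setDeviceId(0);  // ID of the NVIDIA graphics card to use
engine.load();          // model optimization may take one to five minutes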

  • C

Engine engine;
createPackInferEngine(&engine, "path/to/xxx.dlkpack", GpuDefault, 0);
  • If NVIDIA discrete graphics cards are available on your device, you can set the inference backend, i.e., the third parameter in the function, to GpuDefault or GpuOptimization.

    • When the inference backend is set to GpuOptimization, you need to wait for one to five minutes for model optimization.

  • If NVIDIA discrete graphics cards are unavailable on your device, you can only set the inference backend to CPU.

  • In this function, the fourth parameter represents the ID of the NVIDIA graphics card to use, which is 0 when there is only one graphics card. When the inference backend is set to CPU, this parameter is ignored.

Deep Learning Engine Inference

Call the function below for deep learning engine inference.

  • C#

inferEngine.Infer(images);

  • C++

engine.infer(images);

  • C

infer(&engine, &input, 1);
In the C function, the third parameter (1 here) denotes the number of images for inference; it must equal the number of images in input.

Obtain the Defect Segmentation Result

Call the function below to obtain the defect segmentation result.

  • C#

List<Result> results;
inferEngine.GetResults(out results);

  • C++

std::vector<mmind::dl::MMindResult> results;
engine.getResults(results);

  • C

DefectAndEdgeResult* defectAndEdgeResult = NULL;
unsigned int resultNum = 0;
getDefectSegmentataionResult(&engine, 0, &defectAndEdgeResult, &resultNum);

In the C function, the second parameter 0 denotes the model index in the deep learning model inference package.

  • If the inference package contains a single model, the parameter can only be set to 0.

  • If the inference package contains cascaded models, the parameter should be set according to the order of modules in the model inference package.

Visualize Result

Call the function below to visualize the model inference result.

  • C#

inferEngine.ResultVisualization(images);
image.Show("result");

  • C++

engine.resultVisualization(images);
image.show("Result");

  • C

resultVisualization(&engine, &input, 1);
showImage(&input, "result");
In the C function, the third parameter (1 here) denotes the number of images for visualization; it must equal the number of images in input.

Release Memory

Call the following function(s) to release memory and prevent memory leaks.

  • C#

inferEngine.Release();

  • C++

engine.release();

  • C

releaseDefectSegmentationResult(&defectAndEdgeResult, resultNum);
releaseImage(&input);
releasePackInferEngine(&engine);
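
To put the steps above together, the following is a minimal end-to-end sketch in C++ that only combines the calls shown in this chapter. The header name, the image path, and the model package path are placeholders to adapt to your own installation; error handling is omitted.

// NOTE: minimal sketch. The header below is a placeholder; include the
// header(s) shipped with your Mech-DLK SDK installation instead.
#include "MMindInferEngine.h"

#include <vector>

int main()
{
    // Create the input image and put it into the container passed to the engine.
    mmind::dl::MMindImage image;
    image.createFromPath("path/to/image.png");
    std::vector<mmind::dl::MMindImage> images = {image};

    // Create and load the inference engine from the exported model package.
    mmind::dl::MMindInferEngine engine;
    engine.create("path/to/xxx.dlkpack");
    engine.load();

    // Run deep learning engine inference.
    engine.infer(images);

    // Obtain the defect segmentation results.
    std::vector<mmind::dl::MMindResult> results;
    engine.getResults(results);

    // Draw the results onto the images and display the first one.
    engine.resultVisualization(images);
    images[0].show("result");

    // Release the engine to free memory.
    engine.release();
    return 0;
}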
