Manage Deep Learning Model Packages in Mech-MSR


In Mech-MSR, you can use the deep learning model package management tool to import model packages.

Introduction

The deep learning model package management tool manages all deep learning model packages in Mech-MSR. You can use it to optimize single model packages exported from Mech-DLK 2.6.1 or above, and to manage and monitor each package's operation mode, hardware type, model efficiency, and status. The tool also monitors the GPU memory usage of the IPC.

If a Deep Learning Model Package Inference Step is used in the project, you can import the model packages into the deep learning model package management tool first and then use the models in the Step. Importing the model packages into the tool in advance lets you optimize them beforehand.

  • You need a valid Mech-DLK software license to use the imported model package(s) for deep learning inference. If you do not have a valid deep learning license, please contact Mech-Mind sales for the Pro-Run version, the Pro-Train version, or a single model package. If you do have a deep learning license, verify that it is valid, matches the model package to be used, and is recognized by the current IPC. If not, please contact Mech-Mind sales.

  • Ensure that the GPU driver version is 526.98 or later and that the CPU is a 6th-generation Intel Core or later. If the hardware does not meet these requirements, the deep learning model package cannot be imported. A quick way to check the driver version is shown below.
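
If you are unsure of the installed NVIDIA driver version, you can query it with the standard nvidia-smi utility. The sketch below is a minimal example, not part of Mech-MSR; it assumes an NVIDIA GPU with nvidia-smi on the PATH, and the 526.98 threshold comes from the requirement above.

    import subprocess

    MIN_DRIVER = (526, 98)  # minimum GPU driver version stated above

    # --query-gpu and --format are standard nvidia-smi options.
    version = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip().splitlines()[0]

    major, minor = (int(x) for x in version.split(".")[:2])
    print("Driver", version, "- OK" if (major, minor) >= MIN_DRIVER else "- too old")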

Since Mech-DLK 3.0.0, model packages are available in two types: single model packages and multiple model packages. Versions from Mech-DLK 2.4.1 up to (but not including) Mech-DLK 3.0.0 supported exporting only single model packages and cascaded model packages (that is, serially connected models).

  • Single model package: A model package that contains one and only one model for a deep learning algorithm module, such as an Instance Segmentation model.

  • Multiple model package: A model package that contains models for multiple deep learning algorithm modules, which can be combined in serial, parallel, or serial-parallel configurations.

    For example, a multi-model package may contain one image classification module and several defect segmentation modules: the image classification module is connected in series with the defect segmentation modules, while the defect segmentation modules are connected in parallel with one another.

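This topology can be pictured in code. The following is a purely illustrative sketch (plain Python, not Mech-MSR or Mech-DLK code); the function names are hypothetical stand-ins for the modules described above.

    # One classification model in series, feeding several defect segmentation
    # models whose branches are independent of one another (parallel).

    def classify(image):                        # serial stage (hypothetical)
        return "suspect"

    def defect_seg_a(image): return "mask_a"    # parallel branch 1 (hypothetical)
    def defect_seg_b(image): return "mask_b"    # parallel branch 2 (hypothetical)

    def run_multi_model_package(image):
        label = classify(image)                                       # serial connection
        masks = [seg(image) for seg in (defect_seg_a, defect_seg_b)]  # parallel branches
        return label, masks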

Start the Feature

You can open the tool in the following ways:

  • After creating or opening a project, select Deep Learning > Deep Learning Model Package Management Tool in the menu bar.

  • In the graphical programming workspace of the software, click the Config wizard button on the Deep Learning Model Package Inference Step.

  • In the graphical programming workspace of the software, select the Deep Learning Model Package Inference Step and then click the Open the editor button under the Model manager tool parameter in the Parameters section.


Interface Description

The fields in this tool are described as follows:

Available model package

The name of the imported model package.

Project name

The Mech-MSR project that uses the corresponding model package.

Model package type

Model package types include single model packages (such as object detection and text recognition) and multi-model packages.

Operation mode

The operation mode of the model package during inference, including Sharing mode and Performance mode.

  • Sharing mode: When this option is selected, inference requests from multiple Steps using the same model package are queued, which saves runtime resources.

  • Performance mode: When this option is selected, inference requests from multiple Steps using the same model package run in parallel, which speeds up inference but consumes more runtime resources. The difference is sketched below.
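
The two modes can be pictured as queued versus parallel access to a model. The sketch below is purely illustrative (plain Python threading, not a Mech-MSR API): sharing mode keeps one model instance and serializes requests through it, while performance mode gives each Step its own instance at the cost of extra memory.

    import threading, time

    def infer(lock, step_id):
        # Pretend one inference takes 1 second.
        if lock is not None:          # Sharing mode: one shared instance,
            with lock:                # requests are queued behind a lock
                time.sleep(1)
        else:                         # Performance mode: per-Step instance,
            time.sleep(1)             # requests run concurrently
        print(f"Step {step_id} done")

    shared = threading.Lock()         # Sharing mode: ~3 s total, one model in memory
    threads = [threading.Thread(target=infer, args=(shared, i)) for i in range(3)]
    # Performance mode (~1 s total, three models in memory):
    # threads = [threading.Thread(target=infer, args=(None, i)) for i in range(3)]

    for t in threads: t.start()
    for t in threads: t.join()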

Hardware type

The hardware type used for model package inference, including GPU (default), GPU (optimization), and CPU.

  • CPU: Use the CPU for deep learning model inference. Compared with GPU inference, this increases the inference time and may reduce recognition accuracy.

  • GPU (default): When this option is selected, the model package is not optimized for the specific hardware, and deep learning inference is not accelerated.

  • GPU (optimization): When this option is selected, the model package is optimized for the specific hardware. The one-time optimization process takes about 5 to 15 minutes. Once the optimized model package is used, the inference time is reduced.

The tool determines the available Hardware type options by detecting the IPC hardware. The display rules are as follows.

  • CPU: This option is shown when a computer with an Intel CPU is detected.

  • GPU (default), GPU (optimization): These options are shown when a computer with an NVIDIA discrete graphics card is detected, and the graphics card driver version is 526.98 or higher.

Model efficiency

The inference efficiency of the model package.

Model package status

The status of the model package: Loading and optimizing, Optimization failure, Not loaded, or Loading completed.

  • Loading and optimizing: The model package is under optimization.

  • Optimization failure: The model package optimization failed.

  • Not loaded: The model package is not currently used by any Deep Learning Model Package Inference Step.

  • Loading completed: The model package is currently used by a Deep Learning Model Package Inference Step.

Operation

You can release or delete a model package.

  • Release: After you click the Release button, the model package status changes from Loading completed to Not loaded, but the model package remains in the parameters of the Deep Learning Model Package Inference Step. Re-running the Step restores the status to Loading completed.

  • Delete: Once the Delete button is clicked, the model package will be removed from the current solution. After deletion, Steps that depend on the model package may fail to run.

The model package cannot be released or deleted while it is being optimized, and the software cannot be closed during this time. Perform the operation again after the optimization is completed.

Common Operations

Follow the steps below to learn about common operations for using the deep learning model package management tool.

Import a Deep Learning Model Package

  1. Open the deep learning model package management tool and click the Import button in the upper right corner.

  2. In the pop-up window, select the model package you want to import, and click the Open button. The model package will appear in the list.

Switch the Operation Mode

If you want to switch the operation mode for deep learning model package inference, click the icon in the Operation mode column of the deep learning model package management tool, and select Sharing mode or Performance mode.

  • When the deep learning model package is Optimizing or In use (i.e., being used by a running project), the operation mode cannot be changed.

  • When the operation mode of the deep learning model package is Sharing mode, the GPU ID in the Parameters section of the Deep Learning Model Package Inference Step cannot be changed.

Switch the Hardware Type

You can change the hardware type for deep learning model package inference to GPU (default), GPU (optimization), or CPU.

Click the icon in the Hardware type column of the deep learning model package management tool, and select GPU (default), GPU (optimization), or CPU.

  • When the deep learning model package is Optimizing or In use (i.e., the project using the model package is running), the hardware type cannot be switched.

  • When the model package contains a fast positioning module, GPU (optimization) is not supported.

  • When the universal model package for text detection or text recognition is used, GPU (default) is not supported.

Configure the Model Efficiency

The process of configuring model efficiency is as follows:

  1. Determine the deep learning model package to be configured.

  2. Click the corresponding Configure button under Model efficiency and set the Batch size and Precision in the pop-up window. The model execution efficiency is affected by batch size and precision.

  • Batch size: the number of images that are passed through the neural network at once during inference, ranging from 1 to 128. Increasing the value increases the model's inference speed, but more video memory is used. If the value is not set properly, the inference speed will be slowed down (see the sketch after this list).

    • It is recommended to set the Batch size to the actual number of images that are passed through the neural network at once.

    • Instance segmentation models do not support configuring the Batch size; for these models, the Batch size must remain at its default value of 1.

  • Precision (only available when the Hardware type is set to GPU (optimization)):

    • FP32: higher-precision model with slower inference.

    • FP16: lower-precision model with faster inference.
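
The effect of batch size and precision on an inference input can be sketched with plain NumPy. This is illustrative only (generic deep learning conventions, not Mech-MSR code); the image shape used here is a hypothetical example.

    import numpy as np

    batch_size = 4  # number of images passed through the network at once

    # A batch is one array whose first dimension is the batch size;
    # shape (N, C, H, W) is a common layout, chosen here as an example.
    images_fp32 = np.random.rand(batch_size, 3, 224, 224).astype(np.float32)

    # FP16 halves the memory footprint at some cost in numeric precision,
    # which is the FP32 vs. FP16 trade-off described above.
    images_fp16 = images_fp32.astype(np.float16)

    print(images_fp32.nbytes, images_fp16.nbytes)  # FP16 uses half the bytes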

Troubleshooting

Failed to Import a Deep Learning Model Package

Symptom

After a deep learning model package is selected for import, the error message "Failed to import the deep learning model package" is displayed.

Possible cause

  • If the model package was downloaded from the Download Center, the package may have been corrupted during download.

  • The model package may be damaged or edited.

  • The versions of Mech-MSR and Mech-DLK may be incompatible.

  • IPC hardware may not meet the requirements, such as insufficient memory or hard drive space.

Solutions

  • If the model package was downloaded from the Download Center, use the CRC-32 value to verify the integrity of the package (see the sketch after this list). If the CRC-32 value does not match, download the model package again.

  • Check if the model package is damaged or edited. If so, export the model package from Mech-DLK again.

  • Ensure that the versions of Mech-MSR and Mech-DLK are compatible. For more information about version compatibility, see Deep Learning Compatibility.

  • Check the IPC hardware to ensure there is sufficient memory and hard drive space.

  • If the issue still exists, contact Technical Support.
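
You can compute a file's CRC-32 checksum with Python's standard library and compare it with the value published in the Download Center. This is a minimal sketch; the file name below is a hypothetical placeholder for your downloaded model package.

    import zlib

    def crc32_of_file(path, chunk_size=1 << 20):
        # Read in chunks so large model packages do not need to fit in memory.
        crc = 0
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                crc = zlib.crc32(chunk, crc)
        return f"{crc & 0xFFFFFFFF:08X}"

    # Compare the printed value with the CRC-32 published in the Download Center.
    print(crc32_of_file("model_package.dlkpack"))  # hypothetical file name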

Failed to Optimize a Deep Learning Model Package

Symptom

When a deep learning model package is being optimized, the error message "Model package optimization failed" pops up.

Possible cause

Insufficient GPU memory.

Solutions

  • Remove the unused model packages in the tool and then re-import the model package for optimization.

  • Switch the “Operation mode” of other model packages to “Sharing mode” and then import the model package for optimization again.
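
To confirm that GPU memory is the bottleneck, you can check current usage with NVIDIA's nvidia-smi utility before retrying the optimization. This minimal sketch assumes an NVIDIA GPU with nvidia-smi on the PATH; the query options used are standard nvidia-smi flags.

    import subprocess

    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.used,memory.total",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout

    for i, line in enumerate(out.strip().splitlines()):
        used, total = (int(x) for x in line.split(","))
        print(f"GPU {i}: {used} MiB used of {total} MiB")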
