Release Notes

Note: If you are not sure which version of the product you are currently using, contact Mech-Mind Technical Support.

Mech-DLK 2.6.0 Release Notes

This section introduces the new features and improvements of Mech-DLK 2.6.0.

New Features

Added the Parameter Settings Feature for the Smart Labeling Suite

Mech-DLK provides the smart labeling suite for you to efficiently label datasets.

The smart labeling suite includes the following labeling tools:

  • Smart Labeling Tool

  • Pre-trained Labeling Tool (formerly “Pre-labeling Tool”)

  • Visual Foundation Model (VFM) Labeling Tool (formerly “Super-labeling Tool”)

Mech-DLK 2.6.0 has added the ability to configure parameters for the smart labeling suite. When you use the suite, you can configure the following:

  • In the menu bar, click Settings > Settings, and enable Always load smart labeling model. This reduces the frequency of model unloading in open projects, minimizing the number of times you need to wait for the labeling model to reload.

  • In the Instance Segmentation and Object Detection modules, when you use the smart labeling suite, you can click the Settings button in the upper-left corner of the selection region to configure labeling parameters. For more information about the parameters, see Introduction to Instance Segmentation Labeling Tools and Introduction to Object Detection Labeling Tools.

Improvements

Optimized the Training Parameters for Defect Segmentation

Mech-DLK 2.6.0 has optimized the training parameters for the Defect Segmentation module. You can now train a High-accuracy or High-speed model.

Optimized the Algorithm for Fast Positioning

  • Optimized training process: The Set Template button and Quick Template Tool are removed. The image adjustment operation is now performed during model validation, and it affects only the validation results. For more information, see Use the Fast Positioning Module.

  • New labeling tools: Polygon Tool, Ellipse Tool, Rectangle Tool, Smart Labeling Tool, Mask Polygon Tool, Mask Brush Tool, Mask Lasso Tool, Mask Eraser Tool, and ROI Tool.

Optimized the Algorithm for Unsupervised Segmentation

Compared to the original algorithm, the optimized Unsupervised Segmentation algorithm increases training speed by a factor of 7, GPU inference speed by a factor of 2, and CPU inference speed by a factor of 10.

Supported VFM Labeling Tool for Text Detection

Mech-DLK 2.6.0 supports the use of the VFM Labeling Tool in the Text Detection module. You can use this tool to label text areas in batches.

Optimized the Export Options for Cascaded Models

In Mech-DLK 2.6.0, you can export either a single model or all models from a set of cascaded models.

Optimized the Add Module Window

Mech-DLK 2.6.0 has optimized the Add Module window, dividing the algorithm modules into four categories: Categorization, Locating, Inspection, and OCR. Additionally, a functional description has been added for each module to help you select the module that best fits your business needs.

Adjusted Keyboard Shortcuts

Mech-DLK 2.6.0 has adjusted some keyboard shortcuts within the software to improve efficiency. You can view the shortcuts supported for the current module in the floating window located in the lower-right corner of the selection region.

Removed the Comment Area

Mech-DLK 2.6.0 has removed the Comment area in the lower-right corner of the software interface. You can use the Image Tag feature to add additional information to images.

Release Notes of Previous Versions

Click here to view Mech-DLK 2.5.x release notes
Click here to view Mech-DLK 2.4.x release notes

Mech-DLK 2.4.2 Release Notes

Added region-specific license control. Click Help > About to view the details.

Mech-DLK 2.4.1 Release Notes

New Features

  • Added the Cascade Mode

    Mech-DLK 2.4.1 has added a brand-new cascade mode, which enables modules to be combined to solve deep learning problems in complex scenarios. Note that when the Fast Positioning module is involved, it must be the first module in the cascade. For example, when you need to detect the positions of defects and then classify these defects, you can first add a Defect Segmentation module and then add a Classification module. In addition, when importing data from the previous module, you can select images according to actual needs and configure the import of these images in the Import window.

  • Added the Training Center

    The Training Center supports queued model training, which is suitable for scenarios that require training multiple models. With the Training Center, the software trains models in sequence, so you no longer need to click Train repeatedly, which saves a considerable amount of time.

  • Added the Mask Type of Mask Globally and Supported Custom Mask Fill

    In the Defect Segmentation module, when you select the Mask Polygon Tool, the mask type can be Mask single image or Mask globally. In addition, you can customize the mask color.

    • Mask single image: The mask is displayed only in the current image and takes effect only during training.

    • Mask globally: After a mask is drawn in the current image, it is displayed in all images. The mask takes effect during both training and validation.

  • Added the Floating Window of Keyboard Shortcuts

    Click the keyboard shortcut icon in the lower-right corner of the selection region to open the keyboard shortcuts window.

  • Added Auxiliary Labeling Lines for Rectangle Tool

    In the Instance Segmentation and Object Detection modules, auxiliary labeling lines are added for the Rectangle Tool to assist rectangular selection on images.

  • Added the Display and Filtering of Validation Result Confidence

    In the Instance Segmentation and Object Detection modules, a confidence filtering function has been added for validation results. You can adjust the confidence threshold to filter the validation results and then evaluate model accuracy; a minimal sketch of this kind of filtering is given below.
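
    The filtering described above simply keeps the validation results whose confidence reaches a chosen threshold. Below is a minimal Python sketch with made-up result data to illustrate the idea; it does not use any Mech-DLK interface.

      # Minimal sketch: keep only results at or above a confidence threshold.
      # The result data below is hypothetical, not produced by Mech-DLK.
      results = [
          {"label": "bolt", "confidence": 0.97},
          {"label": "bolt", "confidence": 0.42},
          {"label": "nut", "confidence": 0.88},
      ]

      threshold = 0.8  # raise or lower this to keep fewer or more results
      filtered = [r for r in results if r["confidence"] >= threshold]

      print(f"{len(filtered)} of {len(results)} results remain at threshold {threshold}")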

Improvements

  • Optimized the Classification Module

    The Classification module is optimized, which leads to faster training convergence and a 20% increase in accuracy in complex scenarios.

  • Optimized Mech-DLK SDK

    Mech-DLK SDK has been restructured to be more stable and easier to use. It supports inference with cascaded modules and switching between different operating hardware, and it provides richer samples for reference. (A conceptual sketch of cascaded inference with device selection is given after this list.)

  • Optimized the Setting of Defect Determination Rule

    The defect determination rule in the Defect Segmentation module is optimized. Click here to view the details.

  • Enabled the Setting of Translation in the Fast Positioning Module

    In the Image Adjustment window of the Fast Positioning module, you can translate the image along the X and Y axes. After training, images with objects in the specified positions and orientations are generated, meeting the requirements of more application scenarios.

  • Optimized the Template Tool

    In the Instance Segmentation and Object Detection modules, after selecting the Template Tool, press and hold the Shift key and scroll the mouse wheel to adjust the angle of the template. You can also set the Rotation angle to achieve the same purpose.
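
As a supplement to the Mech-DLK SDK item above: cascaded inference runs models in sequence, with the output of one model feeding the next, and each model can be assigned its own operating hardware. The following Python sketch illustrates only that flow; the StubModel class and its methods are hypothetical stand-ins and are not the Mech-DLK SDK API.

    # Conceptual sketch of cascaded inference with per-model device selection.
    # StubModel is a hypothetical stand-in, not a Mech-DLK SDK class.
    class StubModel:
        def __init__(self, name, device="GPU"):
            self.name = name
            self.device = device  # "CPU" or "GPU": the operating hardware

        def infer(self, data):
            # A real model would run inference here; this only traces the call chain.
            return f"{data} -> {self.name}[{self.device}]"

    # Cascade: locate defects first, then classify the defects that were found.
    segmentation = StubModel("DefectSegmentation", device="GPU")
    classification = StubModel("Classification", device="CPU")

    result = classification.infer(segmentation.infer("input_image"))
    print(result)  # input_image -> DefectSegmentation[GPU] -> Classification[CPU]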

Click here to view the Mech-DLK 2.3.0 release notes

Mech-DLK 2.3.0 Release Notes

  • Graphics Card Driver Requirement

    Before using Mech-DLK 2.3.0, please upgrade the graphics card driver to 472.50 or above.

  • Improved the Training Speed

    Optimized the algorithms, and thus significantly improved the speed of model training. Only the optimal model is saved during training, and the training cannot be stopped halfway.

  • Added the Smart Labeling Tool

    For modules including Defect Segmentation, Instance Segmentation, and Object Detection, you can do smart labeling by selecting the Smart Labeling Tool, clicking the objects to be labeled, right-clicking to undo redundant selections, and pressing the Enter key to complete the labeling.

  • Added the Function of Adding/Removing Vertices for the Polygon Tool

    For the Instance Segmentation and Object Detection modules, after labeling with the Polygon Tool, if the selection needs to be modified, you can left-click the line segment between two vertices to add a vertex, or right-click a vertex to remove it.

  • Added the Template Tool

    For the Instance Segmentation and Object Detection modules, you can use the Template Tool to set the selection as a template. The template can be applied by simply clicking the images. It is suitable for scenarios where there are multiple neatly arranged objects of the same type in an image, and it improves labeling efficiency.

  • Added the Function of Preview by Zooming

    Supports previewing full images and cropped cell images.

  • Optimized the Grid Cutting Tool

    After cutting the image with the grid, you can select a cell image by checking the box in its upper-left corner, and you can preview it by clicking the button in its upper-right corner.

  • Optimized the Data Filtering Mechanism

    Added options for filtering results: “Correct results”, “Wrong results”, “False negative”, and “False positive”. Added options for filtering data types: “Labeled as OK” and “Labeled as NG”.

  • Built-in Deep Learning Environment

    The deep learning environment is built into Mech-DLK, so models can be trained without installing a separate environment.

Click here to view Mech-DLK 2.2.1 release notes

Mech-DLK 2.2.1 Release Notes

  • Added the Function of Showing the Class Activation Maps for Module Classification

    After the model is trained, click Generate CAM. The class activation maps show, as heat maps, the weights of the features according to which the model classifies an image into its class. Image regions with warmer colors have higher weights for classifying the image into its class. (A brief sketch of how such a heat map can be formed is given after this list.)

  • Supported Validation and Export of CPU Models

    • Classification and Object Detection: After training is completed, select the deployment device as CPU or GPU before exporting the model.

    • Instance Segmentation: Before training the model, set the training parameters. When exporting a model, select the deployment device as CPU/GPU:

      • CPU lightweight model: Before training the model, set the training parameter Model type to Lite (better with CPU deployment). When exporting the model for deployment, set Deployment device to CPU or GPU.

      • GPU standard model: Before training the model, set the training parameter Model type to Normal (better with GPU deployment). When exporting the model for deployment, set Deployment device to GPU.
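
For background on the class activation maps mentioned above: a CAM is essentially a weighted sum of the final convolutional feature maps, using the classifier weights of the target class, normalized and overlaid on the image as a heat map. The NumPy sketch below uses random, illustrative values only and is not how Mech-DLK implements the Generate CAM feature.

    # Minimal CAM sketch: weight each final-stage feature map by the target
    # class's classifier weight and sum them into a heat map. All shapes and
    # values here are illustrative assumptions, not taken from a Mech-DLK model.
    import numpy as np

    num_channels, h, w = 8, 7, 7
    feature_maps = np.random.rand(num_channels, h, w)   # final conv features
    class_weights = np.random.rand(3, num_channels)     # 3 classes x channels

    target_class = 1
    cam = np.tensordot(class_weights[target_class], feature_maps, axes=1)  # (h, w)

    # Normalize to [0, 1] so the map can be rendered as a heat map over the image.
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    print(cam.shape, float(cam.min()), float(cam.max()))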
