Model Validation

After model training, you can configure validation parameters, run validation, and view the models' recognition results on the Validation parameter bar. In addition, in the Object Detection and Instance Segmentation modules you can set a confidence threshold to filter the results.
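The confidence threshold mentioned above simply discards detections whose confidence score falls below a chosen value. The sketch below illustrates the idea in plain Python; the result structure (`label`, `confidence`, `bbox`) is a hypothetical example, not the product's actual data format.

```python
# Hypothetical sketch of confidence-based filtering, as done by the
# Object Detection / Instance Segmentation modules. The dict keys used
# here are assumptions for illustration only.

def filter_by_confidence(results, threshold=0.5):
    """Keep only results whose confidence meets the threshold."""
    return [r for r in results if r["confidence"] >= threshold]

detections = [
    {"label": "bolt", "confidence": 0.92, "bbox": (10, 10, 50, 50)},
    {"label": "bolt", "confidence": 0.31, "bbox": (60, 12, 95, 48)},
    {"label": "nut",  "confidence": 0.77, "bbox": (100, 20, 130, 45)},
]

# With a threshold of 0.5, the low-confidence (0.31) detection is dropped.
kept = filter_by_confidence(detections, threshold=0.5)
```

Raising the threshold yields fewer but more reliable detections; lowering it keeps more candidates at the cost of more false positives.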

Validation parameters

Click Validation parameter settings to open the window for validation parameter settings.

  • Hardware type

    • CPU: Use the CPU for deep learning model inference, which increases inference time and may reduce recognition accuracy compared with a GPU.

    • GPU (default): Run model inference without hardware-specific optimization; inference is not accelerated.

    • GPU (optimization): Run model inference after optimizing for the hardware. The optimization only needs to be done once and is expected to take 5–15 minutes; inference time is reduced afterward.

  • GPU ID

    The graphics card(s) of the device on which the model is deployed. If multiple GPUs are available on the deployment device, the model can be deployed on a specified GPU.

  • Float precision

    • FP32: higher model accuracy, slower inference.

    • FP16: lower model accuracy, faster inference.

  • Max num of inference objects (only visible in the Instance Segmentation module and Object Detection module)

    The maximum number of objects that can be inferred in one round of inference. The default value is 100.

  • Character limit (only visible in the Text Recognition module)

    The maximum number of characters that can be recognized in an image. The default value is 50.
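The FP32/FP16 trade-off in the Float precision setting above comes down to how many significant digits each format preserves. This stdlib-only Python sketch (unrelated to the product itself) round-trips a value through each precision using the `struct` module's `f` (single) and `e` (half) formats:

```python
import struct

def to_fp32(x: float) -> float:
    """Round-trip a value through IEEE 754 single precision (FP32)."""
    return struct.unpack('f', struct.pack('f', x))[0]

def to_fp16(x: float) -> float:
    """Round-trip a value through IEEE 754 half precision (FP16)."""
    return struct.unpack('e', struct.pack('e', x))[0]

score = 0.123456789
print(to_fp32(score))  # preserves roughly 7 significant digits
print(to_fp16(score))  # preserves roughly 3-4 significant digits
```

The larger rounding error of FP16 is usually a small accuracy cost, while halving the data width lets the GPU move and process values faster.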

After setting the parameters, click OK, then click Validate and wait for the validation to complete.
