Mech-DLK SDK C++ API 2.0.2
C++ API reference documentation for secondary development with Mech-DLK
mmind::dl::MMindInferEngine Class Reference

Defines the infer engine. More...

#include <MMindInferEngine.h>

Public Member Functions

 MMindInferEngine ()
 Constructs the infer engine.
 
StatusCode setBatchSize (const unsigned int batchSize, const unsigned int moduleIdx=0)
 Sets the batch size of the model package.
 
StatusCode setBatchSize (const std::vector< unsigned int > &batchSize)
 Sets the batch size of the model package.
 
StatusCode setFloatPrecision (const FloatPrecisionType floatPrecisionType, const unsigned int moduleIdx=0)
 Sets the float precision of the model package.
 
StatusCode setFloatPrecision (const std::vector< FloatPrecisionType > &floatPrecisionType)
 Sets the float precision of the model package.
 
StatusCode setDeviceId (const unsigned int deviceId)
 Sets the device ID.
 
StatusCode setInferDeviceType (const InferDeviceType type)
 Sets the infer device type.
 
StatusCode create (const std::string &modelPath)
 Creates an infer engine for model package inference.
 
StatusCode load ()
 Loads the model into memory.
 
StatusCode infer (const std::vector< MMindImage > &images)
 Performs image inference using the model package inference engine.
 
StatusCode getResults (std::vector< MMindResult > &results)
 Gets the model inference result.
 
StatusCode resultVisualization (std::vector< MMindImage > &images)
 Draws all the model results onto the images.
 
StatusCode moduleResultVisualization (std::vector< MMindImage > &images, const unsigned int moduleIdx)
 Draws the results of the module at the specified index onto the images.
 
std::vector< DeepLearningAlgoType > getDeepLearningAlgoTypes () const
 Gets the model type list.
 
void release ()
 Releases the memory of the model package inference engine.
 
 MMindInferEngine (const MMindInferEngine &rhs)=delete
 
 MMindInferEngine (MMindInferEngine &&rhs)
 
MMindInferEngine & operator= (const MMindInferEngine &rhs)=delete
 
MMindInferEngine & operator= (MMindInferEngine &&rhs)
 
 ~MMindInferEngine ()
 

Detailed Description

Defines the infer engine.
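
The typical call sequence is: configure the engine, create it from a model package, load the model, run inference, fetch and visualize the results, and release the engine. The sketch below illustrates this flow. The model package path is a placeholder, the StatusCode::SUCCESS enumerator name and the scope qualification of the InferDeviceType enumerators are assumptions (check StatusCode and InferDeviceType for the actual names), and MMindImage construction is omitted because it is covered elsewhere in the SDK.

#include <MMindInferEngine.h>

#include <vector>

using namespace mmind::dl;

int main()
{
    MMindInferEngine engine;

    // Configure the inference backend before creating the engine.
    engine.setInferDeviceType(InferDeviceType::GpuDefault); // enumerator scope assumed
    engine.setDeviceId(0); // first GPU; ignored in CPU mode

    // Create the engine from a model package exported from Mech-DLK (placeholder path).
    if (engine.create("model_package.dlkpack") != StatusCode::SUCCESS) // success value assumed
        return -1;

    // Load the model into memory; with GpuOptimization this may take 1-5 minutes.
    if (engine.load() != StatusCode::SUCCESS)
        return -1;

    std::vector<MMindImage> images; // inputs prepared elsewhere
    if (engine.infer(images) == StatusCode::SUCCESS) {
        std::vector<MMindResult> results;
        engine.getResults(results);          // fetch the inference results
        engine.resultVisualization(images);  // draw all module results onto the inputs
    }

    engine.release(); // free the engine's resources
    return 0;
}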

Constructor & Destructor Documentation

◆ MMindInferEngine() [1/3]

mmind::dl::MMindInferEngine::MMindInferEngine ( )

Constructs the infer engine.

◆ MMindInferEngine() [2/3]

mmind::dl::MMindInferEngine::MMindInferEngine ( const MMindInferEngine &  rhs)
delete

◆ MMindInferEngine() [3/3]

mmind::dl::MMindInferEngine::MMindInferEngine ( MMindInferEngine &&  rhs)

◆ ~MMindInferEngine()

mmind::dl::MMindInferEngine::~MMindInferEngine ( )

Member Function Documentation

◆ create()

StatusCode mmind::dl::MMindInferEngine::create ( const std::string &  modelPath)

Creates an infer engine for model package inference.

Parameters
[in] modelPath The path to the model package exported from Mech-DLK.
Returns
See StatusCode for details.
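
A minimal sketch of creating the engine with error handling; the path is a placeholder and the StatusCode::SUCCESS enumerator name is an assumption, so check StatusCode for the actual codes.

MMindInferEngine engine;
const StatusCode code = engine.create("C:/models/example.dlkpack"); // placeholder path
if (code != StatusCode::SUCCESS) { // success enumerator name assumed
    // Creation failed; inspect the returned StatusCode.
}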

◆ getDeepLearningAlgoTypes()

std::vector< DeepLearningAlgoType > mmind::dl::MMindInferEngine::getDeepLearningAlgoTypes ( ) const

Gets the model type list.

Returns
See DeepLearningAlgoType for details.
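
A short sketch of querying the module types of a created engine (continuing the engine from the example in the Detailed Description):

const std::vector<DeepLearningAlgoType> algoTypes = engine.getDeepLearningAlgoTypes();
for (std::size_t i = 0; i < algoTypes.size(); ++i) {
    // algoTypes[i] identifies the algorithm of module i; see DeepLearningAlgoType.
}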

◆ getResults()

StatusCode mmind::dl::MMindInferEngine::getResults ( std::vector< MMindResult > &  results)

Gets the model inference result.

Parameters
[out] results See MMindResult for details.
Returns
See StatusCode for details.

◆ infer()

StatusCode mmind::dl::MMindInferEngine::infer ( const std::vector< MMindImage > &  images)

Performs image inference using the model package inference engine.

Parameters
[in] images See MMindImage for details.
Returns
See StatusCode for details.
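
A sketch pairing infer() with getResults(), assuming an engine that has already been created and loaded and a StatusCode::SUCCESS enumerator (name assumed):

std::vector<MMindImage> images; // inputs prepared elsewhere; MMindImage construction is SDK-specific
if (engine.infer(images) == StatusCode::SUCCESS) { // success value assumed
    std::vector<MMindResult> results;
    engine.getResults(results); // results is filled by the engine
}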

◆ load()

StatusCode mmind::dl::MMindInferEngine::load ( )

Loads the model into memory.

Returns
See StatusCode for details.
Note
When the type of the infer device is GpuOptimization, it may take 1-5 minutes to optimize the model package.
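
A sketch of loading after selecting GPU optimization; the enumerator scope qualification is assumed:

engine.setInferDeviceType(InferDeviceType::GpuOptimization); // enumerator scope assumed
if (engine.load() != StatusCode::SUCCESS) { // optimization may take 1-5 minutes
    // Loading failed; inspect the returned StatusCode.
}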

◆ moduleResultVisualization()

StatusCode mmind::dl::MMindInferEngine::moduleResultVisualization ( std::vector< MMindImage > &  images,
const unsigned int  moduleIdx 
)

Draws the results of the module at the specified index onto the images.

Parameters
[in] images See MMindImage for details.
[in] moduleIdx The index of the specified module in the model package.
Returns
See StatusCode for details.
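
For example, to overlay only the results of the first module (continuing the images and engine from the inference example):

// Draw the results of the module at index 0 onto the input images.
engine.moduleResultVisualization(images, 0);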

◆ operator=() [1/2]

MMindInferEngine & mmind::dl::MMindInferEngine::operator= ( const MMindInferEngine &  rhs)
delete

◆ operator=() [2/2]

MMindInferEngine & mmind::dl::MMindInferEngine::operator= ( MMindInferEngine &&  rhs)

◆ release()

void mmind::dl::MMindInferEngine::release ( )

Releases the memory of the model package inference engine.

◆ resultVisualization()

StatusCode mmind::dl::MMindInferEngine::resultVisualization ( std::vector< MMindImage > &  images)

Draws all the model results onto the images.

Parameters
[in] images See MMindImage for details.
Returns
See StatusCode for details.
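
For example, after a successful infer() call (continuing the inference example), the results of every module can be drawn onto the inputs in place:

// Overlay the results of all modules onto the input images.
engine.resultVisualization(images);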

◆ setBatchSize() [1/2]

StatusCode mmind::dl::MMindInferEngine::setBatchSize ( const std::vector< unsigned int > &  batchSize)

Sets the batch size of the model package.

Parameters
[in] batchSize The batch sizes of all modules in the model package, one entry per module.
Returns
See StatusCode for details.
Note
This function sets the batch sizes of all modules at once.

◆ setBatchSize() [2/2]

StatusCode mmind::dl::MMindInferEngine::setBatchSize ( const unsigned int  batchSize,
const unsigned int  moduleIdx = 0 
)

Sets the batch size of the model package.

Parameters
[in] batchSize The batch size of the model package.
[in] moduleIdx The index of the specified module in the model package.
Returns
See StatusCode for details.
Note
This function sets only the batch size of the module at the specified index.
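
A sketch contrasting the two overloads; the batch size values are arbitrary examples:

// Set the batch size of the module at index 1 only.
engine.setBatchSize(4, 1);

// Or set the batch sizes of all modules at once, one entry per module.
engine.setBatchSize(std::vector<unsigned int>{4, 2});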

◆ setDeviceId()

StatusCode mmind::dl::MMindInferEngine::setDeviceId ( const unsigned int  deviceId)

Sets the device ID.

Parameters
[in] deviceId The index of the GPU used during model inference.
Returns
See StatusCode for details.
Note
When the InferDeviceType is set to CPU, the deviceId setting has no effect.
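
For example, to run inference on the second GPU (the enumerator scope qualification is assumed):

engine.setInferDeviceType(InferDeviceType::GpuDefault); // enumerator scope assumed
engine.setDeviceId(1); // second GPU; has no effect in CPU mode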

◆ setFloatPrecision() [1/2]

StatusCode mmind::dl::MMindInferEngine::setFloatPrecision ( const FloatPrecisionType  floatPrecisionType,
const unsigned int  moduleIdx = 0 
)

Sets the float precision of the model package.

Parameters
[in] floatPrecisionType The float precision of the model package; see FloatPrecisionType for details.
[in] moduleIdx The index of the specified module in the model package.
Returns
See StatusCode for details.
Note
This function sets only the float precision of the module at the specified index.

◆ setFloatPrecision() [2/2]

StatusCode mmind::dl::MMindInferEngine::setFloatPrecision ( const std::vector< FloatPrecisionType > &  floatPrecisionType)

Sets the float precision of the model package.

Parameters
[in] floatPrecisionType The float precision of the model package; see FloatPrecisionType for details.
Returns
See StatusCode for details.
Note
This function sets the float precision of all modules at once.
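
A sketch contrasting the two overloads; the FP16 and FP32 enumerator names are assumptions, so check FloatPrecisionType for the actual names:

// Set the float precision of the module at index 0 only.
engine.setFloatPrecision(FloatPrecisionType::FP16, 0); // enumerator name assumed

// Or set the precision of all modules at once, one entry per module.
engine.setFloatPrecision(std::vector<FloatPrecisionType>{
    FloatPrecisionType::FP16, FloatPrecisionType::FP32}); // enumerator names assumed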

◆ setInferDeviceType()

StatusCode mmind::dl::MMindInferEngine::setInferDeviceType ( const InferDeviceType  type)

Sets the infer device type.

Parameters
[in] type See InferDeviceType for details.
Returns
See StatusCode for details.
Note
In CPU mode, the deployment computer must have an Intel CPU; in GpuDefault or GpuOptimization mode, the deployment computer must have an NVIDIA GPU.
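
For example, to force CPU inference on a computer without an NVIDIA GPU (the enumerator scope qualification is assumed):

engine.setInferDeviceType(InferDeviceType::CPU); // requires an Intel CPU on the deployment computer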

The documentation for this class was generated from the following file:
MMindInferEngine.h