Mech-DLK SDK C++ API 2.0.2
C++ API reference documentation for secondary development with Mech-DLK
Defines the infer engine.
#include <MMindInferEngine.h>
Public Member Functions

MMindInferEngine ()
    Constructs the infer engine.
StatusCode setBatchSize (const unsigned int batchSize, const unsigned int moduleIdx=0)
    Sets the batch size of the model package.
StatusCode setBatchSize (const std::vector< unsigned int > &batchSize)
    Sets the batch size of the model package.
StatusCode setFloatPrecision (const FloatPrecisionType floatPrecisionType, const unsigned int moduleIdx=0)
    Sets the float precision of the model package.
StatusCode setFloatPrecision (const std::vector< FloatPrecisionType > &floatPrecisionType)
    Sets the float precision of the model package.
StatusCode setDeviceId (const unsigned int deviceId)
    Sets the device ID.
StatusCode setInferDeviceType (const InferDeviceType type)
    Sets the infer device type.
StatusCode create (const std::string &modelPath)
    Creates an infer engine for model package inference.
StatusCode load ()
    Loads the model into memory.
StatusCode infer (const std::vector< MMindImage > &images)
    Performs image inference using the model package inference engine.
StatusCode getResults (std::vector< MMindResult > &results)
    Gets the model inference results.
StatusCode resultVisualization (std::vector< MMindImage > &images)
    Draws all the model results onto the images.
StatusCode moduleResultVisualization (std::vector< MMindImage > &images, const unsigned int moduleIdx)
    Draws the results of the module with the specified index onto the images.
std::vector< DeepLearningAlgoType > getDeepLearningAlgoTypes () const
    Gets the model type list.
void release ()
    Releases the memory of the model package inference engine.
MMindInferEngine (const MMindInferEngine &rhs)=delete
MMindInferEngine (MMindInferEngine &&rhs)
MMindInferEngine & operator= (const MMindInferEngine &rhs)=delete
MMindInferEngine & operator= (MMindInferEngine &&rhs)
~MMindInferEngine ()
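Taken together, the members above suggest a call sequence of create, optional configuration, load, infer, getResults, and finally release. The sketch below illustrates that round trip; it assumes the Mech-DLK SDK headers are available, "model_package.dlkpack" is a placeholder path, and it omits the StatusCode checks that real code should perform after every call.

```cpp
// Minimal inference round trip with MMindInferEngine (sketch, not a
// definitive implementation; consult the SDK samples for your version).
#include <MMindInferEngine.h>
#include <vector>

int main()
{
    using namespace mmind::dl;

    MMindInferEngine engine;

    // Create the engine from a model package exported from Mech-DLK.
    // The path here is a placeholder.
    engine.create("model_package.dlkpack");

    // Load the model into memory.
    engine.load();

    // Prepare the input images. How MMindImage instances are obtained
    // depends on the SDK's image utilities and is not shown here.
    std::vector<MMindImage> images;
    // ... fill `images` ...

    // Run inference and collect the results.
    engine.infer(images);
    std::vector<MMindResult> results;
    engine.getResults(results);

    // Release the engine's memory when done.
    engine.release();
    return 0;
}
```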
Defines the infer engine.
mmind::dl::MMindInferEngine::MMindInferEngine ()
    Constructs the infer engine.

mmind::dl::MMindInferEngine::MMindInferEngine (const MMindInferEngine &rhs) = delete

mmind::dl::MMindInferEngine::MMindInferEngine (MMindInferEngine &&rhs)

mmind::dl::MMindInferEngine::~MMindInferEngine ()
StatusCode mmind::dl::MMindInferEngine::create (const std::string &modelPath)
    Creates an infer engine for model package inference.
    Parameters:
        [in] modelPath: The path to the model package exported from Mech-DLK.
std::vector< DeepLearningAlgoType > mmind::dl::MMindInferEngine::getDeepLearningAlgoTypes () const
    Gets the model type list.
StatusCode mmind::dl::MMindInferEngine::getResults (std::vector< MMindResult > &results)
    Gets the model inference results.
    Parameters:
        [out] results: See MMindResult for details.
StatusCode mmind::dl::MMindInferEngine::infer (const std::vector< MMindImage > &images)
    Performs image inference using the model package inference engine.
    Parameters:
        [in] images: See MMindImage for details.
StatusCode mmind::dl::MMindInferEngine::load ()
    Loads the model into memory.
StatusCode mmind::dl::MMindInferEngine::moduleResultVisualization (std::vector< MMindImage > &images, const unsigned int moduleIdx)
    Draws the results of the module with the specified index onto the images.
    Parameters:
        [in,out] images: See MMindImage for details.
        [in] moduleIdx: Specified module index in the model package.
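The difference between the two visualization calls can be sketched as follows, assuming `engine` has already run infer() on `images`: resultVisualization overlays the results of every module, while moduleResultVisualization restricts drawing to a single module.

```cpp
// Draw the results of every module in the model package onto the images:
engine.resultVisualization(images);

// Or draw only the results of the module at index 0:
engine.moduleResultVisualization(images, 0);
```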
MMindInferEngine & mmind::dl::MMindInferEngine::operator= (const MMindInferEngine &rhs) = delete

MMindInferEngine & mmind::dl::MMindInferEngine::operator= (MMindInferEngine &&rhs)
void mmind::dl::MMindInferEngine::release ()
    Releases the memory of the model package inference engine.
StatusCode mmind::dl::MMindInferEngine::resultVisualization (std::vector< MMindImage > &images)
    Draws all the model results onto the images.
    Parameters:
        [in,out] images: See MMindImage for details.
StatusCode mmind::dl::MMindInferEngine::setBatchSize (const std::vector< unsigned int > &batchSize)
    Sets the batch size of the model package.
    Parameters:
        [in] batchSize: The batch size of each module in the model package.
StatusCode mmind::dl::MMindInferEngine::setBatchSize (const unsigned int batchSize, const unsigned int moduleIdx = 0)
    Sets the batch size of the model package.
    Parameters:
        [in] batchSize: The batch size of the model package.
        [in] moduleIdx: Specified module index in the model package.
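The two overloads can express the same configuration. As a sketch, assuming a model package with two modules and an already constructed `engine`:

```cpp
// Per-module configuration with the scalar overload:
engine.setBatchSize(4, 0); // batch size 4 for module 0
engine.setBatchSize(2, 1); // batch size 2 for module 1

// Equivalent configuration with the vector overload,
// one entry per module in module-index order:
engine.setBatchSize(std::vector<unsigned int>{4, 2});
```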
StatusCode mmind::dl::MMindInferEngine::setDeviceId (const unsigned int deviceId)
    Sets the device ID.
    Parameters:
        [in] deviceId: The index of the specified GPU during model inference.
    Returns an error status when deviceId is invalid.

StatusCode mmind::dl::MMindInferEngine::setFloatPrecision (const FloatPrecisionType floatPrecisionType, const unsigned int moduleIdx = 0)
    Sets the float precision of the model package.
    Parameters:
        [in] floatPrecisionType: The float precision of the model package. See FloatPrecisionType for details.
        [in] moduleIdx: Specified module index in the model package.
StatusCode mmind::dl::MMindInferEngine::setFloatPrecision (const std::vector< FloatPrecisionType > &floatPrecisionType)
    Sets the float precision of the model package.
    Parameters:
        [in] floatPrecisionType: The float precision of each module in the model package. See FloatPrecisionType for details.
StatusCode mmind::dl::MMindInferEngine::setInferDeviceType (const InferDeviceType type)
    Sets the infer device type.
    Parameters:
        [in] type: See InferDeviceType for details.
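Device selection combines setInferDeviceType with setDeviceId. A sketch, assuming an already constructed `engine`; the enumerator name used here is an assumption, so check the InferDeviceType declaration in your SDK headers for the actual values:

```cpp
// Select GPU inference, then pick the second GPU (index 1).
// The enumerator name GpuDefault is assumed, not confirmed by this page.
engine.setInferDeviceType(InferDeviceType::GpuDefault);

// setDeviceId returns an error status if the GPU index is invalid,
// so the return value is worth checking here.
StatusCode code = engine.setDeviceId(1);
```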