ONNX
Open Neural Network Exchange
Last updated
Inputs and outputs are created automatically when the model is loaded. Multiple models can be uploaded and connected to other components. The inference device can be set to CPU or GPU (CUDA) as the execution provider.
Loads an ONNX model from the file system.
The ONNX model's input must be an array.
For more information, see the ONNX documentation.