Open Neural Network Exchange

Inputs and outputs are created automatically when the model is loaded. Multiple models can be loaded and connected to other components. The inference device can be set to CPU or GPU (CUDA) by choosing the execution provider.

Loads an ONNX model from the file system.

The ONNX model's input must be an array.

System Ports

For more information, read here
