Loading the model into the Arm NN SDK runtime
If everything has been done correctly, your network is now ready for use with the Arm NN SDK. For detailed examples of how to load your model into the Arm NN SDK runtime, see Deploying a Caffe MNIST model using the Arm NN SDK or Deploying a TensorFlow MNIST model on Arm NN. Both guides include example code.
Briefly, to load your model into the Arm NN runtime, you must complete the following steps:
- Link to the appropriate parser for your framework of choice and the core Arm NN runtime.
- Instantiate the parser.
- Load the model file using the parser.
- Create an `IRuntime` object using the `IRuntime::Create()` function.
- Pass the `DeviceSpec` object and the input network object to the `Optimize()` function. The parser produces the input network object, and you can query the `DeviceSpec` object using the `IRuntime::GetDeviceSpec()` function.
- Load the `IOptimizedNetwork` object that the `Optimize()` function produces into the runtime using the `IRuntime::LoadNetwork()` function, as shown in the sketch below.
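The following sketch pulls these steps together using the TensorFlow parser, which is available in Arm NN releases that include TensorFlow support. It is a minimal illustration under stated assumptions, not production code: the file name `model.pb`, the tensor names `input_tensor` and `output_tensor`, and the 1x28x28x1 input shape are placeholders you must replace with values from your own network, and `CpuRef` is just one example backend preference.

```cpp
#include <armnn/ArmNN.hpp>
#include <armnnTfParser/ITfParser.hpp>

#include <map>
#include <string>
#include <utility>
#include <vector>

int main()
{
    // Instantiate the parser for the framework of choice (TensorFlow here).
    armnnTfParser::ITfParserPtr parser = armnnTfParser::ITfParser::Create();

    // Load the model file. The file name, tensor names, and input shape
    // below are placeholders - substitute the values for your network.
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
        "model.pb",
        { { "input_tensor", armnn::TensorShape({ 1, 28, 28, 1 }) } },
        { "output_tensor" });

    // Create the runtime object.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

    // Optimize the input network, querying the runtime for its DeviceSpec.
    // CpuRef is the reference backend; on real hardware you would normally
    // prefer CpuAcc or GpuAcc.
    armnn::IOptimizedNetworkPtr optimizedNetwork = armnn::Optimize(
        *network,
        { armnn::Compute::CpuRef },
        runtime->GetDeviceSpec());

    // Load the optimized network into the runtime. The returned network
    // identifier is used later when running inference on this network.
    armnn::NetworkId networkId = 0;
    armnn::Status status =
        runtime->LoadNetwork(networkId, std::move(optimizedNetwork));

    return (status == armnn::Status::Success) ? 0 : 1;
}
```

If you use a different framework, swap in the corresponding parser (for example, `armnnCaffeParser::ICaffeParser` for Caffe models); the rest of the flow is unchanged.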