Loading the model into the Arm NN SDK runtime
If the preceding steps have completed successfully, your network is now ready for use with the Arm NN SDK. For detailed, worked examples of loading a model into the Arm NN runtime, see Deploying a Caffe MNIST model using the Arm NN SDK or Deploying a TensorFlow MNIST model on Arm NN; both guides include example code.
Briefly, to load your model into the Arm NN runtime, you must complete the following steps:
- Link to the appropriate parser for your framework of choice and the core Arm NN runtime.
- Instantiate the parser.
- Load the model file using the parser.
- Create a RunTime object using the CreationOptions that you require.
- Pass the DeviceSpec object and the input network object to the Optimize() function. The parser produces the input network object, and you can query the DeviceSpec object using the GetDeviceSpec() function of the runtime object.
- Load the IOptimizedNetwork object that the Optimize() function produces into the runtime using the LoadNetwork() function of the runtime object.
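The steps above can be sketched in C++ as follows. This is a minimal illustration, not a complete application: the Caffe parser is used as the example framework, and the model file name (`model.prototxt`), input binding name (`data`), input shape, and output layer name (`prob`) are placeholders that depend on your model.

```cpp
#include "armnn/ArmNN.hpp"
#include "armnnCaffeParser/ICaffeParser.hpp"

#include <map>
#include <string>
#include <vector>

int main()
{
    // Instantiate the parser for the framework of choice (Caffe here).
    auto parser = armnnCaffeParser::ICaffeParser::Create();

    // Load the model file with the parser. The input shape and the
    // output layer name are model-specific placeholders.
    std::map<std::string, armnn::TensorShape> inputShapes{
        { "data", armnn::TensorShape({ 1, 1, 28, 28 }) } };
    std::vector<std::string> requestedOutputs{ "prob" };
    armnn::INetworkPtr network = parser->CreateNetworkFromTextFile(
        "model.prototxt", inputShapes, requestedOutputs);

    // Create the runtime with the CreationOptions that you require.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

    // Pass the input network and the runtime's DeviceSpec to Optimize(),
    // selecting the backend(s) to run on (the reference CPU backend here).
    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
        *network,
        { armnn::Compute::CpuRef },
        runtime->GetDeviceSpec());

    // Load the resulting IOptimizedNetwork into the runtime.
    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optNet));

    return 0;
}
```

After LoadNetwork() succeeds, the networkId handle identifies the loaded network in subsequent calls such as EnqueueWorkload() for inference.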