Loading the model into the Arm NN SDK runtime

If everything has been done correctly, your network is now ready for use with the Arm NN SDK. For a detailed example of how to load your model into the Arm NN SDK runtime, see Deploying a Caffe MNIST model using the Arm NN SDK or Deploying a TensorFlow MNIST model on Arm NN. This documentation includes example code.  

Briefly, to load your model into the Arm NN runtime, you must complete the following steps (a code sketch follows the list):

  1. Link to the appropriate parser for your framework of choice and the core Arm NN runtime.
  2. Instantiate the parser.
  3. Load the model file using the parser.
  4. Create a runtime object using the IRuntime::Create() function.
  5. Pass the input network object and the DeviceSpec object to the Optimize() function. The parser produces the input network object, and you can query the DeviceSpec object using the IRuntime::GetDeviceSpec() function.
  6. Load the IOptimizedNetwork object that the Optimize() function produces into the runtime using the IRuntime::LoadNetwork() function. 
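The following is a minimal sketch of these steps using the TensorFlow parser. The file name model.pb, the tensor names "input" and "output", the 1x28x28x1 input shape, and the CpuRef backend are hypothetical placeholders; substitute the values that match your own model and target device.

```cpp
// Minimal sketch: load a frozen TensorFlow model into the Arm NN runtime.
// "model.pb", "input", "output", and the tensor shape below are
// placeholders -- replace them with your model's actual values.
#include <armnn/ArmNN.hpp>
#include <armnnTfParser/ITfParser.hpp>

int main()
{
    // Step 2: instantiate the parser for your framework of choice.
    armnnTfParser::ITfParserPtr parser = armnnTfParser::ITfParser::Create();

    // Step 3: load the model file; the parser produces the input network.
    armnn::INetworkPtr network = parser->CreateNetworkFromBinaryFile(
        "model.pb",
        { { "input", armnn::TensorShape({ 1, 28, 28, 1 }) } },
        { "output" });

    // Step 4: create the runtime object.
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr runtime = armnn::IRuntime::Create(options);

    // Step 5: optimize the input network for the devices that the
    // runtime reports through its DeviceSpec.
    armnn::IOptimizedNetworkPtr optimizedNet = armnn::Optimize(
        *network, { armnn::Compute::CpuRef }, runtime->GetDeviceSpec());

    // Step 6: load the optimized network into the runtime.
    armnn::NetworkId networkId;
    runtime->LoadNetwork(networkId, std::move(optimizedNet));

    return 0;
}
```

After LoadNetwork() succeeds, the networkId identifies the loaded network in subsequent calls such as EnqueueWorkload(); the deployment tutorials referenced above show the full inference flow.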