Optimize and load onto a compute device

Arm NN supports optimized execution on multiple devices, including CPU and GPU. Before you start executing a graph, you must select the appropriate device context and optimize the graph for that device.

Using an Arm Mali GPU is as simple as specifying Compute::GpuAcc when creating the context. No other changes are required.

The following code creates a runtime context, optimizes the network for your preferred compute devices, and loads the optimized network onto the device:

// Create a context and optimize the network for one or more compute devices, in order of preference
// e.g. GpuAcc, CpuAcc = if available, run on the Arm Mali GPU, else try to run on the Armv7 or Armv8 CPU
armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr context = armnn::IRuntime::Create(options);
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*net, {armnn::Compute::GpuAcc, armnn::Compute::CpuAcc}, context->GetDeviceSpec());
// Load the optimized network onto the device
armnn::NetworkId networkIdentifier;
context->LoadNetwork(networkIdentifier, std::move(optNet));
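Both Optimize and LoadNetwork can fail at runtime, for example when none of the requested backends is available on the target device. The sketch below adds basic error checking around the same calls; it assumes the network pointer net has already been built as in the earlier steps, and it is one possible way to handle failure, not the only one:

```cpp
#include <armnn/ArmNN.hpp>
#include <stdexcept>
#include <utility>

// Sketch: optimize and load with error checking.
// Assumes 'net' (an armnn::INetworkPtr) was parsed or built earlier.
void OptimizeAndLoad(armnn::INetworkPtr& net, armnn::NetworkId& networkIdentifier)
{
    armnn::IRuntime::CreationOptions options;
    armnn::IRuntimePtr context = armnn::IRuntime::Create(options);

    // Backends are tried in preference order: GPU first, then CPU
    armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
        *net,
        {armnn::Compute::GpuAcc, armnn::Compute::CpuAcc},
        context->GetDeviceSpec());
    if (!optNet)
    {
        // Optimization can fail if no backend supports the network's layers
        throw std::runtime_error("Optimize failed: no suitable backend for this network");
    }

    if (context->LoadNetwork(networkIdentifier, std::move(optNet)) != armnn::Status::Success)
    {
        throw std::runtime_error("LoadNetwork failed to load the network onto the device");
    }
}
```

Checking these results early gives a clear error at startup rather than an obscure failure later, when you try to enqueue inference workloads.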