Optimize and load onto compute device
Arm NN supports optimized execution on multiple devices, including CPU and GPU. Before you start executing a graph, you need to select the appropriate device context and optimize the graph for that device.
Making use of an Arm Mali GPU is as simple as listing Compute::GpuAcc as the preferred backend when optimizing the network; no other changes are required.
// Create a context and optimize the network for one or more compute devices, in order of preference
// e.g. GpuAcc, CpuAcc = if available, run on the Arm Mali GPU, else try to run on the Armv7 or Armv8 CPU
armnn::IRuntime::CreationOptions options;
armnn::IRuntimePtr context = armnn::IRuntime::Create(options);
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(*net,
                                                     {armnn::Compute::GpuAcc, armnn::Compute::CpuAcc},
                                                     context->GetDeviceSpec());

// Load the optimized network onto the device
armnn::NetworkId networkIdentifier;
context->LoadNetwork(networkIdentifier, std::move(optNet));
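Both steps can fail at run time, for example when no GPU backend is available on the target. The following is a minimal sketch of how you might check the results before running inference, assuming an Arm NN release in which armnn::Optimize accepts optional OptimizerOptions and an error-message output, and LoadNetwork returns an armnn::Status. The CpuRef fallback backend shown here is an assumption for illustration, not part of the original example.

// Sketch: defensive error handling around optimization and loading.
// Assumes <iostream>, <vector>, <string> and armnn/ArmNN.hpp are included.
// The CpuRef fallback and the error-message parameter of Optimize are assumptions
// about the Arm NN version in use, not part of the original example.
std::vector<std::string> errorMessages;
armnn::IOptimizedNetworkPtr optNet = armnn::Optimize(
    *net,
    {armnn::Compute::GpuAcc, armnn::Compute::CpuAcc, armnn::Compute::CpuRef},
    context->GetDeviceSpec(),
    armnn::OptimizerOptions(),
    armnn::Optional<std::vector<std::string>&>(errorMessages));

if (!optNet)
{
    // None of the requested backends could run the network
    for (const std::string& msg : errorMessages)
    {
        std::cerr << msg << std::endl;
    }
    return 1;
}

armnn::NetworkId networkIdentifier;
if (context->LoadNetwork(networkIdentifier, std::move(optNet)) != armnn::Status::Success)
{
    std::cerr << "Failed to load the optimized network onto the device" << std::endl;
    return 1;
}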