Run graph on device

Running inference on the compute device is performed through the runtime's EnqueueWorkload function:

Code to run a single inference on a test image

Here the input tensor is bound to the image data, the output tensor is bound to a results buffer, and the loaded network is selected by its identifier. The result of the inference can be read directly from the output array and compared to the MnistImage label we read from the data file:
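This step might look roughly like the sketch below. The names `runtime`, `networkIdentifier`, an `MnistImage` value `input`, and a `float output[10]` buffer are assumed from earlier in the guide; treat this as an illustration of the Arm NN calls, not the guide's exact listing:

```cpp
// Bind the network's input (binding id 0) to the image pixels, and its
// output (binding id 0) to our result buffer. The TensorInfo queries
// return the shapes the loaded network expects.
armnn::InputTensors inputTensors{
    {0, armnn::ConstTensor(runtime->GetInputTensorInfo(networkIdentifier, 0),
                           input.image)}};
armnn::OutputTensors outputTensors{
    {0, armnn::Tensor(runtime->GetOutputTensorInfo(networkIdentifier, 0),
                      output)}};

// Run a single inference on the chosen compute device.
armnn::Status status =
    runtime->EnqueueWorkload(networkIdentifier, inputTensors, outputTensors);
```

EnqueueWorkload blocks until the inference completes, so `output` can be read as soon as the call returns.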

Convert one-hot output to an integer label and print it

In this case, std::distance is combined with std::max_element to find the index of the largest element in the output array, the equivalent of NumPy's argmax function.
