Moving deep neural networks from the cloud to mobile devices promises lower bandwidth usage and reduced latency, opening up many exciting new use cases for the technology. The network was trained on the open-source MNIST handwritten digits dataset. Both the training and the detection phases ran on an Arm Mali GPU using OpenCL for GPU Compute, which allows the algorithms to execute on the GPU alongside the CPU.
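The source does not describe the network's architecture, so as a minimal sketch, the inference step for an MNIST-style classifier can be illustrated with a tiny fully connected network in numpy; the layer sizes and random weights below are placeholders, not the demo's actual trained model. The dense matrix multiplies in each layer are exactly the kind of data-parallel work that OpenCL offloads to the GPU.

```python
import numpy as np

# Illustrative sketch only: a tiny fully connected classifier for
# flattened 28x28 MNIST digits. Weights are random stand-ins, not the
# demo's trained parameters.
rng = np.random.default_rng(0)

W1 = rng.standard_normal((784, 128)) * 0.01  # input -> hidden
b1 = np.zeros(128)
W2 = rng.standard_normal((128, 10)) * 0.01   # hidden -> 10 digit classes
b2 = np.zeros(10)

def forward(x):
    """One forward pass: matrix multiplies plus ReLU and softmax.
    On the demo hardware, these multiplies would be dispatched to the
    GPU as OpenCL kernels rather than run here on the CPU."""
    h = np.maximum(x @ W1 + b1, 0.0)       # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

# A random vector standing in for a flattened 28x28 input image.
probs = forward(rng.random(784))
print(probs.argmax(), float(probs.sum()))
```

The softmax output is a probability distribution over the ten digit classes; the predicted digit is the index with the highest probability.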
The demo runs on a Samsung Chromebook 2 powered by the Samsung Exynos 5422 processor, which pairs Cortex-A15 and Cortex-A7 CPUs in a big.LITTLE configuration with a Mali-T628 MP6 GPU.