Before you begin

Episode 1 introduced gesture recognition using a Raspberry Pi and explained some of the fundamental concepts of machine learning. However, you can still follow this guide even if you skipped Episode 1.

This guide assumes some familiarity with Keras and neural network training. This Keras getting started guide gives an overview of the Keras Sequential model.
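
If you have not used the Sequential model before, the following minimal sketch shows its general shape. The layer sizes, input shape and class count here are illustrative assumptions, not the values used later in this guide:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# Build a small convolutional classifier: convolution and pooling to
# extract image features, then a dense softmax layer to predict class
# probabilities.
model = Sequential([
    Conv2D(16, (3, 3), activation='relu', input_shape=(128, 128, 3)),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(10, activation='softmax'),  # one output per class (illustrative)
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
model.summary()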

Install TensorFlow 

For a Raspbian base install, the only dependency that you need to add is TensorFlow, from Google's binaries. First, install some TensorFlow prerequisites by entering the following on the command line:

sudo apt-get install libblas-dev liblapack-dev python-dev libatlas-base-dev gfortran python-setuptools python-h5py  

The exact URL of the current TensorFlow build varies between versions. Go to the TensorFlow Raspberry Pi 3 build page to find the current link:

sudo pip2 install <insert .whl link from the build page here>
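
Once the wheel has installed, a quick way to check that TensorFlow imports correctly is to print its version. The exact version string will vary with the build you chose:

python2 -c "import tensorflow as tf; print(tf.__version__)"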

Install Arm's training scripts

Download or clone our ML examples repository from GitHub by entering the following on the command line:

git clone https://github.com/ARM-software/ML-examples.git
cd ML-examples/multi-gesture-recognition

These scripts are designed to be easy to understand and modify. Feel free to explore and hack them with your own changes.

  • Advanced information - descriptions of the scripts

    The Python source code is designed to be straightforward to follow:

    • preview.py shows the current image from the camera to check it is pointing in the right direction.
    • record.py captures images from the camera and saves them to disk.
    • classify.py provides a basic GUI for labelling recorded images.
    • merge.py combines classified images from multiple directories into one directory.
    • validate_split.py separates out 10% of the images in a directory for use as a validation set.
    • train.py trains a convolutional neural network to predict classes from images.
    • test.py reports the accuracy of a trained classifier on a particular directory of images.
    • run.py captures images from the camera, classifies them and prints the probabilities of each class (see the sketch after this list).
    • story.py is the script used to drive the live demo, turning lights on and off, playing music and executing remote commands. You will need to modify this to make it work in your environment.
    • camera.py initializes the picamera module and optionally fluctuates the exposure and white balance during recording.
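
    To make the flow concrete, here is a hedged sketch of what a run.py-style classification step might look like: load a trained model, capture a single frame with the picamera module and print per-class probabilities. The model filename and capture size are assumptions for illustration, not taken from the actual script:

    import numpy as np
    from picamera import PiCamera
    from picamera.array import PiRGBArray
    from keras.models import load_model

    model = load_model('model.h5')            # assumed filename
    camera = PiCamera(resolution=(128, 128))  # assumed capture size

    with PiRGBArray(camera) as output:
        camera.capture(output, format='rgb')
        # Scale pixel values to [0, 1] and add a batch dimension.
        frame = output.array.astype('float32') / 255.0
        probabilities = model.predict(np.expand_dims(frame, axis=0))[0]
        for index, p in enumerate(probabilities):
            print('class %d: %.3f' % (index, p))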
