Supplementary information: model training

Prepare to build TensorFlow

The code that we used to train our voice model currently depends on some experimental operations that are only available when TensorFlow is built from source, so we will have to build TensorFlow ourselves.

The easiest way to build TensorFlow from source is to use Docker. Docker is a tool that lets you run tasks inside an isolated environment, called a container, that is separate from the rest of your computer, which makes dependency management easier. TensorFlow provides a custom Docker image that can be used to build the toolchain from source.

The first step is to install Docker by following its official installation instructions.

Once Docker is installed, run the following command to test that it works:

docker run hello-world

You should see a message starting with “Hello from Docker!”.

Next, use the following command to download the latest TensorFlow development Docker image, which contains the TensorFlow source:

docker pull tensorflow/tensorflow:devel

Visit TensorFlow’s Docker images for more information.
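If you want to check that the image downloaded successfully, you can ask Docker to list the TensorFlow images it has stored locally; you should see an entry with the devel tag:

docker images tensorflow/tensorflow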

Now, run the following command to connect to your Docker instance and open a shell:

docker run -it -w /tensorflow_src -v $PWD:/mnt tensorflow/tensorflow:devel bash

You should now be at the command line of the TensorFlow Docker container, in the directory that contains the TensorFlow source code. Issue the following commands to fetch the very latest code and install some required Python dependencies:

git fetch
git rebase origin master
pip install -U --user pip six numpy wheel setuptools mock tensorflow_estimator
pip install -U --user keras_applications==1.0.6 --no-deps
pip install -U --user keras_preprocessing==1.0.5 --no-deps
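
TensorFlow builds with the Bazel build tool, which should already be installed in the devel image. If you want to be sure the toolchain is available before you start configuring, you can check its version:

bazel version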

We now need to configure the build. Running the following command from the root of the TensorFlow repo will start configuration. You will be asked a series of questions. Just hit return at every prompt to accept the default option.

./configure
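
If you would rather not press return once per question, one shortcut that typically works is to pipe empty answers into the script so that every prompt falls back to its default. This assumes the defaults are what you want; if any prompt needs a non-default answer, run ./configure interactively instead:

yes "" | ./configure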

Once configuration is done, we are ready to go.

Train the model

The following command will build TensorFlow from source and then start training.

Note: The build will take several hours. To save time, you can download the tiny_conv.pb file and skip ahead to the “Convert the model to the TensorFlow Lite format” section below.

bazel run -c opt --copt=-mavx2 --copt=-mfma tensorflow/examples/speech_commands:train -- --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="up,down" --silence_percentage=25 --unknown_percentage=25 --quantize=1

Notice how the wanted_words argument contains the words “up” and “down”. You can add any of the ten available words to this field, separated by commas.
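
For example, to train a model that recognizes “on” and “off” instead, you could run the training step like this (just an illustration; if you change the words here, remember to use the same wanted_words value in the freeze and conversion steps later):

bazel run -c opt --copt=-mavx2 --copt=-mfma tensorflow/examples/speech_commands:train -- --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="on,off" --silence_percentage=25 --unknown_percentage=25 --quantize=1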

On older CPUs, you can leave out the --copt arguments; they are only there to accelerate training on chips that support the AVX2 and FMA extensions.
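
If you are not sure whether your CPU supports these extensions, one quick check from inside the Docker container (which runs Linux) is to search the processor flags. If the command below prints nothing, leave out the --copt arguments:

grep -o 'avx2\|fma' /proc/cpuinfo | sort -u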

The process will take a couple of hours. While you wait, you can take a look at a more detailed overview of the speech model that we are training.
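
If training is interrupted, for example because the Docker container is stopped, it should be possible to resume from the most recent checkpoint rather than starting from scratch. The training script saves its checkpoints to /tmp/speech_commands_train by default; the step number 100 below is only a placeholder, so replace it with the highest-numbered checkpoint you find in that directory:

bazel run -c opt --copt=-mavx2 --copt=-mfma tensorflow/examples/speech_commands:train -- --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="up,down" --silence_percentage=25 --unknown_percentage=25 --quantize=1 --start_checkpoint=/tmp/speech_commands_train/tiny_conv.ckpt-100  # ckpt-100 is a placeholder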

Freeze the model

We need to perform a few extra steps to be able to run the model directly on our microcontroller. Once your model is trained, run the following command to create a single “frozen graph” file that represents it.

Note: we need to provide our wanted_words argument again:

bazel run tensorflow/examples/speech_commands:freeze -- --model_architecture=tiny_conv --window_stride=20 --preprocess=micro --wanted_words="up,down" --quantize=1 --output_file=/tmp/tiny_conv.pb --start_checkpoint=/tmp/speech_commands_train/tiny_conv.ckpt-18000
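
The --start_checkpoint path above assumes the default training schedule of 18,000 steps, which ends with a checkpoint named tiny_conv.ckpt-18000. If your training run stopped at a different step, list the training directory and substitute the newest checkpoint number you see:

ls /tmp/speech_commands_train/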

You now have a file, /tmp/tiny_conv.pb, that represents the model. This is great, but since we are deploying the model on a tiny device, we need to do everything that we can to make it as small and simple as possible.

Convert the model to the TensorFlow Lite format 

To obtain a converted model that can run on the microcontroller itself, we need to run the TensorFlow Lite converter. This tool uses clever tricks to make our model as small and efficient as possible and to convert it to a TensorFlow Lite FlatBuffer. To reduce the size of the model, we use a technique called quantization: all weights and activations in the model are converted from 32-bit floating point to an 8-bit fixed-point format. This not only reduces the size of the network, it also avoids floating-point computations, which are more computationally expensive.

Run the following command to perform the conversion:

bazel run tensorflow/lite/toco:toco -- --input_file=/tmp/tiny_conv.pb --output_file=/tmp/tiny_conv.tflite --input_shapes=1,49,40,1 --input_arrays=Reshape_1 --output_arrays='labels_softmax' --inference_type=QUANTIZED_UINT8 --mean_values=0 --std_values=9.8077
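
If the conversion succeeds, a small new file should appear in /tmp. You can confirm that it exists and check its size before copying it out of the container:

ls -l /tmp/tiny_conv.tflite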

You should now have a /tmp/tiny_conv.tflite file. We need to copy this file from our Docker instance to the host machine. To do this, run the following command:

cp /tmp/tiny_conv.tflite /mnt

This will place the file in the directory that you were in when you first ran the command to connect to Docker. For example, if you ran the command from ~/Desktop, the file will be at ~/Desktop/tiny_conv.tflite.

To leave the Docker instance and get back to your regular command line, type the following:

exit