Run the object detection demo

This section of the guide shows how to install, configure, and run the components that are required by the object detection demonstration.

Note: Ensure that you have completed the instructions in Before you begin before following the steps in this section of the guide.

The following steps show how to run the object detection demo:

  1. Download the Arm ML examples on the HiKey 960.

    cd $HOME/armnn-devenv
    git clone https://github.com/ARM-software/ML-examples.git 
  2. Make space on the HiKey 960 root partition.

    The instructions in this guide install several packages. We must ensure there is enough space on the HiKey device root partition for these packages. To create space, move some of the larger system files from the root partition to the user partition.

    Run the make_space.sh script to move /var/cache and /usr/lib/aarch64-linux-gnu from the root partition to the user partition:

    cd $HOME/armnn-devenv/ML-examples/autoware-vision-detector
    ./scripts/make_space.sh 
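
    If you want to confirm that the script freed enough space, you can check the available space on the root partition with the standard df utility (an optional check, not part of the script):

    # Show the available space on the root partition
    df -h /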
  3. Install the Robot Operating System (ROS) on the HiKey 960.

    Autoware.AI is based on the Robot Operating System, a flexible framework for writing robot software. Before we build Autoware.AI, we must install ROS Kinetic.

    The autoware-vision-detector example in the Arm ML examples repository includes a helper script to install ROS. Run the helper script as follows:

    cd $HOME/armnn-devenv/ML-examples/autoware-vision-detector
    ./scripts/ros_setup.sh

    The ros_setup.sh script does the following:

    • Downloads the Autoware.AI source code
    • Removes unneeded Autoware.AI modules to simplify the source and dependency tree
    • Installs all Autoware dependencies
    • Copies Arm NN include files and libraries into /opt/armnn. The object detection module expects these files to be in this location.


    Note: The ROS.org installation instructions include more information about how to install ROS Kinetic on Ubuntu.
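
    To confirm that ROS installed correctly, you can source the ROS environment and query the installed distribution (an optional check, not part of the helper script):

    # Load the ROS Kinetic environment, then print the installed distribution name
    source /opt/ros/kinetic/setup.sh
    rosversion -d

    If the installation succeeded, the rosversion command prints kinetic.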

  4. Build Autoware.AI on the HiKey 960.

    The autoware-vision-detector example in the Arm ML examples repository also includes a helper script, autoware_setup.sh, to build Autoware.AI.

    Run the helper script, and then copy the example into the Autoware source tree, as follows:

    cd $HOME/armnn-devenv
    ./ML-examples/autoware-vision-detector/scripts/autoware_setup.sh
    mkdir -p $HOME/armnn-devenv/autoware/src/arm/vision_detector
    cd $HOME/armnn-devenv/ML-examples/autoware-vision-detector
    cp -r * $HOME/armnn-devenv/autoware/src/arm/vision_detector/

    Note: The Autoware.AI source build instructions include more information about how to install Autoware.AI.
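
    At this point, you can optionally verify that the example sources are in place and that the Arm NN files are where the object detection module expects them:

    # Confirm the vision_detector sources were copied into the Autoware source tree
    ls $HOME/armnn-devenv/autoware/src/arm/vision_detector
    # Confirm the Arm NN headers and libraries are installed in /opt/armnn
    ls /opt/armnn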

  5. Prepare the Tiny YOLO v2 neural network on the host machine.

    This guide uses the Tiny YOLO v2 neural network.

    You Only Look Once (YOLO) is a network for object detection. Object detection identifies the presence and location of certain objects in an image and classifies those objects. Tiny YOLO is a variation of YOLO that offers a smaller model size and faster inference speed, making it naturally suited to embedded computer vision and deep learning devices.

    Before Arm NN can parse the network, we must convert the network from its original darknet format to the TensorFlow protobuf format. To conserve disk space on the HiKey device, we perform this conversion on the host machine.

    The autoware-vision-detector example in the Arm ML examples repository includes a helper script to download and convert the Tiny YOLO v2 neural network. Run the helper script as follows:

    cd $HOME
    git clone https://github.com/ARM-software/ML-examples.git
    cd $HOME/ML-examples/autoware-vision-detector
    ./scripts/get_yolo_tiny_v2.sh

    The script creates a file called yolo_v2_tiny.pb in the current directory.

    Note: The helper script installs TensorFlow 1.15 if it is not already available on the host machine. For information about which versions of Python 3 are compatible with TensorFlow 1.15, see Install TensorFlow with pip. We tested this guide using Python 3.6.
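
    Because TensorFlow 1.15 is required, you may want to confirm which TensorFlow version the host machine uses before running the script:

    # Print the installed TensorFlow version; the script expects 1.15
    python3 -c "import tensorflow as tf; print(tf.__version__)"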

  6. Copy the Tiny YOLO v2 neural network from the host machine to the HiKey 960.

    Use the scp command to copy the yolo_v2_tiny.pb file created by the previous step to the HiKey 960 device:

    cd $HOME/ML-examples/autoware-vision-detector
    scp yolo_v2_tiny.pb \
       arm01@<hikey_ip_address>:~/armnn-devenv/autoware/src/arm/vision_detector/models

    Note: For more information about how to convert YOLO graphs to the TensorFlow protobuf format, see the darkflow documentation.
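
    If scp reports that the destination directory does not exist, you can create it over SSH and retry the copy (this assumes the same arm01 user and IP address placeholder as above):

    # Create the models directory on the HiKey 960 if it is missing
    ssh arm01@<hikey_ip_address> 'mkdir -p ~/armnn-devenv/autoware/src/arm/vision_detector/models'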

  7. Build the vision detector node on the HiKey 960.

    To build the vision detector node, run the following commands:

    cd $HOME/armnn-devenv/autoware
    source /opt/ros/kinetic/setup.sh
    colcon build --packages-up-to vision_detector

    You can test the build by running unit tests, as follows:

    colcon test --packages-select vision_detector --event-handlers console_cohesion+
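
    After the tests complete, you can summarize the results with the colcon test-result verb:

    # Print a summary of the unit test results, including any failures
    colcon test-result --verbose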
  8. Download the demonstration data on the HiKey 960.

    The vision detector node takes images from the /image_raw topic and outputs detections on the /vision_detector/objects topic.

    ROS uses message passing with topics for inter-process communication. In ROS, each node runs independently. One node writes, or publishes, messages into a topic while another node reads, or subscribes to, the messages of the same topic.

    To provide input for the demo, we download some images from the KITTI dataset and send them to the /image_raw topic using the images_to_rosbag.py helper script. The KITTI data is a series of PNG image files, which the images_to_rosbag.py script packages into a rosbag for easy integration with ROS. A rosbag is a .bag file that stores timestamped ROS messages recorded from topics. Developers can use the official rosbag tool to read, filter, and extract message data from rosbag files.

    Download the images and run the images_to_rosbag.py helper script as follows:

    cd $HOME/armnn-devenv
    mkdir images
    cd images
    wget https://s3.eu-central-1.amazonaws.com/avg-kitti/raw_data/2011_09_26_drive_0106/2011_09_26_drive_0106_sync.zip
    unzip 2011_09_26_drive_0106_sync.zip
    ../ML-examples/autoware-vision-detector/scripts/images_to_rosbag.py \
       2011_09_26/2011_09_26_drive_0106_sync/image_02/data/ demo.rosbag
    cp demo.rosbag $HOME/armnn-devenv/autoware/

    Note: The preceding code downloads 2011_09_26_drive_0106_sync.zip as an example, but you can choose any sequence from the KITTI dataset.
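
    You can optionally inspect the newly created rosbag to confirm that the images were packaged correctly:

    # List the topics, message types, and message counts stored in the bag
    rosbag info demo.rosbag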

  9. Run the demo on the HiKey 960.

    To run the demo, we must do the following:

    • Launch the object detection node
    • Launch web_video_server to view the output
    • Play the newly created rosbag in a video loop

    A helper script, run_demo.sh, performs the tasks that are described in this step.

    Run the script from the root of your Autoware source tree, which is the same location where you copied the rosbag file, as follows:

    cd ~/armnn-devenv/autoware
    ~/armnn-devenv/ML-examples/autoware-vision-detector/scripts/run_demo.sh

    You will see continuous output on the command-line console to indicate that the demo is running.
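
    While the demo is running, you can optionally open a second shell on the HiKey 960 and inspect the ROS topics to confirm that detections are being published:

    # Load the ROS environment in the new shell
    source /opt/ros/kinetic/setup.sh
    # List the active topics, then measure the publishing rate of the detections
    rostopic list
    rostopic hz /vision_detector/objects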

  10. View the results in a web browser.

    Open the following URL in your favorite web browser:

    http://<ip address of the HiKey>:8080/stream?topic=/vision_detector/image_rects

    The web browser shows the looping video with real-time object detection overlaid on each frame.
