How to Use AI Benchmark

This tutorial will guide you through using AI Benchmark on the ROScube series.

Introduction

We use AI Benchmark to evaluate GPU performance on the ROScube series.

AI Benchmark Alpha is an open-source Python library for evaluating the AI performance of various hardware platforms, including CPUs, GPUs, and TPUs.

Usage

Here we provide two procedures, one for each group of ROScube series.

Note

A GPU with at least 2 GB of RAM is required for running inference tests, and at least 4 GB of RAM for training tests.
The benchmark is compatible with both TensorFlow 1.x and 2.x versions.
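
Before running the benchmark, it is worth confirming that TensorFlow can actually see the GPU; otherwise the tests run on the CPU and the scores will not reflect GPU performance. A minimal check, assuming TensorFlow 2.x is installed:

import tensorflow as tf

# List the GPUs TensorFlow can see; an empty list means the benchmark
# would run on the CPU instead of the GPU.
print('TensorFlow version:', tf.__version__)
print('Visible GPUs:', tf.config.list_physical_devices('GPU'))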

ROScube-X series and ROScube-Pico series:

  1. Install JetPack

  2. Install TensorFlow

  3. Install ai-benchmark by terminal command:

pip install ai-benchmark

  4. Use the following Python code to run the benchmark:

from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
results = benchmark.run()

To run inference or training only, use benchmark.run_inference() or benchmark.run_training().
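
For example, a minimal inference-only run might look like this (the training-only case is analogous):

from ai_benchmark import AIBenchmark

# Run only the inference tests; call benchmark.run_training()
# instead to run only the training tests.
benchmark = AIBenchmark()
inference_results = benchmark.run_inference()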

ROScube-I series:

Requirements:

  • Python: 3.8

  • Keras: 2.6

  • TensorFlow: 2.6

  • CUDA: 11.4

  • cuDNN: 8.2

  • NVIDIA driver: >= 470

  1. Install the GPU driver.

  2. Download CUDA from the NVIDIA website.

  3. Download and install cuDNN.

  4. Install TensorFlow by terminal command:

pip install tensorflow==<version>

  5. Install ai-benchmark by terminal command:

pip install ai-benchmark

  6. Use the following Python code to run the benchmark:

from ai_benchmark import AIBenchmark

benchmark = AIBenchmark()
results = benchmark.run()

To run inference or training only, use benchmark.run_inference() or benchmark.run_training().
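
Once everything is installed, you can verify that the environment matches the requirements above. A minimal sketch, assuming a CUDA-enabled TensorFlow 2.x build:

import platform
import tensorflow as tf

# Versions to compare against the requirements above
# (Python 3.8, Keras 2.6, TensorFlow 2.6, CUDA 11.4, cuDNN 8.2).
print('Python:', platform.python_version())
print('TensorFlow:', tf.__version__)
print('Keras (bundled):', tf.keras.__version__)

# CUDA/cuDNN versions this TensorFlow build was compiled against.
build_info = tf.sysconfig.get_build_info()
print('CUDA:', build_info.get('cuda_version'))
print('cuDNN:', build_info.get('cudnn_version'))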

Results

In total, AI Benchmark consists of 42 tests and 19 sections.

After testing, you will get the following scores describing GPU performance:

  • Inference Score

  • Training Score

  • AI Score

Then go to the ranking page to compare your device against the published results.

We also provide some test data measured on the ROScube series:

Common Issue

If you run the Python code but the CUDA version is shown as N/A, try the following:

  1. Make sure CUDA is installed; you should find it under /usr/local/cuda*

  2. Open ~/.bashrc with gedit by terminal command:

gedit ~/.bashrc

  3. Add the CUDA paths to ~/.bashrc:

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH

  4. Refresh the shell and check CUDA:

source ~/.bashrc
nvcc -V
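
If the paths are set correctly, a new terminal should also let TensorFlow find CUDA. A quick Python check, assuming a CUDA-enabled TensorFlow build:

import os
import tensorflow as tf

# The CUDA directories added to ~/.bashrc should now be in the environment.
print('PATH has CUDA:', '/usr/local/cuda' in os.environ.get('PATH', ''))
print('LD_LIBRARY_PATH has CUDA:', '/usr/local/cuda' in os.environ.get('LD_LIBRARY_PATH', ''))

# TensorFlow should report a CUDA build.
print('Built with CUDA:', tf.test.is_built_with_cuda())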