How to Use AI Benchmark¶
This tutorial will guide you through using AI Benchmark on the ROScube series.
Introduction¶
We use AI Benchmark to evaluate GPU performance on the ROScube series.
AI Benchmark Alpha is an open-source Python library for evaluating the AI performance of various hardware platforms, including CPUs, GPUs, and TPUs.
Usage¶
Below, we provide instructions for two different ROScube series.
ROScube-X series and ROScube-Pico series:¶
Install JetPack
Install TensorFlow
Install ai-benchmark with the terminal command:
pip install ai-benchmark
Use the following Python code to run the benchmark:
from ai_benchmark import AIBenchmark
benchmark = AIBenchmark()
results = benchmark.run()
To run inference or training only, use benchmark.run_inference() or benchmark.run_training().
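The full, inference-only, and training-only runs above can be selected with a small dispatcher; this is a sketch, and `pick_benchmark_method` is a hypothetical helper (only the `run`, `run_inference`, and `run_training` method names come from the steps above):

```python
def pick_benchmark_method(benchmark, mode="full"):
    """Return the benchmark entry point matching the requested mode.

    `benchmark` is expected to expose run(), run_inference(), and
    run_training(), as an AIBenchmark instance does.
    """
    methods = {
        "full": benchmark.run,
        "inference": benchmark.run_inference,
        "training": benchmark.run_training,
    }
    return methods[mode]

# Usage on a ROScube device (requires ai-benchmark to be installed):
# from ai_benchmark import AIBenchmark
# results = pick_benchmark_method(AIBenchmark(), "inference")()
```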
ROScube-I series:¶
Requirements:
Python: 3.8
Keras: 2.6
TensorFlow: 2.6
CUDA: 11.4
cuDNN: 8.2
NVIDIA driver: >= 470
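Before installing, it can help to check installed versions against this list; a minimal sketch with a hypothetical `meets_minimum` helper that compares dotted version strings:

```python
def meets_minimum(installed, required):
    """Return True if the installed dotted version is >= the required one."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    a, b = to_tuple(installed), to_tuple(required)
    # Pad the shorter tuple with zeros so "470" compares against "470.57"
    length = max(len(a), len(b))
    a += (0,) * (length - len(a))
    b += (0,) * (length - len(b))
    return a >= b

# Example: the NVIDIA driver requirement is >= 470
print(meets_minimum("470.57", "470"))  # True
print(meets_minimum("460.91", "470"))  # False
```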
Install the GPU driver
Download CUDA from the NVIDIA website.
Install TensorFlow with the terminal command:
pip install tensorflow==<version>
Install ai-benchmark with the terminal command:
pip install ai-benchmark
Use the following Python code to run the benchmark:
from ai_benchmark import AIBenchmark
benchmark = AIBenchmark()
results = benchmark.run()
To run inference or training only, use benchmark.run_inference() or benchmark.run_training().
Results¶
In total, AI Benchmark consists of 42 tests and 19 sections.
After testing, you will get the following GPU performance scores:
Inference Score
Training Score
AI Score
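The three scores are related: the overall AI Score combines the inference and training scores. A minimal sketch, assuming the combination is a simple sum (the score values here are made up for illustration):

```python
def ai_score(inference_score, training_score):
    # Assumption: the overall AI Score is the sum of the two sub-scores
    return inference_score + training_score

print(ai_score(1200, 1100))  # 2300
```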
Then go to the ranking page, where you can compare your device against publicly available results.
We also provide some testing data for ROScube.
Common Issue¶
If you run the Python code but the CUDA version is shown as N/A:
Make sure CUDA is installed; you can find it under
/usr/local/cuda*
Edit .bashrc with the terminal command:
gedit ~/.bashrc
Add the CUDA paths to ~/.bashrc:
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
Refresh and check CUDA:
source ~/.bashrc
nvcc -V
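After sourcing .bashrc, you can also check programmatically whether the CUDA directories made it onto the search paths; a minimal sketch (`dir_on_path` is a hypothetical helper):

```python
import os

def dir_on_path(directory, path_value):
    """Return True if `directory` appears as an entry in a PATH-style string."""
    return directory in path_value.split(os.pathsep)

# Check the live environment (prints False if the export lines are missing):
print(dir_on_path("/usr/local/cuda/bin", os.environ.get("PATH", "")))
print(dir_on_path("/usr/local/cuda/lib64", os.environ.get("LD_LIBRARY_PATH", "")))
```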