RT-Bench
An Extensible Benchmark Framework for Real-Time Applications
RT-Bench is a collection of popular benchmarks for real-time applications which have been restructured to be executed periodically.
RT-Bench is licensed under MIT license and integrates benchmark suites that are licensed according to the information contained in the corresponding folders.
RT-Bench is developed by researchers and collaborators affiliated with the Cyber-Physical Systems Lab at Boston University with contributions from the Chair of Cyber-Physical System in Production Engineering at TUM.
Navigating the documentation of the project can be counterintuitive at first, so this section will guide the user towards making the most of the available documentation.
All the documentation is accessible from the sidebar.
This section will explain the reasoning behind RT-Bench and present at a high level of abstraction how the framework works.
The framework lives fully in userspace and is composed of the RT-Bench Generator and a collection of scripts that form the optional Utils layer.
Many popular benchmark suites do not exhibit real-time features and must be restructured to gain them. RT-Bench is a framework that implements real-time features in a generic fashion, allowing different benchmarks (described in Available Benchmarks) to have these features out-of-the-box and accessible via the CLI.
To implement these features, RT-Bench follows some core principles:
To adhere to the above-mentioned principles, the benchmarks are required to implement their logic in the following functions:

benchmark_init
: Initialization of the benchmark environment, executed only once.

benchmark_execution
: Execution of the benchmark routines, executed periodically.

benchmark_teardown
: Cleanup of the benchmark environment, executed before exiting.

It is thus sufficient to split the benchmark's main function into these three functions to have a compatible benchmark. The effort required to convert a benchmark in this way depends on its logic; for the whole San Diego Vision Benchmarks suite, the conversion took ~300 SLOCs per benchmark.
Note: RT-Bench makes no assumptions about what is executed by these functions, so individual benchmarks may have additional dependencies or behave in a non-standard way. These details are documented in each benchmark's module page.
Once the benchmark is converted, compiled, and linked against the RT-Bench generator, the generator (specifically periodic_benchmark.c) handles the execution in the following way:

1. benchmark_init is executed to prepare the benchmark environment.
2. benchmark_execution is executed periodically; metrics are collected while benchmark_execution is running.
3. This continues until SIGINT is received, at which point the benchmark_teardown function clears the benchmark environment and allows for a clean exit.

The associated paper, available on the ACM Digital Library and arXiv, contains more details on the design and some examples of what can be done with the framework.