iai_callgrind

Attribute Macro library_benchmark

#[library_benchmark]
Available on crate feature default only.

The #[library_benchmark] attribute lets you define a benchmark function which you can later use in the library_benchmark_groups! macro.

This attribute accepts the following parameters:

  • config: Accepts a LibraryBenchmarkConfig
  • setup: A global setup function which is applied to all following #[bench] and #[benches] attributes if not overwritten by a setup parameter of these attributes.
  • teardown: Similar to setup but takes a global teardown function.

A short introductory example of the usage, including the setup parameter:

fn my_setup(value: u64) -> String {
    format!("{value}")
}

fn my_other_setup(value: u64) -> String {
    format!("{}", value + 10)
}

#[library_benchmark(setup = my_setup)]
#[bench::first(21)]
#[benches::multiple(42, 84)]
#[bench::last(args = (102), setup = my_other_setup)]
fn my_bench(value: String) {
    println!("{value}");
}
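Spelled out, the four cases above wire up the following calls. This is a minimal plain-Rust sketch of the inputs each case produces, not the actual generated code:

```rust
fn my_setup(value: u64) -> String {
    format!("{value}")
}

fn my_other_setup(value: u64) -> String {
    format!("{}", value + 10)
}

fn main() {
    // bench::first uses the global setup parameter
    assert_eq!(my_setup(21), "21");
    // benches::multiple applies the global setup to each element
    assert_eq!(my_setup(42), "42");
    assert_eq!(my_setup(84), "84");
    // bench::last overrides the global setup with my_other_setup
    assert_eq!(my_other_setup(102), "112");
    println!("all cases produce the expected benchmark inputs");
}
```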

The #[library_benchmark] attribute can be applied in two ways.

  1. Using the #[library_benchmark] attribute as a standalone, without #[bench] or #[benches], is fine for simple function calls without parameters.
  2. Mostly, however, we need to benchmark cases which require some setup, for example building a vector, and everything we set up within the benchmark function itself would be attributed to the event counts. The second form of this attribute macro uses the #[bench] and #[benches] attributes to set up benchmarks with different cases. The main advantage is that the setup costs and event counts aren't attributed to the benchmark (and, as opposed to the old api, we don't have to deal with callgrind arguments, toggles, inline(never), …)

§The #[bench] attribute

The basic structure is #[bench::some_id(/* parameters */)]. The part after the :: must be an id unique within the same #[library_benchmark]. This attribute accepts the following parameters:

  • args: A tuple with a list of arguments which are passed to the benchmark function. The parentheses also need to be present if there is only a single argument (#[bench::my_id(args = (10))]).
  • config: Accepts a LibraryBenchmarkConfig
  • setup: A function which takes the arguments specified in the args parameter and passes its return value to the benchmark function.
  • teardown: A function which takes the return value of the benchmark function.

If no other parameters besides args are present you can simply pass the arguments as a list of values. Instead of #[bench::my_id(args = (10, 20))], you could also use the shorter #[bench::my_id(10, 20)].

// Assume this is a function in your library which you want to benchmark
fn some_func(value: u64) -> u64 {
    42
}

#[library_benchmark]
#[bench::some_id(42)]
fn bench_some_func(value: u64) -> u64 {
    std::hint::black_box(some_func(value))
}
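The setup and teardown parameters are not shown above. Conceptually, the macro chains args, setup, the benchmark function and teardown together. The following is a hand-written sketch of that data flow with hypothetical helper names, not the actual generated code:

```rust
// Hypothetical helpers illustrating the #[bench] data flow:
// args -> setup -> benchmark function -> teardown
fn my_setup(value: u64) -> String {
    format!("{value}")
}

fn my_bench(input: String) -> u64 {
    // stand-in for the function under benchmark
    input.parse::<u64>().unwrap()
}

fn my_teardown(result: u64) {
    // receives the benchmark function's return value
    assert_eq!(result, 42);
}

fn main() {
    // What `#[bench::id(args = (42), setup = my_setup, teardown = my_teardown)]`
    // conceptually wires together:
    let input = my_setup(42); // args are passed to setup
    let result = my_bench(input); // setup's return value goes to the benchmark
    my_teardown(result); // the benchmark's return value goes to teardown
}
```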

§The #[benches] attribute

The #[benches] attribute lets you define multiple benchmarks in one go. It accepts the same parameters as the #[bench] attribute: args, config, setup and teardown, plus the additional file parameter. In contrast to the args parameter of #[bench], args here takes an array of arguments. The id (#[benches::id(/* parameters */)]) is suffixed with the index of the current element of the args array.

use std::hint::black_box;

fn setup_worst_case_array(start: i32) -> Vec<i32> {
    if start.is_negative() {
        (start..0).rev().collect()
    } else {
        (0..start).rev().collect()
    }
}

#[library_benchmark]
#[benches::multiple(vec![1], vec![5])]
#[benches::with_setup(args = [1, 5], setup = setup_worst_case_array)]
fn bench_bubble_sort_with_benches_attribute(input: Vec<i32>) -> Vec<i32> {
    black_box(my_lib::bubble_sort(input))
}

Usually the arguments are passed directly to the benchmark function, as can be seen in the #[benches::multiple(...)] case. In #[benches::with_setup(...)], the arguments are passed to the setup function instead, and the return value of the setup function is passed as argument to the benchmark function. The above #[library_benchmark] is pretty much the same as

use std::hint::black_box;

#[library_benchmark]
#[bench::multiple_0(vec![1])]
#[bench::multiple_1(vec![5])]
#[bench::with_setup_0(setup_worst_case_array(1))]
#[bench::with_setup_1(setup_worst_case_array(5))]
fn bench_bubble_sort_with_benches_attribute(input: Vec<i32>) -> Vec<i32> {
    black_box(my_lib::bubble_sort(input))
}

but a lot more concise, especially if many values are passed to the same setup function.

The file parameter goes a step further: it reads the specified file line by line and creates a benchmark from each line. Each line is passed to the benchmark function as a String or, if the setup parameter is also present, to the setup function. A small example, assuming you have a file benches/inputs (relative paths are interpreted relative to the workspace root) with the following content

1
11
111

then

use std::hint::black_box;
#[library_benchmark]
#[benches::by_file(file = "benches/inputs")]
fn some_bench(line: String) -> Result<u64, String> {
    black_box(my_lib::string_to_u64(line))
}

The above is roughly equivalent to the following, which uses the args parameter instead:

use std::hint::black_box;
#[library_benchmark]
#[benches::by_file(args = [1.to_string(), 11.to_string(), 111.to_string()])]
fn some_bench(line: String) -> Result<u64, String> {
    black_box(my_lib::string_to_u64(line))
}
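Conceptually, the file parameter enumerates the lines of the file and generates one id-suffixed benchmark per line. A plain-Rust sketch of that behavior (string_to_u64 is a hypothetical stand-in for a library function):

```rust
fn string_to_u64(line: String) -> Result<u64, String> {
    line.parse::<u64>().map_err(|error| error.to_string())
}

fn main() {
    // Stand-in for the contents of `benches/inputs`
    let file_contents = "1\n11\n111";
    for (index, line) in file_contents.lines().enumerate() {
        // The generated benchmarks are id-suffixed per line:
        // by_file_0, by_file_1, by_file_2, ...
        let result = string_to_u64(line.to_string());
        println!("by_file_{index}: {result:?}");
    }
}
```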

§More Examples

The #[library_benchmark] attribute as a standalone

fn some_func() -> u64 {
    42
}

#[library_benchmark]
// If possible, it's best to return something from a benchmark function
fn bench_my_library_function() -> u64 {
    // The `black_box` is needed to tell the compiler to not optimize what's inside the
    // black_box or else the benchmarks might return inaccurate results.
    std::hint::black_box(some_func())
}

In the following example we pass a single argument of type Vec<i32> to the benchmark. All arguments are already wrapped in a black box and don't need to be put in a black_box again.

// The function in our library which we want to benchmark
fn some_func_with_array(array: Vec<i32>) -> Vec<i32> {
    // do something with the array and return a new array
    array.into_iter().map(|i| i + 1).collect()
}

// This function is used to create a worst case array for our `some_func_with_array`
fn setup_worst_case_array(start: i32) -> Vec<i32> {
    if start.is_negative() {
        (start..0).rev().collect()
    } else {
        (0..start).rev().collect()
    }
}

// This benchmark sets up multiple benchmark cases with the advantage that the setup
// costs for creating a vector (even if it is empty) aren't attributed to the benchmark
// and that the `array` is already wrapped in a black_box.
#[library_benchmark]
#[bench::empty(vec![])]
#[bench::worst_case_6(vec![6, 5, 4, 3, 2, 1])]
// Function calls are fine too
#[bench::worst_case_4000(setup_worst_case_array(4000))]
// The parameter of the benchmark function defines the type of the arguments in the
// `bench` cases.
fn bench_some_func_with_array(array: Vec<i32>) -> Vec<i32> {
    // Note `array` does not need to be put in a `black_box` because that's already done for
    // you.
    std::hint::black_box(some_func_with_array(array))
}

// The following benchmark uses the `#[benches]` attribute to set up multiple benchmark
// cases in one go
#[library_benchmark]
#[benches::multiple(vec![1], vec![5])]
// Reroute the `args` to a `setup` function and use the setup function's return value as
// input for the benchmarking function
#[benches::with_setup(args = [1, 5], setup = setup_worst_case_array)]
fn bench_using_the_benches_attribute(array: Vec<i32>) -> Vec<i32> {
    std::hint::black_box(some_func_with_array(array))
}
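The config parameter has not been demonstrated yet. As a hedged sketch (assuming the crate's LibraryBenchmarkConfig type with its Default implementation; consult the LibraryBenchmarkConfig documentation for the available builder methods), it can be attached to the whole #[library_benchmark]:

```rust
use iai_callgrind::{library_benchmark, LibraryBenchmarkConfig};

fn some_func() -> u64 {
    42
}

// Hypothetical usage: a default config; in practice you would customize it
// with the builder methods of `LibraryBenchmarkConfig`.
#[library_benchmark(config = LibraryBenchmarkConfig::default())]
fn bench_with_config() -> u64 {
    std::hint::black_box(some_func())
}
```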