#[binary_benchmark]
Available on crate feature default only.
Used to annotate functions that build the iai_callgrind::Command to be benchmarked.
This macro works almost the same way as the crate::library_benchmark attribute. Please see there for the basic usage.
§Differences to the #[library_benchmark] attribute
Any config parameter takes a BinaryBenchmarkConfig instead of a LibraryBenchmarkConfig.
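For instance, a minimal sketch of passing a BinaryBenchmarkConfig via the config parameter (my-echo is a hypothetical crate binary used only for illustration):

use iai_callgrind::{binary_benchmark, BinaryBenchmarkConfig, Sandbox};

// `my-echo` is a hypothetical binary of this crate, used here only for illustration.
// The `config` parameter of `#[binary_benchmark]` takes a `BinaryBenchmarkConfig`.
#[binary_benchmark(config = BinaryBenchmarkConfig::default().sandbox(Sandbox::new(true)))]
fn bench_echo() -> iai_callgrind::Command {
    iai_callgrind::Command::new(env!("CARGO_BIN_EXE_my-echo"))
        .arg("hello")
        .build()
}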
All functions annotated with the #[binary_benchmark] attribute need to return an iai_callgrind::Command. Also, the annotated function itself is not benchmarked. Instead, this function serves as a builder for the Command which is benchmarked. So, any code within this function is evaluated only once, when all Commands in this benchmark file are collected and built. You can put any code in the function which is necessary to build the Command without attributing any event counts to the benchmark results, which is why the setup and teardown parameters work differently in binary benchmarks.
The setup and teardown parameters of #[binary_benchmark], #[bench] and #[benches] take an expression instead of a function pointer. The expression of the setup (teardown) parameter is not evaluated and executed until just before (after) the Command is executed (not built). There is a special case if setup or teardown is a function pointer, as in library benchmarks. In this case, the args from #[bench] or #[benches] are passed to the annotated function AND to setup or teardown, respectively.
For example (suppose your crate's binary is named my-foo):
use iai_callgrind::{binary_benchmark, BinaryBenchmarkConfig, Sandbox};
use std::path::PathBuf;
// In binary benchmarks there's no need to return a value from the setup function
fn simple_setup() {
println!("Put code in here which will be run before the actual command");
}
// It is good style to make any setup function idempotent, so it doesn't depend on the
// `teardown` having run. The `teardown` function isn't executed if the benchmark
// command fails to run successfully.
fn create_file(path: &str) {
// You can for example create a file here which should be available for the `Command`
std::fs::File::create(path).unwrap();
}
fn teardown() {
// Let's clean up this temporary file after we have used it
std::fs::remove_file("file_from_setup_function.txt").unwrap();
}
#[binary_benchmark]
#[bench::just_a_fixture("benches/fixture.json")]
// First big difference to library benchmarks! `simple_setup()` is not evaluated right away and
// its return value is not used as input for the `bench_foo` function. Instead,
// `simple_setup()` is executed just before the execution of the `Command`.
#[bench::with_other_fixture_and_setup(args = ("benches/other_fixture.txt"), setup = simple_setup())]
// Here, setup is a function pointer, which tells iai-callgrind to route `args` to `setup` AND `bench_foo`
#[bench::file_from_setup(args = ("file_from_setup_function.txt"), setup = create_file, teardown = teardown())]
// Just a small example for the basic usage of the `#[benches]` attribute
#[benches::multiple("benches/fix_1.txt", "benches/fix_2.txt")]
// We're using a `BinaryBenchmarkConfig` in binary benchmarks to configure these benchmarks to
// run in a sandbox.
#[benches::multiple_with_config(
args = ["benches/fix_1.txt", "benches/fix_2.txt"],
config = BinaryBenchmarkConfig::default()
.sandbox(Sandbox::new(true)
.fixtures(["benches/fix_1.txt", "benches/fix_2.txt"])
)
)]
// All functions annotated with `#[binary_benchmark]` need to return an `iai_callgrind::Command`
fn bench_foo(path: &str) -> iai_callgrind::Command {
let path = PathBuf::from(path);
// We can put any code in here which is needed to configure the `Command`.
let stdout = if path.extension().unwrap() == "txt" {
iai_callgrind::Stdio::Inherit
} else {
iai_callgrind::Stdio::File(path.with_extension("out"))
};
// Configure the command depending on the arguments passed to this function and the code
// above
iai_callgrind::Command::new(env!("CARGO_BIN_EXE_my-foo"))
.stdout(stdout)
.arg(path)
.build()
}
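To actually run bench_foo, the annotated function still has to be registered in a binary benchmark group and handed to the main macro. A minimal sketch of this boilerplate (the group name my_group is just a placeholder):

use iai_callgrind::{binary_benchmark_group, main};

// Collect the annotated benchmark functions into a group ...
binary_benchmark_group!(
    name = my_group;
    benchmarks = bench_foo
);

// ... and generate the benchmark harness entry point from that group.
main!(binary_benchmark_groups = my_group);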