pub struct Criterion<M: Measurement = WallTime> { /* private fields */ }
The benchmark manager. Criterion lets you configure and execute benchmarks.
Each benchmark consists of four phases:
- Warm-up: The routine is repeatedly executed, to let the CPU/OS/JIT/interpreter adapt to the new load
- Measurement: The routine is repeatedly executed, and timing information is collected into a sample
- Analysis: The sample is analyzed and distilled into meaningful statistics that get reported to stdout, stored in files, and plotted
- Comparison: The current sample is compared with the sample obtained in the previous benchmark.
Implementations

impl<M: Measurement> Criterion<M>

pub fn with_measurement<M2: Measurement>(self, m: M2) -> Criterion<M2>
Changes the measurement for the benchmarks run with this runner. See the Measurement trait for more details.
pub fn with_profiler<P: Profiler + 'static>(self, p: P) -> Criterion<M>
Changes the internal profiler for benchmarks run with this runner. See the Profiler trait for more details.
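As a sketch of how this is used, a custom profiler implements the two hooks of the Profiler trait and is then attached with with_profiler. The FlamegraphProfiler name below is hypothetical, and the hook bodies are placeholders:

```rust
use std::path::Path;

use criterion::profiler::Profiler;
use criterion::Criterion;

// Hypothetical profiler that would collect samples while a benchmark runs.
struct FlamegraphProfiler;

impl Profiler for FlamegraphProfiler {
    fn start_profiling(&mut self, benchmark_id: &str, benchmark_dir: &Path) {
        // Begin collecting profiling data for `benchmark_id` here.
        let _ = (benchmark_id, benchmark_dir);
    }

    fn stop_profiling(&mut self, benchmark_id: &str, benchmark_dir: &Path) {
        // Stop collecting and write results (e.g. a flamegraph) into `benchmark_dir`.
        let _ = (benchmark_id, benchmark_dir);
    }
}

fn profiled_criterion() -> Criterion {
    Criterion::default().with_profiler(FlamegraphProfiler)
}
```

The profiler hooks only fire when benchmarks are run in profiling mode (the --profile-time CLI option).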
pub fn sample_size(self, n: usize) -> Criterion<M>
Changes the default size of the sample for benchmarks run with this runner.
A bigger sample should yield more accurate results if paired with a sufficiently large measurement time.
Sample size must be at least 10.
Panics
Panics if n < 10
pub fn warm_up_time(self, dur: Duration) -> Criterion<M>
Changes the default warm up time for benchmarks run with this runner.
Panics
Panics if the input duration is zero
pub fn measurement_time(self, dur: Duration) -> Criterion<M>
Changes the default measurement time for benchmarks run with this runner.
With a longer time, the measurement will become more resilient to transitory peak loads caused by external programs
Note: If the measurement time is too “low”, Criterion will automatically increase it
Panics
Panics if the input duration is zero
pub fn nresamples(self, n: usize) -> Criterion<M>
Changes the default number of resamples used for the bootstrap for benchmarks run with this runner.
A larger number of resamples reduces the random sampling errors inherent to the bootstrap method, but also increases the analysis time.
Panics
Panics if the number of resamples is set to zero
pub fn noise_threshold(self, threshold: f64) -> Criterion<M>
Changes the default noise threshold for benchmarks run with this runner. The noise threshold is used to filter out small changes in performance, even if they are statistically significant. Sometimes benchmarking the same code twice will result in small but statistically significant differences solely because of noise. This provides a way to filter out some of these false positives at the cost of making it harder to detect small changes to the true performance of the benchmark.
The default is 0.01, meaning that changes smaller than 1% will be ignored.
Panics
Panics if the threshold is set to a negative value
pub fn confidence_level(self, cl: f64) -> Criterion<M>
Changes the default confidence level for benchmarks run with this runner. The confidence level is the desired probability that the true runtime lies within the estimated confidence interval. The default is 0.95, meaning that the confidence interval should capture the true value 95% of the time.
Panics
Panics if the confidence level is set to a value outside the (0, 1) range
pub fn significance_level(self, sl: f64) -> Criterion<M>
Changes the default significance level for benchmarks run with this runner. This is used to perform a hypothesis test to see if the measurements from this run are different from the measured performance of the last run. The significance level is the desired probability that two measurements of identical code will be considered ‘different’ due to noise in the measurements. The default value is 0.05, meaning that approximately 5% of identical benchmarks will register as different due to noise.
This presents a trade-off. By setting the significance level closer to 0.0, you can increase the statistical robustness against noise, but it also weakens Criterion.rs’ ability to detect small but real changes in the performance. By setting the significance level closer to 1.0, Criterion.rs will be more able to detect small true changes, but will also report more spurious differences.
See also the noise threshold setting.
Panics
Panics if the significance level is set to a value outside the (0, 1) range
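Taken together, these statistical settings are usually tuned in a single builder chain. A sketch (the values are illustrative, not recommendations):

```rust
use std::time::Duration;

use criterion::Criterion;

fn custom_criterion() -> Criterion {
    Criterion::default()
        .sample_size(200)                        // more measurements per benchmark
        .warm_up_time(Duration::from_secs(1))    // shorter warm-up
        .measurement_time(Duration::from_secs(10)) // more resilient to transitory load
        .nresamples(200_000)                     // tighter bootstrap estimates
        .noise_threshold(0.02)                   // ignore changes smaller than 2%
        .confidence_level(0.99)                  // wider, more conservative intervals
        .significance_level(0.01)                // fewer spurious "changed" reports
}
```

Each method consumes self and returns the modified Criterion, so the chain can be stopped at any point and passed to criterion_group! or used directly.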
pub fn save_baseline(self, baseline: String) -> Criterion<M>
Names an explicit baseline and enables overwriting the previous results.
pub fn retain_baseline(self, baseline: String, strict: bool) -> Criterion<M>
Names an explicit baseline and disables overwriting the previous results.
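Programmatically, these two methods mirror the --save-baseline and --baseline CLI flags. A sketch (the baseline name "before" is illustrative, and the meaning of strict is as I understand it: error out rather than silently skip when the named baseline does not exist):

```rust
use criterion::Criterion;

// Record this run's results under the baseline name "before",
// overwriting any previous results stored under that name.
fn recording_criterion() -> Criterion {
    Criterion::default().save_baseline("before".to_string())
}

// Compare this run against the saved "before" baseline without
// overwriting it; strict = true treats a missing baseline as an error.
fn comparing_criterion() -> Criterion {
    Criterion::default().retain_baseline("before".to_string(), true)
}
```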
pub fn with_benchmark_filter(self, filter: BenchmarkFilter) -> Criterion<M>
Only run benchmarks specified by the given filter.
This overwrites Self::with_filter.
pub fn with_output_color(self, enabled: bool) -> Criterion<M>
Override whether the CLI output will be colored or not. Usually you would use the --color CLI argument, but this is available for programmatic use as well.
pub fn configure_from_args(self) -> Criterion<M>
Configure this Criterion struct based on the command-line arguments to this process.
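This is mainly useful when writing a custom benchmark harness instead of using the criterion_group!/criterion_main! macros, so that CLI flags such as --measurement-time keep working. A minimal sketch (fibonacci is a placeholder for the code under test):

```rust
use criterion::Criterion;

fn fibonacci(n: u64) -> u64 {
    // Placeholder workload to benchmark.
    (0..n).fold((0u64, 1u64), |(a, b), _| (b, a + b)).0
}

fn main() {
    // Apply any settings passed on the command line on top of the defaults.
    let mut c = Criterion::default().configure_from_args();
    c.bench_function("fib 20", |b| b.iter(|| fibonacci(20)));
    c.final_summary();
}
```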
pub fn benchmark_group<S: Into<String>>(&mut self, group_name: S) -> BenchmarkGroup<'_, M>
Return a benchmark group. All benchmarks performed using a benchmark group will be grouped together in the final report.
Examples
```rust
use self::criterion::*;

fn bench_simple(c: &mut Criterion) {
    let mut group = c.benchmark_group("My Group");

    // Now we can perform benchmarks with this group
    group.bench_function("Bench 1", |b| b.iter(|| 1));
    group.bench_function("Bench 2", |b| b.iter(|| 2));

    group.finish();
}

criterion_group!(benches, bench_simple);
criterion_main!(benches);
```
Panics
Panics if the group name is empty
impl<M> Criterion<M> where M: Measurement + 'static
pub fn bench_function<F>(&mut self, id: &str, f: F) -> &mut Criterion<M>
Benchmarks a function. For comparing multiple functions, see benchmark_group.
Example
```rust
use self::criterion::*;

fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    c.bench_function("function_name", |b| {
        b.iter(|| {
            // Code to benchmark goes here
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
```
pub fn bench_with_input<F, I>(&mut self, id: BenchmarkId, input: &I, f: F) -> &mut Criterion<M>
Benchmarks a function with an input. For comparing multiple functions or multiple inputs, see benchmark_group.
Example
```rust
use self::criterion::*;

fn bench(c: &mut Criterion) {
    // Setup (construct data, allocate memory, etc)
    let input = 5u64;
    c.bench_with_input(BenchmarkId::new("function_name", input), &input, |b, i| {
        b.iter(|| {
            // Code to benchmark using input `i` goes here
        })
    });
}

criterion_group!(benches, bench);
criterion_main!(benches);
```
Trait Implementations

impl Default for Criterion

fn default() -> Criterion
Creates a benchmark manager with the following default settings:
- Sample size: 100 measurements
- Warm-up time: 3 s
- Measurement time: 5 s
- Bootstrap size: 100 000 resamples
- Noise threshold: 0.01 (1%)
- Confidence level: 0.95
- Significance level: 0.05
- Plotting: enabled, using gnuplot if available or plotters if gnuplot is not available
- No filter
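These defaults can be replaced wholesale by passing a configured Criterion through criterion_group!'s config form, which is the usual way to apply a custom configuration to a whole group of benchmarks:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn bench(c: &mut Criterion) {
    c.bench_function("addition", |b| b.iter(|| 2 + 2));
}

criterion_group! {
    name = benches;
    // Start from the defaults above and override only the sample size.
    config = Criterion::default().sample_size(50);
    targets = bench
}
criterion_main!(benches);
```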
Auto Trait Implementations

impl<M> Freeze for Criterion<M> where M: Freeze
impl<M = WallTime> !RefUnwindSafe for Criterion<M>
impl<M = WallTime> !Send for Criterion<M>
impl<M = WallTime> !Sync for Criterion<M>
impl<M> Unpin for Criterion<M> where M: Unpin
impl<M = WallTime> !UnwindSafe for Criterion<M>
Blanket Implementations

impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T
impl<T> Instrument for T

fn instrument(self, span: Span) -> Instrumented<Self>
fn in_current_span(self) -> Instrumented<Self>
impl<T> IntoEither for T

fn into_either(self, into_left: bool) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left is true. Converts self into a Right variant of Either<Self, Self> otherwise.

fn into_either_with<F>(self, into_left: F) -> Either<Self, Self>
Converts self into a Left variant of Either<Self, Self> if into_left(&self) returns true. Converts self into a Right variant of Either<Self, Self> otherwise.