pub enum BatchSize {
SmallInput,
LargeInput,
PerIteration,
NumBatches(u64),
NumIterations(u64),
// some variants omitted
}
Argument to Bencher::iter_batched and Bencher::iter_batched_ref which controls the batch size.
Generally speaking, almost all benchmarks should use SmallInput. If the input or the result of the benchmark routine is large enough that SmallInput causes out-of-memory errors, LargeInput can be used to reduce memory usage at the cost of increasing the measurement overhead. If the input or the result is extremely large (or if it holds some limited external resource like a file handle), PerIteration will set the number of iterations per batch to exactly one. PerIteration can increase the measurement overhead substantially and should be avoided wherever possible.
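For example, a typical benchmark using SmallInput might look like the following sketch; the bench_sort function, the "sort_reversed_vec" id, and the sorting routine are illustrative assumptions, not part of this crate:

use criterion::{criterion_group, criterion_main, BatchSize, Criterion};

fn bench_sort(c: &mut Criterion) {
    c.bench_function("sort_reversed_vec", |b| {
        b.iter_batched(
            // Setup: build a fresh input for every iteration. With SmallInput,
            // many of these values may be held in memory at once.
            || (0..1_000u32).rev().collect::<Vec<_>>(),
            // Routine: takes the input by value and sorts it.
            |mut v| v.sort(),
            BatchSize::SmallInput,
        )
    });
}

criterion_group!(benches, bench_sort);
criterion_main!(benches);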
Each value lists an estimate of the measurement overhead. This is intended as a rough guide to assist in choosing an option; it should not be relied upon. In particular, it is not valid to subtract the listed overhead from the measurement and assume that the result represents the true runtime of a function. The actual measurement overhead for your specific benchmark depends on the details of the function you’re benchmarking and the hardware and operating system running the benchmark.
With that said, if the runtime of your function is small relative to the measurement overhead, it will be difficult to take accurate measurements. In this situation, the best option is to use Bencher::iter, which has next-to-zero measurement overhead.
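For instance, a sketch of such a benchmark (the addition routine and the "add" id are illustrative assumptions):

use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn bench_add(c: &mut Criterion) {
    // No setup routine is needed here, so there is no batching and almost no
    // measurement overhead.
    c.bench_function("add", |b| b.iter(|| black_box(1u64) + black_box(2u64)));
}

criterion_group!(benches, bench_add);
criterion_main!(benches);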
Variants
SmallInput
SmallInput indicates that the input to the benchmark routine (the value returned from the setup routine) is small enough that millions of values can be safely held in memory. Always prefer SmallInput unless the benchmark is using too much memory.
In testing, the maximum measurement overhead from benchmarking with SmallInput is on the order of 500 picoseconds. This is presented as a rough guide; your results may vary.
LargeInput
LargeInput indicates that the input to the benchmark routine or the value returned from that routine is large. This will reduce the memory usage but increase the measurement overhead.
In testing, the maximum measurement overhead from benchmarking with LargeInput is on the order of 750 picoseconds. This is presented as a rough guide; your results may vary.
PerIteration
PerIteration indicates that the input to the benchmark routine or the value returned from that routine is extremely large or holds some limited resource, such that holding many values in memory at once is infeasible. This provides the worst measurement overhead, but the lowest memory usage.
In testing, the maximum measurement overhead from benchmarking with PerIteration is on the order of 350 nanoseconds or 350,000 picoseconds. This is presented as a rough guide; your results may vary.
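For instance, inside a bench_function closure (as in the earlier sketch), a benchmark whose setup allocates a buffer too large to batch might look like this; the one-gibibyte size and the in-place routine are illustrative assumptions:

// Each batch contains exactly one iteration, so only one copy of the large
// input is alive at a time.
b.iter_batched_ref(
    // Setup: allocate a very large buffer (about 1 GiB in this sketch).
    || vec![0u8; 1 << 30],
    // Routine: mutate the buffer in place through a mutable reference.
    |buf| buf.iter_mut().for_each(|byte| *byte = byte.wrapping_add(1)),
    BatchSize::PerIteration,
);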
NumBatches(u64)
NumBatches will attempt to divide the iterations up into a given number of batches. A larger number of batches (and thus smaller batches) will reduce memory usage but increase measurement overhead. This allows the user to choose their own tradeoff between memory usage and measurement overhead, but care must be taken in tuning the number of batches. Most benchmarks should use SmallInput or LargeInput instead.
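As an illustrative sketch inside a bench_function closure (the batch count of 10 and the million-element input are assumptions chosen for the example):

// Divide the iterations into 10 batches; each batch's inputs are built up
// front, so roughly one tenth of all inputs are held in memory at a time.
b.iter_batched(
    || vec![0u64; 1_000_000],
    |v| v.into_iter().sum::<u64>(),
    BatchSize::NumBatches(10),
);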
NumIterations(u64)
NumIterations fixes the batch size to a constant number, specified by the user. This allows the user to choose their own tradeoff between overhead and memory usage, but care must be taken in tuning the batch size. In general, the measurement overhead of NumIterations will be larger than that of NumBatches. Most benchmarks should use SmallInput or LargeInput instead.
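A sketch inside a bench_function closure (the batch size of 100 and the string routine are illustrative assumptions):

// Each batch holds exactly 100 pre-built inputs, regardless of how many
// iterations are run in total.
b.iter_batched(
    || String::from("hello world"),
    |s| s.to_uppercase(),
    BatchSize::NumIterations(100),
);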
Trait Implementations
impl Copy for BatchSize
impl Eq for BatchSize
impl StructuralEq for BatchSize
impl StructuralPartialEq for BatchSize
Auto Trait Implementations
impl RefUnwindSafe for BatchSize
impl Send for BatchSize
impl Sync for BatchSize
impl Unpin for BatchSize
impl UnwindSafe for BatchSize
Blanket Implementations
impl<T> BorrowMut<T> for T where
T: ?Sized,
fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value.