| mark {bench} | R Documentation |
Description:

     Benchmark a list of quoted expressions. Each expression will always
     run at least twice: once to measure the memory allocation and store
     the results, and one or more times to measure timing.

Usage:

     mark(
       ...,
       min_time = 0.5,
       iterations = NULL,
       min_iterations = 1,
       max_iterations = 10000,
       check = TRUE,
       filter_gc = TRUE,
       relative = FALSE,
       exprs = NULL,
       env = parent.frame()
     )
Arguments:

     ...: Expressions to benchmark. If named, the names are used for the
          expression column of the output; otherwise the deparsed
          expressions are used.

     min_time: The minimum number of seconds to run each expression. Set
          to Inf to always run max_iterations times instead.

     iterations: If not NULL (the default), run each expression for
          exactly this number of iterations. This overrides both
          min_iterations and max_iterations.

     min_iterations: Each expression will be evaluated a minimum of
          min_iterations times.

     max_iterations: Each expression will be evaluated a maximum of
          max_iterations times.

     check: Check if results are consistent. If TRUE, checking is done
          with all.equal(); if FALSE, checking is disabled and the
          results are not stored. If check is a function, that function
          is called with each pair of results to determine consistency.

     filter_gc: If TRUE, iterations that contained at least one garbage
          collection are removed before summarizing. If an expression
          had a garbage collection in every iteration, filtering is
          disabled, with a warning.

     relative: If TRUE, all summaries are computed relative to the
          minimum value of each column rather than reported as absolute
          values.

     exprs: A list of quoted expressions. If supplied, overrides
          expressions defined in ....

     env: The environment in which to evaluate the expressions.
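As an illustrative sketch of how the `...` and `check` arguments interact (assuming the bench package is installed), named expressions label the output rows, and the default `check = TRUE` verifies via `all.equal()` that every expression returns the same value:

```r
library(bench)

# Two equivalent ways to compute square roots; because the results
# agree within tolerance, the default check = TRUE passes.
results <- mark(
  sqrt_fun = sqrt(1:1000),
  power_op = (1:1000)^0.5,
  min_time = 0.1
)

# The names given above appear in the expression column.
results$expression
```

If the expressions deliberately return different values, passing `check = FALSE` skips the comparison.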
Value:

     A tibble with the additional summary columns. The following summary
     columns are computed:

     - min - bench_time. The minimum execution time.

     - mean - bench_time. The arithmetic mean of execution time.

     - median - bench_time. The sample median of execution time.

     - max - bench_time. The maximum execution time.

     - mem_alloc - bench_bytes. Total amount of memory allocated by
       running the expression.

     - itr/sec - integer. The estimated number of executions performed
       per second.

     - n_itr - integer. Total number of iterations after filtering
       garbage collections (if filter_gc == TRUE).

     - n_gc - integer. Total number of garbage collections performed
       over all runs.
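For illustration (a sketch assuming the bench package is available), `relative = TRUE` rescales these summary columns by their column minimum, so the fastest expression reads as 1 and slower ones as multiples of it:

```r
library(bench)

x <- runif(1000)

# relative = TRUE divides each summary column by its minimum, so at
# least one row shows 1 in the timing columns.
rel <- mark(
  sum(x),
  Reduce(`+`, x),
  min_time = 0.1,
  relative = TRUE
)

rel[c("expression", "min", "median", "itr/sec", "mem_alloc")]
```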
See Also:

     press() to run benchmarks across a grid of parameters.
Examples:

     dat <- data.frame(x = runif(100, 1, 1000), y = runif(10, 1, 1000))

     mark(
       min_time = .1,

       dat[dat$x > 500, ],
       dat[which(dat$x > 500), ],
       subset(dat, x > 500))
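A minimal sketch of the `press()` companion mentioned in See Also (assuming the bench package is installed), rerunning the same filtering comparison across two data sizes supplied as a parameter grid:

```r
library(bench)

# press() evaluates mark() once for every combination of the named
# parameters; here n takes two values, giving 2 x 3 = 6 result rows.
p <- press(
  n = c(100, 1000),
  {
    dat <- data.frame(x = runif(n, 1, 1000), y = runif(n, 1, 1000))
    mark(
      min_time = .1,
      dat[dat$x > 500, ],
      dat[which(dat$x > 500), ],
      subset(dat, x > 500)
    )
  }
)
```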