PPC-loo {bayesplot}                                          R Documentation
Description

Leave-one-out (LOO) predictive checks. See the Plot Descriptions section below for details.
Usage

ppc_loo_pit(y, yrep, lw, pit, compare = c("uniform", "normal"), ...,
            size = 2, alpha = 1)

ppc_loo_intervals(y, yrep, lw, intervals = NULL, ..., prob = 0.9,
                  size = 1, fatten = 3, order = c("index", "median"))

ppc_loo_ribbon(y, yrep, lw, intervals = NULL, ..., prob = 0.9,
               alpha = 0.33, size = 0.25)
Arguments

y: A vector of observations. See Details.

yrep: An S by N matrix of draws from the posterior predictive
    distribution, where S is the size of the posterior sample (or the
    subset of the posterior sample used to generate yrep) and N is the
    number of observations (the length of y).

lw: A matrix of (smoothed) log weights with the same dimensions as
    yrep.

pit: For ppc_loo_pit, optionally a vector of precomputed PIT values
    that can be specified instead of y, yrep, and lw (these are all
    ignored if pit is specified). If not specified, the PIT values are
    computed internally before plotting.

compare: For ppc_loo_pit, a string that can be either "uniform" or
    "normal". If "uniform" (the default), the Q-Q plot compares the PIT
    values to the standard uniform distribution. If "normal", the Q-Q
    plot compares standardized PIT values to the standard normal
    distribution.

...: Currently unused.

alpha, size, fatten: Arguments passed to the underlying geoms to
    control plot aesthetics. For ppc_loo_pit, size and alpha are passed
    to geom_point. For ppc_loo_intervals, size and fatten are passed to
    geom_pointrange. For ppc_loo_ribbon, alpha and size are passed to
    geom_ribbon.

intervals: For ppc_loo_intervals and ppc_loo_ribbon, optionally a
    matrix of precomputed LOO predictive intervals that can be specified
    instead of yrep and lw (these are both ignored if intervals is
    specified). The matrix should have three columns containing, in
    order, the lower bounds, medians, and upper bounds of the intervals.

prob: A value between 0 and 1 indicating the desired probability mass
    to include in the intervals. The default is 0.9.

order: For ppc_loo_intervals, a string indicating how to arrange the
    plotted intervals. The default ("index") plots them in the order of
    the observations; "median" arranges them by median value, from
    smallest (left) to largest (right).
Value

A ggplot object that can be further customized using the ggplot2 package.
Plot Descriptions

ppc_loo_pit: The calibration of marginal predictions can be assessed
using probability integral transformation (PIT) checks. LOO improves
the check by avoiding the double use of data. See the section on
marginal predictive checks in Gelman et al. (2013, pp. 152–153). The
default LOO PIT predictive check is a quantile-quantile (Q-Q) plot
comparing the LOO PITs to the standard uniform distribution. The
uniform comparison is difficult to interpret for extreme probabilities
close to 0 and 1, so it can be useful to set the compare argument to
"normal", which produces a Q-Q plot comparing standardized PIT values
to the standard normal distribution. This makes miscalibration at the
extremes easier to see.
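To make the PIT computation concrete, here is a minimal base-R sketch of how LOO PIT values can be derived from y, yrep, and a matrix of smoothed log weights (such as the lw_smooth component returned by the loo package). This illustrates the idea behind the check, not bayesplot's exact internals; the loo_pit helper name and the toy data are assumptions for illustration.

```r
# Sketch (assumption, not bayesplot's internal code): the LOO PIT for
# observation i is the importance-weighted probability that a LOO
# predictive draw is <= y[i].
loo_pit <- function(y, yrep, lw) {
  vapply(seq_along(y), function(i) {
    # normalize the importance weights for observation i (column-wise),
    # subtracting the max log weight first for numerical stability
    w <- exp(lw[, i] - max(lw[, i]))
    w <- w / sum(w)
    # weighted empirical CDF of the predictive draws, evaluated at y[i]
    sum(w * (yrep[, i] <= y[i]))
  }, numeric(1))
}

# toy data: 100 posterior draws for 5 observations, equal log weights
set.seed(1)
yrep <- matrix(rnorm(500), nrow = 100, ncol = 5)
y <- rnorm(5)
lw <- matrix(0, nrow = 100, ncol = 5)  # log(1): equal weights
pit <- loo_pit(y, yrep, lw)
```

A vector computed this way could be passed directly via the pit argument, in which case y, yrep, and lw are ignored.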
ppc_loo_intervals, ppc_loo_ribbon: Similar to ppc_intervals and
ppc_ribbon, but the intervals are computed from the LOO predictive
distribution.
References

Gelman, A., Carlin, J. B., Stern, H. S., Dunson, D. B., Vehtari, A., and Rubin, D. B. (2013). Bayesian Data Analysis. Chapman & Hall/CRC Press, London, third edition. (pp. 152–153)

Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432. doi:10.1007/s11222-016-9696-4. arXiv preprint: http://arxiv.org/abs/1507.04544/
See Also

Other PPCs: PPC-discrete, PPC-distributions, PPC-errors,
PPC-intervals, PPC-overview, PPC-scatterplots, PPC-test-statistics
Examples

## Not run:
library(rstanarm)
library(loo)

head(radon)
fit <- stan_lmer(
  log_radon ~ floor + log_uranium + floor:log_uranium + (1 + floor | county),
  data = radon,
  cores = 2
)
y <- radon$log_radon
yrep <- posterior_predict(fit)
psis <- psislw(-log_lik(fit), cores = 2)

# marginal predictive check using LOO probability integral transform
color_scheme_set("orange")
ppc_loo_pit(y, yrep, lw = psis$lw_smooth)
ppc_loo_pit(y, yrep, lw = psis$lw_smooth, compare = "normal")

# LOO predictive intervals vs observations
sel <- 800:900
ppc_loo_intervals(y[sel], yrep[, sel], psis$lw_smooth[, sel],
                  prob = 0.9, size = 0.5)
color_scheme_set("gray")
ppc_loo_intervals(y[sel], yrep[, sel], psis$lw_smooth[, sel],
                  order = "median", prob = 0.8, size = 0.5)
## End(Not run)