| Type: | Package |
| Title: | Bayesian Aggregate Treatment Effects |
| Version: | 0.7.11 |
| Maintainer: | Witold Wiecek <witold.wiecek@gmail.com> |
| Description: | Running and comparing meta-analyses of data with hierarchical Bayesian models in Stan, including convenience functions for formatting data, plotting and pooling measures specific to meta-analysis. This implements many models from Meager (2019) <doi:10.1257/app.20170299>. |
| License: | GPL (≥ 3) |
| Encoding: | UTF-8 |
| LazyData: | true |
| Biarch: | true |
| Depends: | R (≥ 3.5.0), Rcpp (≥ 0.12.17) |
| Imports: | rstan (≥ 2.26.0), rstantools (≥ 2.1.1), bayesplot, crayon, forestplot, ggplot2, ggplotify, ggrepel, gridExtra, utils, stats, testthat, methods |
| LinkingTo: | StanHeaders (≥ 2.26.0), rstan (≥ 2.26.0), BH (≥ 1.66.0-1), Rcpp (≥ 0.12.17), RcppParallel (≥ 5.0.1), RcppEigen (≥ 0.3.3.4.0) |
| SystemRequirements: | GNU make |
| NeedsCompilation: | yes |
| RoxygenNote: | 7.3.2 |
| Suggests: | knitr, covr, rmarkdown |
| VignetteBuilder: | knitr |
| URL: | https://github.com/wwiecek/baggr |
| BugReports: | https://github.com/wwiecek/baggr/issues |
| Language: | en-GB |
| Packaged: | 2025-06-19 05:00:30 UTC; witol |
| Author: | Witold Wiecek [cre, aut], Rachael Meager [aut], Brice Green [ctb] (loo_compare, many visuals), Danny Toomey [ctb] (many bug fixes), Trustees of Columbia University [cph] (package skeleton) |
| Repository: | CRAN |
| Date/Publication: | 2025-06-19 09:10:02 UTC |
baggr - a package for Bayesian meta-analysis
Description
This is baggr (pronounced as bagger or badger), a Bayesian meta-analysis package for R that uses Stan to fit the models. Baggr is intended to be user-friendly and transparent so that it's easier to understand the models you are building and criticise them.
Details
The baggr package provides a suite of models that work with both summary data and full data sets, to synthesise evidence collected from different groups, contexts or time periods. The baggr command automatically detects the data type and, by default, fits a partial pooling model (which some users may know as random effects models) with weakly informative priors by calling Stan to carry out Bayesian inference. Modelling of variances or quantiles, standardisation and transformation of data are also possible.
Getting help
This is only a simple package help file. For documentation of the main function for conducting analyses see baggr. For description of models, data types and priors available in the package, try the built-in vignette (vignette("baggr")).
Author(s)
Maintainer: Witold Wiecek <witold.wiecek@gmail.com>
Authors:
Rachael Meager
Other contributors:
Brice Green (loo_compare, many visuals) [contributor]
Danny Toomey (many bug fixes) [contributor]
Trustees of Columbia University (package skeleton) [copyright holder]
References
Stan Development Team (2020). RStan: the R interface to Stan. R package version 2.21.2. https://mc-stan.org
See Also
Useful links:
https://github.com/wwiecek/baggr
Report bugs at https://github.com/wwiecek/baggr/issues
Add colors to baggr plots
Description
Add colors to baggr plots
Usage
add_color_to_plot(p, what)
Arguments
p | A ggplot object to add colors to |
what | A named vector, e.g. |
Bayesian aggregate treatment effects model
Description
Bayesian inference on parameters of an average treatment effects model that's appropriate to the supplied individual- or group-level data, using Hamiltonian Monte Carlo in Stan. (For the overall package help file see baggr-package.)
Usage
baggr(
  data,
  model = NULL,
  pooling = c("partial", "none", "full"),
  effect_label = NULL,
  covariates = c(),
  prior_hypermean = NULL,
  prior_hypersd = NULL,
  prior_hypercor = NULL,
  prior_beta = NULL,
  prior_cluster = NULL,
  prior_control = NULL,
  prior_control_sd = NULL,
  prior_sigma = NULL,
  prior = NULL,
  ppd = FALSE,
  pooling_control = c("none", "partial", "remove"),
  test_data = NULL,
  quantiles = seq(0.05, 0.95, 0.1),
  outcome = "outcome",
  group = "group",
  treatment = "treatment",
  cluster = NULL,
  silent = FALSE,
  warn = TRUE,
  ...
)
Arguments
data | data frame with summary or individual level data to meta-analyse; see Details section for how to format your data |
model | if |
pooling | Type of pooling; choose from |
effect_label | How to label the effect(s). These labels are used in various print and plot outputs. Will default to |
covariates | Character vector with column names in |
prior_hypermean | prior distribution for hypermean; you can use "plain text" notation like |
prior_hypersd | prior for hyper-standard deviation, used by Rubin and |
prior_hypercor | prior for hypercorrelation matrix, used by the |
prior_beta | prior for regression coefficients if |
prior_cluster | priors for SDs of cluster random effects in each study (i.e. assuming normal(0, sigma_k^2), with different sigma in each |
prior_control | prior for the mean in the control arm (baseline), currently used in |
prior_control_sd | prior for the SD in the control arm (baseline), currently used in |
prior_sigma | prior for error terms in linear regression models ( |
prior | alternative way to specify all priors as a named list with |
ppd | logical; use prior predictive distribution? (p.p.d.) If |
pooling_control | Pooling for group-specific control mean terms in models using individual-level data. Typically we use either |
test_data | data for cross-validation; NULL for no validation, otherwise a data frame with the same columns as |
quantiles | if |
outcome | column name in |
group | column name in |
treatment | column name in (individual-level) |
cluster | optional; column name in (individual-level) |
silent | Whether to silence messages about prior settings and about other automatic behaviour. |
warn | print an additional warning if Rhat exceeds 1.05 |
... | extra options passed to Stan function, e.g. |
Details
Below we briefly discuss 1/ data preparation, 2/ choice of model, 3/ choice of priors. All three are discussed in more depth in the package vignette, vignette("baggr").
Data. For aggregate data models you need a data frame with columns tau and se (Rubin model) or tau, mu, se.tau, se.mu ("mu & tau" model). An additional column can be used to provide labels for each group (by default column group is used if available, but this can be customised – see the example below). For individual-level data three columns are needed: outcome, treatment, group. These are identified by using the outcome, treatment and group arguments.
Many data preparation steps can be done through a helper function, prepare_ma. It can convert individual to summary-level data, calculate odds/risk ratios (with/without corrections) in binary data, standardise variables and more. Using it will automatically format data inputs to work with baggr().
Models. Available models are:
for the continuous variable means:
"rubin" model for average treatment effect (using summary data),
"mutau" version which takes into account means of control groups (also using summary data),
"rubin_full", which is the same model as "rubin" but works with individual-level data
for binary data:
"logit" model can be used on individual-level data; you can also analyse continuous statistics such as log odds ratios and log risk ratios using the models listed above; see vignette("baggr_binary") for a tutorial with examples
If no model is specified, the function tries to infer the appropriate model automatically. Additionally, the user must specify the type of pooling. The default is always partial pooling.
Covariates. Both aggregate and individual-level data can include extra columns, given by the covariates argument (specified as a character vector of column names), to be used in regression models. We also refer to the impact of these covariates as fixed effects.
Two types of covariates may be present in your data:
In "rubin" and "mutau" models, covariates that change according to group unit. In that case, the model accounting for the group covariates is a meta-regression model. It can be modelled on summary-level data.
In "logit" and "rubin_full" models, covariates that change according to individual unit. Then, such a model is often referred to as a mixed model. It has to be fitted to individual-level data. Note that meta-regression is a special case of a mixed model for individual-level data.
Priors. It is optional to specify priors yourself, as the package will try to propose an appropriate prior for the input data if you do not pass a prior argument. To set the priors yourself, use the prior_ arguments. For specifying many priors at once (or re-using them between models), a single prior = list(...) argument can be used instead. The meaning of the prior parameters may slightly change from model to model. Details and examples are given in vignette("baggr"). Setting ppd=TRUE can be used to obtain prior predictive distributions, which is useful for understanding the prior assumptions, especially in conjunction with effect_plot. You can also compare different priors by setting baggr_compare(..., compare="prior").
Cross-validation. When test_data are specified, an extra parameter, the log predictive density, will be returned by the model. (The fitted model itself is the same regardless of whether there are test_data.) To understand this parameter, see the documentation of loocv, a function that can be used to assess out-of-sample prediction of the model using all available data. If using an individual-level data model, test_data should only include treatment arms of the groups of interest. (This is because in cross-validation we are not typically interested in the model's ability to fit heterogeneity in control arms, but only heterogeneity in treatment arms.) For aggregate-level data, there is no such restriction.
Outputs. By default, some outputs are printed. There is also a plot method for baggr objects which you can access via baggr_plot (or simply plot()). Other standard functions for working with a baggr object are:
treatment_effect for distribution of hyperparameters
group_effects for distributions of group-specific parameters (alias: study_effects, we use the two interchangeably)
fixed_effects for coefficients in (meta-)regression
effect_draw and effect_plot for posterior predictive distributions
baggr_compare for comparing multiple baggr models
loocv for cross-validation
Value
baggr class structure: a list including the Stan model fit alongside input data, pooling metrics, and various model properties. If test data are used, the mean value of -2*lpd is reported as mean_lpd.
Examples
df_pooled <- data.frame("tau" = c(1, -1, .5, -.5, .7, -.7, 1.3, -1.3), "se" = rep(1, 8), "state" = datasets::state.name[1:8])baggr(df_pooled) #baggr automatically detects the input data# same model, but with correct labels,# different pooling & passing some options to Stanbaggr(df_pooled, group = "state", pooling = "full", iter = 500)# model with non-default (and very informative) priorsbaggr(df_pooled, prior_hypersd = normal(0, 2))# "mu & tau" model, using a built-in dataset# prepare_ma() can summarise individual-level datams <- microcredit_simplifiedmicrocredit_summary_data <- prepare_ma(ms, outcome = "consumption")baggr(microcredit_summary_data, model = "mutau", iter = 500, #this is just for illustration -- don't set it this low normally! pooling = "partial", prior_hypercor = lkj(1), prior_hypersd = normal(0,10), prior_hypermean = multinormal(c(0,0),matrix(c(10,3,3,10),2,2)))(Run and) compare multiple baggr models
Description
Compare multiple baggr models by either providing multiple already-existing models as (named) arguments or passing parameters necessary to run a baggr model.
Usage
baggr_compare(
  ...,
  what = "pooling",
  compare = c("groups", "hyperpars", "effects"),
  transform = NULL,
  prob = 0.95,
  plot = FALSE
)
Arguments
... | Either some (at least 1) objects of class |
what | One of |
compare | When plotting, choose between comparison of |
transform | a function (e.g. exp(), log()) to apply to the sample of group (and hyper, if |
prob | Width of uncertainty interval (defaults to 95%) |
plot | logical; calls plot.baggr_compare when running |
Details
If you pass parameters to the function you must specify what kind of comparison you want: either "pooling", which will run fully/partially/un-pooled models and then compare them, or "prior", which will generate estimates without the data and compare them to the model with the full data. For more details see baggr, specifically the ppd argument.
Value
an object of class baggr_compare
Author(s)
Witold Wiecek, Brice Green
See Also
plot.baggr_compare and print.baggr_compare for working with results of this function
Examples
# Most basic comparison between no, partial and full pooling
# (This will run the models)

# run model with just prior and then full data for comparison
# with the same arguments that are passed to baggr
prior_comparison <- baggr_compare(schools, model = 'rubin',
    #this is just for illustration -- don't set it this low normally!
    iter = 500,
    prior_hypermean = normal(0, 3), prior_hypersd = normal(0,2),
    prior_hypercor = lkj(2),
    what = "prior")
# print the aggregated treatment effects
prior_comparison
# plot the comparison of the two distributions
plot(prior_comparison)

# Now compare different types of pooling for the same model
pooling_comparison <- baggr_compare(schools, model = 'rubin',
    #this is just for illustration -- don't set it this low normally!
    iter = 500,
    prior_hypermean = normal(0, 3), prior_hypersd = normal(0,2),
    prior_hypercor = lkj(2),
    what = "pooling",
    # You can automatically plot:
    plot = TRUE)

# Compare existing models (you don't have to, but best to name them):
bg1 <- baggr(schools, pooling = "partial")
bg2 <- baggr(schools, pooling = "full")
baggr_compare("Partial pooling model" = bg1, "Full pooling" = bg2)

#' ...or simply draw from prior predictive dist (note ppd=T)
bg1 <- baggr(schools, ppd=TRUE)
bg2 <- baggr(schools, prior_hypermean = normal(0, 5), ppd=TRUE)
baggr_compare("Prior A, p.p.d."=bg1,
              "Prior B p.p.d."=bg2,
              compare = "effects")

# Compare how posterior predictive effect varies with e.g. choice of prior
bg1 <- baggr(schools, prior_hypersd = uniform(0, 20))
bg2 <- baggr(schools, prior_hypersd = normal(0, 5))
baggr_compare("Uniform prior on SD"=bg1,
              "Normal prior on SD"=bg2,
              compare = "effects", plot = TRUE)

# Models don't have to be identical. Compare different subsets of input data:
bg1_small <- baggr(schools[1:6,], pooling = "partial")
baggr_compare("8 schools model" = bg1,
              "First 6 schools" = bg1_small,
              plot = TRUE)

Plotting method in baggr package
Description
Extracts study effects from the baggr model and plots them, possibly next to the hypereffect estimate.
Usage
baggr_plot(
  bg,
  hyper = FALSE,
  style = c("intervals", "areas", "forest_plot"),
  transform = NULL,
  prob = 0.5,
  prob_outer = 0.95,
  vline = TRUE,
  order = TRUE,
  values_outer = TRUE,
  values_size = 4,
  values_digits = 1,
  ...
)
Arguments
bg | object of class |
hyper | logical; show hypereffect as the last row of the plot? Alternatively you can pass a colour for the hypermean row, e.g. |
style | |
transform | a function (e.g. |
prob | Probability mass for the inner interval in visualisation |
prob_outer | Probability mass for the outer interval in visualisation |
vline | logical; show vertical line through 0 in the plot? |
order | logical; sort groups by magnitude of treatment effect? |
values_outer | logical; use the interval corresponding to |
values_size | size of the text values in the plot when |
values_digits | number of significant digits to use when |
... | extra arguments to pass to the |
Value
ggplot2 object
Author(s)
Witold Wiecek; the visual style is based on the bayesplot package
See Also
bayesplot::MCMC-intervals for more information about bayesplot functionality; forest_plot for a typical meta-analysis alternative (which you can imitate using style = "forest_plot"); effect_plot for plotting treatment effects for a new group
Examples
fit <- baggr(schools, pooling = "none")
plot(fit, hyper = "red")
plot(fit, style = "areas", order = FALSE)
plot(fit, style = "forest_plot", order = FALSE)

Set, get, and replace themes for baggr plots
Description
These functions get, set, and modify the ggplot2 themes of the baggr plots. baggr_theme_get() returns a ggplot2 theme function for adding themes to a plot. baggr_theme_set() assigns a new theme for all plots of baggr objects. baggr_theme_update() edits a specific theme element for the current theme while holding the theme's other aspects constant. baggr_theme_replace() is used for wholesale replacing aspects of a plot's theme (see ggplot2::theme_get()).
Usage
baggr_theme_set(new = bayesplot::theme_default())baggr_theme_get()baggr_theme_update(...)baggr_theme_replace(...)Arguments
new | New theme to use for all baggr plots |
... | A named list of theme settings |
Details
Under the hood, many of the visualizations rely on the bayesplot package, and thus these leverage the bayesplot::bayesplot_theme_get() functions. By default, these match the bayesplot package's theme to make it easier to form cohesive graphs across this package and others. The trickiest of these to use is baggr_theme_replace; 9 times out of 10 you want baggr_theme_update.
Value
The get method returns the current theme, but all of the others invisibly return the old theme.
See Also
bayesplot::bayesplot_theme_get
Examples
# make plot look like default ggplots
library(ggplot2)
fit <- baggr(schools)
baggr_theme_set(theme_grey())
baggr_plot(fit)

# use baggr_theme_get to return theme elements for current theme
qplot(mtcars$mpg) + baggr_theme_get()

# update specific aspect of theme you are interested in
baggr_theme_update(text = element_text(family = "mono"))

# undo that silliness
baggr_theme_update(text = element_text(family = "serif"))

# update and replace are similar, but replace overwrites the
# whole element, update just edits the aspect of the element
# that you give it

# this will error:
# baggr_theme_replace(text = element_text(family = "Times"))
# baggr_plot(fit)
# because it deleted everything else to do with text elements

Generate individual-level binary outcome data from aggregate statistics
Description
This is a helper function that is typically used automatically by some of the baggr functions, such as when running model="logit" in baggr when summary-level data are supplied.
Usage
binary_to_individual( data, group = "group", covariates = c(), rename_group = TRUE)Arguments
data | A data frame with columns |
group | Column name storing group |
covariates | Column names in |
rename_group | If See |
Value
A data frame with columns group, outcome and treatment.
See Also
prepare_ma uses this function
Examples
df_yusuf <- read.table(text="
       trial  a n1i  c n2i
      Balcon 14  56 15  58
     Clausen 18  66 19  64
 Multicentre 15 100 12  95
      Barber 10  52 12  47
      Norris 21 226 24 228
      Kahler  3  38  6  31
     Ledwich  2  20  3  20
", header=TRUE)

bti <- binary_to_individual(df_yusuf, group = "trial")
head(bti)

# to go back to summary-level data
prepare_ma(bti, effect = "logOR")

# the last operation is equivalent to simply doing
prepare_ma(df_yusuf, group="trial", effect="logOR")

Bubble plots for meta-regression models
Description
Bubble plots for meta-regression models
Usage
bubble(bg, covariate, fit = TRUE, label = TRUE)
Arguments
bg | a |
covariate | one of the covariates present in the model |
fit | logical: show mean model prediction? (slope is mean estimate of |
label | logical: label study/group names? |
Value
A simple bubble plot in ggplot style. Dot sizes are proportional to the inverse of the variance of each study (more precise studies are larger).
See Also
labbe() for an exploratory plot of binary data in similar style
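A minimal sketch of usage, assuming a made-up study-level covariate added to the built-in schools data (the column name "quality" and its values are invented for illustration; the low iter value is only to keep the example fast):

# hypothetical meta-regression: attach an invented covariate to the schools data
schools_cov <- schools
schools_cov$quality <- c(1, 2, 3, 4, 5, 6, 7, 8)  # made-up study-level covariate
bg_cov <- baggr(schools_cov, model = "rubin", covariates = "quality", iter = 500)
bubble(bg_cov, covariate = "quality", fit = TRUE, label = TRUE)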
Chickens: impact of electromagnetic field on calcium ion efflux in chicken brains
Description
An experiment conducted by Blackman et al. (1988) and documented in the following GitHub repository by Vakar and Gelman. The dataset consists of a large number of experiments (tau, se.tau) repeated at varying wave frequencies. Sham experiments (mu, se.mu) are also included, allowing us to compare performance of models with and without control measurements.
Usage
chicks
Format
An object of class data.frame with 38 rows and 7 columns.
References
Blackman, C. F., S. G. Benane, D. J. Elliott, D. E. House, and M. M. Pollock. "Influence of Electromagnetic Fields on the Efflux of Calcium Ions from Brain Tissue in Vitro: A Three-Model Analysis Consistent with the Frequency Response up to 510 Hz." Bioelectromagnetics 9, no. 3 (1988): 215–27.
Convert inputs for baggr models
Description
Converts data to a list of inputs suitable for Stan models, checks integrity of data and suggests the appropriate default model if needed. Typically all of this is done automatically by baggr, so this function is included only for debugging or for running (custom) models "by hand".
Usage
convert_inputs(
  data,
  model,
  quantiles,
  effect = NULL,
  group = "group",
  outcome = "outcome",
  treatment = "treatment",
  cluster = NULL,
  covariates = c(),
  test_data = NULL,
  silent = FALSE
)
Arguments
data | data.frame with desired modelling input |
model | valid model name used by baggr; see baggr for allowed models if |
quantiles | vector of quantiles to use (only applicable if |
effect | Only matters for binary data, use |
group | name of the column with grouping variable |
outcome | name of column with outcome variable (designated as string) |
treatment | name of column with treatment variable |
cluster | name of the column with clustering variable for analysing c-RCTs |
covariates | Character vector with column names in |
test_data | same format as |
silent | Whether to print messages when evaluated |
Details
Typically this function is only called within baggr and you do not need to use it yourself. It can be useful to understand inputs or to run models which you modified yourself.
Value
R structure that's appropriate for use by baggr Stan models; group_label, model, effect and n_groups are included as attributes and are necessary for baggr to work correctly
Author(s)
Witold Wiecek
Examples
# simple meta-analysis example,
# this is the formatted input for Stan models in baggr():
convert_inputs(schools, "rubin")

Spike & slab example dataset
Description
Spike & slab example dataset
Usage
data_spike
Format
An object of class data.frame with 1500 rows and 4 columns.
Make predictive draws from baggr model
Description
The function effect_draw and its alias, posterior_predict, take the sample of hyperparameters from a baggr model (typically hypermean and hyper-SD, which you can see using treatment_effect) and draw values of new realisations of the treatment effect, i.e. an additional draw from the "population of studies". This can be used for both prior and posterior draws, depending on the baggr model. By default this is done for a single new effect, but for meta-regression models you can specify values of covariates with the newdata argument, same as in predict.
Usage
effect_draw(
  object,
  draws = NULL,
  newdata = NULL,
  transform = NULL,
  summary = FALSE,
  message = TRUE,
  interval = 0.95
)
Arguments
object | A |
draws | How many values to draw? The default is as long as the number of samples in the |
newdata | an optional data frame containing new values of covariates that were used when fitting the |
transform | a transformation (an R function) to apply to the result of a draw. |
summary | logical; if TRUE returns summary statistics rather than samples from the distribution; |
message | logical; use to disable messages prompted by using this function with no pooling models |
interval | uncertainty interval width (numeric between 0 and 1), if |
Details
The predictive distribution can be used to "combine" heterogeneity between treatment effects and uncertainty in the mean treatment effect. This is useful both in understanding the impact of heterogeneity (see Riley et al., 2011, for a simple introduction) and for study design, e.g. as priors in analysis of future data (since the draws can be seen as an expected treatment effect in a hypothetical study).
The default number of samples is the same as what is returned by the Stan model implemented in baggr (depending on such options as iter, chains, thin). If draws is larger than what is available in the Stan model, we draw values with replacement. This is not recommended and a warning is printed in these cases.
Under default settings in baggr, a posterior predictive distribution is obtained. But effect_draw can also be used for prior predictive distributions when setting ppd=T in baggr. The two outputs work exactly the same way.
If the baggr model used by the function is a meta-regression (i.e. a baggr model with covariates), the predicted values can be adjusted for known levels of fixed covariates by passing newdata (same as in predict). If no adjustment is made, the returned value should be interpreted as the effect when all covariates are 0.
Value
A vector (with draws values) for models with one treatment effect parameter, a matrix (draws rows and the same number of columns as the number of parameters) otherwise. If newdata are specified, an array is returned instead, where the first dimension corresponds to rows of newdata.
References
Riley, Richard D., Julian P. T. Higgins, and Jonathan J. Deeks. "Interpretation of Random Effects Meta-Analyses". BMJ 342 (10 February 2011).
See Also
treatment_effect returns samples from hypermean(s) and hyper-SD(s) which are used by this function
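A brief sketch of typical calls, using the built-in schools data (the low iter value is only to keep the example fast):

bg <- baggr(schools, model = "rubin", iter = 500)
# draw new realisations of the treatment effect ("new study" draws)
new_tau <- effect_draw(bg, draws = 1000)
# summarise the predictive distribution instead of returning raw draws
effect_draw(bg, summary = TRUE, interval = 0.9)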
Plot predictive draws from baggr model
Description
This function plots values from effect_draw, the predictive distribution (under default settings, posterior predictive), for one or more baggr objects.
Usage
effect_plot(..., transform = NULL)
Arguments
... | Object(s) of class baggr. If there is more than one, a comparison will be plotted and names of objects will be used as a plot legend (see examples). |
transform | a transformation to apply to the result, should be an R function; (this is commonly used when calling group_effects from other plotting or printing functions) |
Details
Under default settings in baggr a posterior predictive distribution is obtained. But effect_plot can also be used for prior predictive distributions when setting ppd=T in baggr. The two outputs work exactly the same, but labels will change to indicate this difference.
Value
A ggplot object.
See Also
effect_draw documents the process of drawing values; baggr_compare can be used as a shortcut for effect_plot with argument compare = "effects"
Examples
# A single effects plot
bg1 <- baggr(schools, prior_hypersd = uniform(0, 20))
effect_plot(bg1)

# Compare how posterior depends on the prior choice
bg2 <- baggr(schools, prior_hypersd = normal(0, 5))
effect_plot("Uniform prior on SD"=bg1,
            "Normal prior on SD"=bg2)

# Compare the priors themselves (ppd=T)
bg1_ppd <- baggr(schools, prior_hypersd = uniform(0, 20), ppd=TRUE)
bg2_ppd <- baggr(schools, prior_hypersd = normal(0, 5), ppd=TRUE)
effect_plot("Uniform prior on SD"=bg1_ppd,
            "Normal prior on SD"=bg2_ppd)

Effects of covariates on outcome in baggr models
Description
Effects of covariates on outcome in baggr models
Usage
fixed_effects(bg, summary = FALSE, transform = NULL, interval = 0.95)
Arguments
bg | a baggr model |
summary | logical; if |
transform | a transformation (R function) to apply to the result; (this is commonly used when calling from other plotting or printing functions) |
interval | uncertainty interval width (numeric between 0 and 1), if |
Value
A matrix: columns are covariate coefficients and rows are draws from the posterior distribution. The number of rows depends on the iterations in the MCMC (i.e. x in baggr(..., iter = x)).
See Also
treatment_effect for overall treatment effect across groups, group_effects for effects within each group, effect_draw and effect_plot for predicted treatment effect in a new group (which you can condition on fixed effects using the newdata argument)
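A short sketch, assuming a made-up study-level covariate (the "quality" column is invented for illustration; low iter keeps the example fast):

schools_cov <- schools
schools_cov$quality <- rnorm(8)  # made-up covariate for illustration
bg_cov <- baggr(schools_cov, covariates = "quality", iter = 500)
fixed_effects(bg_cov, summary = TRUE)  # mean and interval for the covariate coefficient
head(fixed_effects(bg_cov))            # raw posterior draws (one column per covariate)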
Draw a forest plot for a baggr model
Description
The forest plot functionality in baggr is a simple interface for calling forestplot. By default the forest plot displays raw (unpooled) estimates for groups and the treatment effect estimate underneath. This behaviour can be modified to display pooled group estimates.
Usage
forest_plot(
  bg,
  show = c("inputs", "posterior", "both", "covariates"),
  print = show,
  prob = 0.95,
  digits = 3,
  ...
)
Arguments
bg | a baggr class object |
show | if |
print | which values to print next to the plot: values of |
prob | width of the intervals (lines) for the plot |
digits | number of digits to display when printing out mean and SD in the plot |
... | other arguments passed to forestplot |
See Also
forestplot function and its vignette for examples; effect_plot and baggr_plot for non-forest plots of baggr results
Examples
bg <- baggr(schools, iter = 500)
forest_plot(bg)
forest_plot(bg, show = "posterior", print = "inputs", digits = 2)

Separate out ordering so we can test directly
Description
Separate out ordering so we can test directly
Usage
get_order(df_groups, hyper)
Arguments
df_groups | data.frame of group effects used in plot.baggr_compare |
hyper | show parameter estimate? same as in plot.baggr_compare |
Details
Given a set of effects measured by models, identifies the model which has the biggest range of estimates and ranks groups by those estimates, returning the order.
Extract baggr study/group effects
Description
Given a baggr object, returns the raw MCMC draws of the posterior for each group's effect, or a summary of these draws. (We use "group" and "study" interchangeably.) If there are no covariates in the model, this effect is a single random variable. If there are covariates, the group effect is a sum of the effect of covariates (fixed effects) and the study-specific random variable (random effects). This is an internal function currently used as a helper for plotting and printing of results.
Usage
group_effects(
  bg,
  summary = FALSE,
  transform = NULL,
  interval = 0.95,
  random_only = FALSE,
  rename_int = FALSE
)

study_effects(
  bg,
  summary = FALSE,
  transform = NULL,
  interval = 0.95,
  random_only = FALSE,
  rename_int = FALSE
)
Arguments
bg | baggr object |
summary | logical; if |
transform | a transformation to apply to the result, should be an R function; (this is commonly used when calling |
interval | uncertainty interval width (numeric between 0 and 1), if summarising |
random_only | logical; for meta-regression models, should fixed_effects be included in the returned group effect? |
rename_int | logical; if |
Details
If summary = TRUE, the returned object contains, for each study or group, the following 5 values: the posterior medians, the lower and upper bounds of the uncertainty intervals using the central posterior credible interval of width specified in the argument interval, the posterior mean, and the posterior standard deviation.
Value
Either an array with MCMC samples (if summary = FALSE) or a summary of these samples (if summary = TRUE). For arrays the three dimensions are: N samples, N groups and N effects (equal to 1 for the basic models).
See Also
fixed_effects for effects of covariates on outcome. To extract random effects when covariates are present, you can use either random_effects or, equivalently, group_effects(random_only=TRUE).
Examples
fit1 <- baggr(schools)
group_effects(fit1, summary = TRUE, interval = 0.5)

Check if something is a baggr_cv object
Description
Check if something is a baggr_cv object
Usage
is.baggr_cv(x)
Arguments
x | object to check |
L'Abbe plot for binary data
Description
This plot shows relationship between proportions of events in control and treatment groups in binary data.
Usage
labbe(
  data,
  group = "group",
  plot_model = FALSE,
  labels = TRUE,
  shade_se = c("rr", "or", "none")
)
Arguments
data | a data frame with binary data (must have columns |
group | a character string specifying group names (e.g. study names), used for labels; |
plot_model | if |
labels | if |
shade_se | if |
Value
A ggplot object
See Also
vignette("baggr_binary") for an illustrative example
Compare LOO CV models
Description
Given multiple loocv outputs, calculate differences in their expected log predictive density.
Usage
loo_compare(...)
Arguments
... | A series of |
Value
Returns a series of comparisons in order of the arguments provided, as Model 1 - Model N for N loocv objects provided. Model 1 corresponds to the first object passed and Model N corresponds to the Nth object passed.
See Also
loocv for fitting LOO CV objects and an explanation of the procedure; the loo package by Vehtari et al. (available on CRAN) for a more comprehensive approach
Examples
## Not run:
# 2 models with more/less informative priors -- this will take a while to run
cv_1 <- loocv(schools, model = "rubin", pooling = "partial")
cv_2 <- loocv(schools, model = "rubin", pooling = "partial",
              prior_hypermean = normal(0, 5), prior_hypersd = cauchy(0, 2.5))
loo_compare("Default prior" = cv_1, "Alternative prior" = cv_2)
## End(Not run)

Leave one group out cross-validation for baggr models
Description
Performs exact leave-one-group-out cross-validation on a baggr model.
Usage
loocv(data, return_models = FALSE, ...)
Arguments
data | Input data frame - same as for the baggr function. |
return_models | logical; if FALSE, summary statistics will be returned and the models discarded; if TRUE, a list of models will be returned alongside summaries |
... | Additional arguments passed to baggr. |
Details
The values returned by loocv() can be used to understand how excluding any one group affects the overall result, as well as how well the model predicts the omitted group. LOO-CV approaches are a good general practice for comparing Bayesian models, not only in meta-analysis.
To learn about cross-validation see Gelman et al 2014.
This function automatically runs K baggr models, where K is the number of groups (e.g. studies), leaving out one group at a time. For each run, it calculates the expected log predictive density (ELPD) for that group (see Gelman et al 2013). (In the logistic model, where the proportion in the control group is unknown, each of the groups is divided into data for controls, which is kept for estimation, and data for treated units, which is not used for estimation but only for calculating predictive density. This is akin to fixing the baseline risk and only trying to infer the odds ratio.)
The main output is the cross-validation information criterion, or -2 times the ELPD summed over K models. (We sum the terms as we are working with logarithms.) This is related to, and often approximated by, the Watanabe-Akaike Information Criterion. When comparing models, smaller values mean a better fit.
For running more computation-intensive models, consider setting the mc.cores option before running loocv, e.g. options(mc.cores = 4) (by default baggr runs 4 MCMC chains in parallel). As a default, rstan runs "silently" (refresh=0). To see sampling progress, please set e.g. loocv(data, refresh = 500).
Value
log predictive density value, an object of class baggr_cv; the full model, prior values and lpd of each model are also returned. These can be examined by using the attributes() function.
Author(s)
Witold Wiecek
References
Gelman, Andrew, Jessica Hwang, and Aki Vehtari. "Understanding Predictive Information Criteria for Bayesian Models." Statistics and Computing 24, no. 6 (November 2014): 997–1016.
See Also
loo_compare for comparison of many LOO CV results; you can print and plot output via plot.baggr_cv and print.baggr_cv
Examples
## Not run:
# even simple examples may take a while
cv <- loocv(schools, pooling = "partial")
print(cv)       # returns the lpd value
attributes(cv)  # more information is included in the object
## End(Not run)

7 studies on effect of microcredit supply
Description
This dataframe contains the data used in Meager (2019) to estimate hierarchical models on the data from 7 randomized controlled trials of expanding access to microcredit.
Usage
microcredit
Format
A data frame with 40267 rows, 7 study identifiers and 7 outcomes
Details
The columns include the group indicator which gives the name of the lead author on each of the respective studies, the value of the 6 outcome variables of most interest (consumer durables spending, business expenditures, business profit, business revenues, temptation goods spending and consumption spending), all of which are standardised to USD PPP in 2009 dollars per two weeks (these are flow variables), and finally a treatment assignment status indicator.
The dataset has not otherwise been cleaned and therefore includes NAs and other issues common to real-world datasets.
For more information on how and why these variables were chosen and standardised, see Meager (2019) or consult the associated code repository, which includes the standardisation scripts: link
References
Meager, Rachael (2019) Understanding the average impact of microcredit expansions: A Bayesian hierarchical analysis of seven randomized experiments. American Economic Journal: Applied Economics, 11(1), 57-91.
Simplified version of the microcredit dataset.
Description
This dataframe contains the data used in Meager (2019) to estimate hierarchical models on the data from 7 randomized controlled trials of expanding access to microcredit.
Usage
microcredit_simplified
Format
A data frame with 14224 rows, 7 study identifiers and 1 outcome
Details
The columns include the group indicator which gives the name of the lead author on each of the respective studies, the value of the household consumption spending standardised to USD PPP in 2009 dollars per two weeks (these are flow variables), and finally a treatment assignment status indicator.
The dataset has not otherwise been cleaned and therefore includes NAs and other issues common to real data.
For more information on how and why these variables were chosen and standardised, see Meager (2019) or consult the associated code repository: link
This dataset includes only complete cases and only the consumption outcome variable.
References
Meager, Rachael (2019) Understanding the average impact of microcredit expansions: A Bayesian hierarchical analysis of seven randomized experiments. American Economic Journal: Applied Economics, 11(1), 57-91.
"Mean and interval" function, including other summaries, calculated for matrix (by column) or vector
Description
This function is just a convenient shorthand for getting typical summary statistics.
Usage
mint(y, int = 0.95, digits = NULL, median = FALSE, sd = FALSE)
Arguments
y | matrix or a vector; for matrices, |
int | probability interval (default is 95 percent) to calculate |
digits | number of significant digits to round values by. |
median | return median value? |
sd | return SD? |
Examples
mint(rnorm(100, 12, 5))

Correlation between mu and tau in a baggr model
Description
Correlation between mu and tau in a baggr model
Usage
mutau_cor(bg, summary = FALSE, interval = 0.95)
Arguments
bg | a baggr model where |
summary | logical; if TRUE returns summary statistics as explained below. |
interval | uncertainty interval width (numeric between 0 and 1), if summarising |
Value
a vector of values
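A sketch of typical usage, fitting a "mu & tau" model to the built-in microcredit_simplified data as in the baggr() examples (the low iter value is only to keep the example fast):

ms <- prepare_ma(microcredit_simplified, outcome = "consumption")
bg_mt <- baggr(ms, model = "mutau", iter = 500)
mutau_cor(bg_mt, summary = TRUE)  # posterior summary of the mu-tau correlation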
Plotting method for baggr outputs
Description
Using the generic plot() on baggr output invokes the baggr_plot visual. See therein for customisation options. Note that the plot output is a ggplot2 object.
Usage
## S3 method for class 'baggr'
plot(x, ...)
Arguments
x | object of class |
... | optional arguments, see |
Value
ggplot2 object from baggr_plot
Author(s)
Witold Wiecek
Plot method for baggr_compare models
Description
Allows plots that compare multiple baggr models that were passed for comparison purposes to baggr_compare, or run automatically by baggr_compare.
Usage
## S3 method for class 'baggr_compare'
plot(
  x,
  compare = x$compare,
  style = "areas",
  grid_models = FALSE,
  grid_parameters = TRUE,
  prob = x$prob,
  hyper = TRUE,
  transform = NULL,
  order = F,
  vline = FALSE,
  add_values = FALSE,
  values_digits = 2,
  values_size = 4,
  ...
)
Arguments
x | baggr_compare model to plot |
compare | When plotting, choose between comparison of |
style | What kind of plot to display (if |
grid_models | If |
grid_parameters | if |
prob | Width of uncertainty interval (defaults to 95%) |
hyper | Whether to plot pooled treatment effect in addition to group treatment effects when |
transform | a function (e.g. exp(), log()) to apply to the values of group (and hyper, if hyper=TRUE) effects before plotting |
order | Whether to sort by median treatment effect by group. If yes, medians from the model with the largest range of estimates are used for sorting. If not, groups are shown alphabetically. |
vline | logical; show vertical line through 0 in the plot? |
add_values | logical; if TRUE, values will be printed next to the plot, in a style that's similar to what is done for forest plots |
values_digits | number of significant digits to use when printing values, |
values_size | size of font for the values, if |
... | ignored for now, may be used in the future |
Plotting method for results of baggr LOO analyses
Description
Plotting method for results of baggr LOO analyses
Usage
## S3 method for class 'baggr_cv'
plot(x, y, ..., add_values = TRUE)
Arguments
x | output from loocv that has |
y | Unused, ignore |
... | Unused, ignore |
add_values | logical; if |
Value
ggplot2 plot in a similar style to baggr_compare default plots
plot quantiles
Description
Plot results for baggr quantile models. Displays results facetted per group. Results are ggplot2 plots and can be modified.
Usage
plot_quantiles(fit, ncol, hline = TRUE)
Arguments
fit | an object of class |
ncol | number of columns for the plot; defaults to half of number of groups |
hline | logical; plots a line through 0 |
Value
ggplot2 object
Examples
## Not run:
bg <- baggr(microcredit_simplified, model = "quantiles",
            quantiles = c(0.25, 0.50, 0.75),
            iter = 1000, refresh = 0,
            outcome = "consumption")

# vanilla plot
plot_quantiles(bg)[[1]]

plot_quantiles(bg, hline = TRUE)[[2]] +
  ggplot2::coord_cartesian(ylim = c(-2, 5)) +
  ggplot2::ggtitle("Works like a ggplot2 plot!")
## End(Not run)

Pooling metrics and related statistics for baggr
Description
Compute statistics relating to pooling in a given baggr meta-analysis model, for either the entire model or individual groups, such as the pooling statistic by Gelman & Pardoe (2006), I-squared, H-squared, or study weights; heterogeneity is a shorthand for pooling(type = "total") and weights is a shorthand for pooling(metric = "weights").
Usage
pooling( bg, metric = c("pooling", "isq", "hsq", "weights"), type = c("groups", "total"), summary = TRUE)heterogeneity( bg, metric = c("pooling", "isq", "hsq", "weights"), summary = TRUE)## S3 method for class 'baggr'weights(object, ...)Arguments
bg | a baggr model |
metric | |
type | In |
summary | logical; if |
object | baggr model for which to calculate group (study) weights |
... | Unused, please ignore. |
Details
The pooling statistic (Gelman & Pardoe, 2006) describes the extent to which group-level estimates of treatment effect are "pooled" toward the average treatment effect in the meta-analysis model. If pooling = "none" or "full" (which you specify when calling baggr), then the values are always 0 or 1, respectively. If pooling = "partial", the value is somewhere between 0 and 1. We can distinguish between pooling of individual groups and overall pooling in the model.
In many contexts, e.g. medical statistics, it is typical to report 1-P, called I^2 (see Higgins and Thompson, 2002; sometimes another statistic, H^2 = 1 / P, is used). Higher values of I-squared indicate higher heterogeneity; Von Hippel (2015) provides useful details for I-squared calculations (and some issues related to it, especially in frequentist models). See Gelman & Pardoe (2006), Section 1.1, for a short explanation of how the R^2 statistic relates to the pooling metric.
Group pooling
This is the calculation done by pooling() if type = "groups" (default). In a partial pooling model (see baggr and above), group k (e.g. study) has a standard error of the treatment effect estimate, se_k. The treatment effect (across k groups) is variable across groups, with hyper-SD parameter \sigma_{\tau}.
The quantity of interest is the ratio of variation in treatment effects to the total variation. By convention, we subtract it from 1, to obtain a pooling metric P.
p = 1 - \sigma_{\tau}^2 / (\sigma_{\tau}^2 + se_k^2)
If p < 0.5, the variation across studies is higher than the variation within studies.
Values close to 1 indicate nearly full pooling; variation within studies dominates.
Values close to 0 indicate no pooling; variation across studies dominates.
Note that, since \sigma_{\tau}^2 is a Bayesian parameter (rather than a single fixed value), P is also a parameter. It is typical for P to have very high dispersion, as in many cases we cannot precisely estimate \sigma_{\tau}. To obtain samples from the distribution of P (rather than summarised values), set summary=FALSE.
Study weights
Contributions of each group (e.g. each study) to the mean meta-analysis estimate can be calculated by computing, for each study, w_k: the inverse of the sum of the group-specific SE squared and the between-study variation. To obtain weights, this vector (across all studies) has to be normalised to 1, i.e. w_k/sum(w_k) for each k.
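As a sketch of this calculation (using the same notation as above, with \sigma_{\tau}^2 for the between-study variance), the unnormalised and normalised weights can be written as:

w_k = 1 / (se_k^2 + \sigma_{\tau}^2), and the normalised weight of study k is w_k / \sum_j w_j.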
The SE is typically treated as a fixed quantity (and is usually reported alongside the reported point estimate), but the between-study variance is a model parameter, hence the weights themselves are also random variables.
Overall pooling in the model
Typically researchers want to report a single measure from the model, relating to heterogeneity across groups. This is calculated by either pooling(mymodel, type = "total") or simply heterogeneity(mymodel).
Formulae for the calculations below are provided in the main package vignette and are almost analogous to the group calculation above, but using the mean variance across all studies. In other words, pooling P is simply the ratio of the expected within-study variance term to total variance.
The typical study variance is calculated following Eqn. (1) and (9) in Higgins and Thompson (see References). We use this formulation to make our pooling and I^2 comparable with other meta-analysis implementations, but users should be aware that this is only one possibility for calculating that "typical" within-study variance.
Same as for the group-specific estimates, P is a Bayesian parameter and its dispersion can be high.
Value
Matrix with mean and intervals for the chosen pooling metric, each row corresponding to one meta-analysis group.
References
Gelman, Andrew, and Iain Pardoe. "Bayesian Measures of Explained Variance and Pooling in Multilevel (Hierarchical) Models." Technometrics 48, no. 2 (May 2006): 241-51.
Higgins, Julian P. T., and Simon G. Thompson. "Quantifying Heterogeneity in a Meta-Analysis." Statistics in Medicine, vol. 21, no. 11, June 2002, pp. 1539-58.
Hippel, Paul T von. "The Heterogeneity Statistic I2 Can Be Biased in Small Meta-Analyses." BMC Medical Research Methodology 15 (April 14, 2015).
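A brief sketch of typical calls (schools data; the low iter value is only to keep the example fast):

bg <- baggr(schools, pooling = "partial", iter = 500)
pooling(bg)                  # group-level pooling metric (mean and interval)
pooling(bg, metric = "isq")  # I-squared per group
heterogeneity(bg)            # overall pooling, same as pooling(bg, type = "total")
weights(bg)                  # study weights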
Convert individual- to summary-level data in meta-analyses
Description
Allows for one-way conversion from full to summary data, or for calculation of effects for binary data. Usually used before calling baggr. Input must be pre-formatted appropriately.
Usage
prepare_ma(
  data,
  effect = c("mean", "logOR", "logRR", "RD"),
  rare_event_correction = 0.25,
  correction_type = c("single", "all"),
  log = FALSE,
  cfb = FALSE,
  summarise = TRUE,
  treatment = "treatment",
  baseline = NULL,
  group = "group",
  outcome = "outcome",
  pooling = FALSE
)
Arguments
data | either a data.frame of individual-level observations with columns for outcome (numeric), treatment (values 0 and 1) and group (numeric, character or factor); or, a data frame with binary data (must have columns |
effect | what effect to calculate? a |
rare_event_correction | This correction is used when working with binary data (effect |
correction_type | If |
log | logical; log-transform the outcome variable? |
cfb | logical; calculate change from baseline? If yes, the outcome variable is taken as a difference between values in |
summarise | logical; |
treatment | name of column with treatment variable; can be binary ora factor (if using multiple treatment columns) |
baseline | name of column with baseline variable |
group | name of the column with grouping variable |
outcome | name of column with outcome variable |
pooling | Internal use only, please ignore |
Details
The conversions done by this function are not typically needed and may happen automatically when data is given to baggr. However, this function can be used to explicitly convert from full to reduced (summarised) data without analysing it in any model. It can be useful for examining your data and generating summary tables.
If multiple operations are performed, they are taken in this order:
conversion to log scale,
calculating change from baseline,
summarising data (using appropriate effect)
Value
If you summarise: a data.frame with columns for group, tau and se.tau (for effect = "mean", also baseline means; for "logRR" or "logOR" also a, b, c, d, which correspond to typical contingency table notation, that is: a = events in exposed; b = no events in exposed, c = events in unexposed, d = no events in unexposed). If you do not summarise data, individual-level data will be returned, but some columns may be renamed or transformed (see the arguments above).
Author(s)
Witold Wiecek
See Also
convert_inputs for how any type of data is (internally) converted into a list of Stan inputs; vignette baggr_binary for more details about rare event corrections
Examples
# Example of working with binary outcomes data
# Make up some individual-level data first:
df_rare <- data.frame(group = paste("Study", LETTERS[1:5]),
                      a = c(0, 2, 1, 3, 1), c = c(2, 2, 3, 3, 5),
                      n1i = c(120, 300, 110, 250, 95),
                      n2i = c(120, 300, 110, 250, 95))
df_rare_ind <- binary_to_individual(df_rare)

# Calculate ORs; default rare event correction will be applied
prepare_ma(df_rare_ind, effect = "logOR")

# Add 0.5 to all rows
prepare_ma(df_rare_ind, effect = "logOR",
           correction_type = "all",
           rare_event_correction = 0.5)

Prepare prior values for Stan models in baggr
Description
This is an internal function called by baggr. You can use it for debugging or to run modified models. It extracts and prepares priors passed by the user. Then, if any necessary priors are missing, it sets them automatically and notifies the user about these automatic choices.
Usage
prepare_prior(
  prior,
  data,
  stan_data,
  model,
  pooling,
  covariates,
  quantiles = c(),
  silent = FALSE
)
Arguments
prior | |
data | |
stan_data | list of inputs that will be used by the sampler; this is already pre-obtained through convert_inputs |
model | same as in baggr |
pooling | same as in baggr |
covariates | same as in baggr |
quantiles | same as in baggr |
silent | same as in baggr |
Value
A named list with prior values that can be appended to stan_data and passed to a Stan model.
S3 print method for objects of class baggr (model fits)
Description
This prints a concise summary of the main baggr model features. More info is included in the summary of the model and its attributes.
Usage
## S3 method for class 'baggr'
print(x, exponent = FALSE, digits = 2, prob = 0.95, group, fixed = TRUE, ...)
Arguments
x | object of class |
exponent | if |
digits | Number of significant digits to print. |
prob | Width of uncertainty interval (defaults to 95%) |
group | logical; print group effects? If unspecified, they are printed only if less than 20 groups are present |
fixed | logical: print fixed effects? |
... | currently unused by this package: further arguments passed to or from other methods ( |
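A minimal sketch of the printing options documented above (the low iter value is only to keep the example fast):

bg <- baggr(schools, iter = 500)
print(bg)                          # default summary
print(bg, digits = 3, prob = 0.9)  # more digits, 90% intervals
print(bg, group = FALSE)           # suppress group-level estimates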
Print method for baggr_compare models
Description
Print method for baggr_compare models
Usage
## S3 method for class 'baggr_compare'
print(x, digits, ...)
Arguments
x | baggr_compare model |
digits | number of significant digits for effect estimates |
... | other parameters passed to print |
Print baggr cv objects nicely
Description
Print baggr cv objects nicely
Usage
## S3 method for class 'baggr_cv'
print(x, digits = 3, ...)
Arguments
x | |
digits | number of digits to print |
... | Unused, ignore |
Print baggr_cv comparisons
Description
Print baggr_cv comparisons
Usage
## S3 method for class 'compare_baggr_cv'
print(x, digits = 3, ...)
Arguments
x | baggr_cv comparison to print |
digits | number of digits to print |
... | additional arguments for s3 consistency |
Output a distribution as a string
Description
Used for printing nicely formatted outputs when reporting results etc.
Usage
print_dist(dist)
Arguments
dist | distribution name, one of priors |
Value
Character string like normal(0, 10^2).
Prior distributions in baggr
Description
This page provides a list of all available distributions that can be used to specify priors in baggr(). These convenience functions are designed to allow the user to write the priors in the most "natural" way when implementing them in baggr. Apart from passing on the arguments, their only other role is to perform a rudimentary check if the distribution is specified correctly.
Usage
multinormal(location, Sigma)

lkj(shape, order = NULL)

normal(location, scale)

lognormal(mu, sigma)

student_t(nu, mu, sigma)

cauchy(location, scale)

uniform(lower, upper)
Arguments
location | Mean for normal and multivariate normal (in which case |
Sigma | Variance-covariance matrix for multivariate normal. |
shape | Shape parameter for LKJ |
order | Order of LKJ matrix (typically it does not need to be specified, as it is inferred directly in the model) |
scale | SD for Normal, scale for Cauchy |
mu | mean of ln(X) for lognormal or location for Student's generalised T |
sigma | SD of ln(X) for lognormal or scale for Student's generalised T |
nu | degrees of freedom for Student's generalised T |
lower | Lower bound for Uniform |
upper | Upper bound for Uniform |
Details
The prior choice in baggr is done via distinct arguments for each type of prior, e.g. prior_hypermean, or a named list of several priors passed to prior. See the examples below.
Notation for priors is "plain-text", in that you can write the distributions asnormal(5,10),uniform(0,100) etc.
Different parameters admit different priors (see baggr for explanations of what the different prior_ arguments do):
prior_hypermean, prior_control, and prior_beta will take "normal", "uniform", "lognormal", and "cauchy" input for scalars. For a vector hypermean (see the "mutau" model), it will take any of these arguments and apply them independently to each component of the vector, or it can also take a "multinormal" argument (see the example below).
prior_hypersd, prior_control_sd, and prior_sigma will take "normal", "uniform", and "lognormal", but negative parts of the distribution are truncated.
prior_hypercor allows "lkj" input (see Lewandowski et al.).
Author(s)
Witold Wiecek, Rachael Meager
References
Lewandowski, Daniel, Dorota Kurowicka, and Harry Joe. "Generating Random Correlation Matrices Based on Vines and Extended Onion Method." Journal of Multivariate Analysis 100, no. 9 (October 1, 2009): 1989-2001.
Examples
# (these are not the recommended priors -- for syntax illustration only)

# change the priors for 8 schools:
baggr(schools, model = "rubin", pooling = "partial",
      prior_hypermean = normal(5,5), prior_hypersd = normal(0,20))

# passing priors as a list
custom_priors <- list(hypercor = lkj(1), hypersd = normal(0,10),
                      hypermean = multinormal(c(0,0), matrix(c(10,3,3,10),2,2)))
microcredit_summary_data <- prepare_ma(microcredit, outcome = "consumption")
baggr(microcredit_summary_data, model = "mutau",
      pooling = "partial", prior = custom_priors)

Extract only random (treatment) effects from a baggr model
Description
This function is a shortcut for group_effects(random_only=TRUE, ...). Note that this is different to cluster random effects in individual-level data: by random effects we mean the random component of the group-wide effect.
Usage
random_effects(...)
Arguments
... | arguments passed to group_effects |
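A short sketch (for a model without covariates this is equivalent to group_effects(); the low iter value is only for illustration):

bg <- baggr(schools, iter = 500)
random_effects(bg, summary = TRUE)
# equivalent to:
group_effects(bg, summary = TRUE, random_only = TRUE)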
8 schools example
Description
A classic example of aggregate-level continuous data in Bayesian hierarchical modelling. This dataframe contains a column of estimated treatment effects of an SAT prep program implemented in 8 different schools in the US, and a column of estimated standard errors.
Usage
schools
Format
An object of class data.frame with 8 rows and 3 columns.
Details
See Gelman et al (1995), Chapter 5, for context and applied example.
References
Gelman, Andrew, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis. Taylor & Francis, 1995.
Add prior values to Stan input for baggr
Description
Add prior values to Stan input for baggr
Usage
set_prior_val(target, name, prior, p = 1, to_array = FALSE)
Arguments
target | list object (Stan input) to which prior will be added |
name | prior name, like |
prior | one of the prior distributions allowed by baggr, like normal |
p | number of repeats of the prior, i.e. when P i.i.d. priors are set for a P-dimensional parameter as in the "mu & tau" type of model |
to_array | for some models where |
Plot single comparison ggplot in baggr_compare style
Description
Plot single comparison ggplot in baggr_compare style
Usage
single_comp_plot(
  df,
  title = "",
  legend = "top",
  ylab = "",
  grid = F,
  points = FALSE,
  add_values = FALSE,
  values_digits = 1,
  values_size = 4
)
Arguments
df | data.frame with columns |
title | |
legend | |
ylab | Y axis label |
grid | logical; if |
points | you can optionally specify a ( |
add_values | logical; if |
values_digits | number of significant digits to use when printing values, |
values_size | size of font for the values, if |
Value
a ggplot2 object
Average treatment effects in a baggr model
Description
The most general treatment_effect displays both hypermean and hyper-SD (as a list of length 2), whereas hypermean and hypersd can be used as shorthands.
Usage
treatment_effect(
  bg,
  summary = FALSE,
  transform = NULL,
  interval = 0.95,
  message = TRUE
)

hypermean(
  bg,
  transform = NULL,
  interval = 0.95,
  message = FALSE,
  summary = TRUE
)

hypersd(bg, transform = NULL, interval = 0.95, message = FALSE, summary = TRUE)
Arguments
bg | a baggr model |
summary | logical; if TRUE returns summary statistics as explained below. |
transform | a transformation to apply to the result, should be an R function; (this is commonly used when calling |
interval | uncertainty interval width (numeric between 0 and 1), if summarising |
message | logical; use to disable messages prompted by using this function with no pooling models |
Functions
treatment_effect(): A list with 2 vectors (corresponding to MCMC samples), tau (mean effect) and sigma_tau (SD). If summary=TRUE, both vectors are summarised as mean and lower/upper bounds according to interval.
hypermean(): The hypermean of a baggr model, shorthand for treatment_effect(x, s=T)[[1]]
hypersd(): The hyper-SD of a baggr model, shorthand for treatment_effect(x, s=T)[[2]]
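A brief sketch of typical calls (schools data; the low iter value is only to keep the example fast):

bg <- baggr(schools, iter = 500)
treatment_effect(bg, summary = TRUE)  # hypermean and hyper-SD with intervals
hypermean(bg)                         # shorthand for the first element
hypersd(bg, interval = 0.9)           # shorthand for the second element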
Yusuf et al: beta-blockers and heart attacks
Description
This replicates Table 6 from the famous Yusuf et al. (1985), removing one trial (Snow) that had NA observations only. The paper is notable for its application of rare-event corrections, which we discuss more in the package vignette baggr_binary.
Usage
yusuf
Format
An object of class data.frame with 21 rows and 5 columns.
References
Yusuf, S., Peto, R., Lewis, J., Collins, R., & Sleight, P. (1985). Beta blockade during and after myocardial infarction: An overview of the randomized trials. Progress in Cardiovascular Disease, 27(5), 335–371.