One of the most common goals of a time series analysis is to use the observed series to inform predictions for future observations. We will refer to this task of predicting a sequence of \(M\) future observations as \(M\)-step-ahead prediction (\(M\)-SAP). Fortunately, once we have fit a model and can sample from the posterior predictive distribution, it is straightforward to generate predictions as far into the future as we want. It is also straightforward to evaluate the \(M\)-SAP performance of a time series model by comparing the predictions to the observed sequence of \(M\) future data points once they become available.
Unfortunately, we are often in the position of having to use a model to inform decisions *before* we can collect the future observations required for assessing the predictive performance. If we have many competing models we may also need to first decide which of the models (or which combination of the models) we should rely on for predictions. In these situations the best we can do is to use methods for approximating the expected predictive performance of our models using only the observations of the time series we already have.
If there were no time dependence in the data, or if the focus were on assessing the non-time-dependent part of the model, we could use methods like leave-one-out cross-validation (LOO-CV). For a data set with \(N\) observations, we refit the model \(N\) times, each time leaving out one of the \(N\) observations and assessing how well the model predicts the left-out observation. LOO-CV is very expensive computationally in most realistic settings, but the Pareto smoothed importance sampling (PSIS; Vehtari et al., 2017, 2024) algorithm provided by the **loo** package allows for approximating exact LOO-CV with PSIS-LOO-CV. PSIS-LOO-CV requires only a single fit of the full model and comes with diagnostics for assessing the validity of the approximation.
With a time series we can do something similar to LOO-CV but, except in a few cases, it does not make sense to leave out observations one at a time because then we are allowing information from the future to influence predictions of the past (i.e., times \(t + 1, t + 2, \ldots\) should not be used to predict for time \(t\)). To apply the idea of cross-validation to the \(M\)-SAP case, instead of leave-one-out cross-validation we need some form of *leave-future-out cross-validation* (LFO-CV). As we will demonstrate in this case study, LFO-CV does not refer to one particular prediction task but rather to various possible cross-validation approaches that all involve some form of prediction for new time series data. Like exact LOO-CV, exact LFO-CV requires refitting the model many times to different subsets of the data, which is computationally very costly for most nontrivial examples, in particular for Bayesian analyses where refitting the model means estimating a new posterior distribution rather than a point estimate.
Although PSIS-LOO-CV provides an efficient approximation to exact LOO-CV, until now there has not been an analogous approximation to exact LFO-CV that drastically reduces the computational burden while also providing informative diagnostics about the quality of the approximation. In this case study we present PSIS-LFO-CV, an algorithm that typically requires refitting the time series model only a small number of times and makes LFO-CV tractable for many more realistic applications than previously possible.
More details can be found in our paper about approximate LFO-CV (Bürkner, Gabry, & Vehtari, 2020), which is available as a preprint on arXiv (https://arxiv.org/abs/1902.06281).
Assume we have a time series of observations \(y = (y_1, y_2, \ldots, y_N)\) and let \(L\) be the *minimum* number of observations from the series that we will require before making predictions for future data. Depending on the application and how informative the data is, it may not be possible to make reasonable predictions for \(y_{i+1}\) based on \((y_1, \dots, y_{i})\) until \(i\) is large enough so that we can learn enough about the time series to predict future observations. Setting \(L = 10\), for example, means that we will only assess predictive performance starting with observation \(y_{11}\), so that we always have at least 10 previous observations to condition on.
In order to assess \(M\)-SAP performance we would like to compute the predictive densities
\[p(y_{i+1:M} \,|\, y_{1:i}) = p(y_{i+1}, \ldots, y_{i+M} \,|\, y_{1}, \ldots, y_{i})\]
for each \(i \in \{L, \ldots, N - M\}\). The quantities \(p(y_{i+1:M} \,|\, y_{1:i})\) can be computed with the help of the posterior distribution \(p(\theta \,|\, y_{1:i})\) of the parameters \(\theta\) conditional on only the first \(i\) observations of the time series:
\[p(y_{i+1:M} \,| \, y_{1:i}) = \int p(y_{i+1:M} \,| \, y_{1:i}, \theta) \, p(\theta\,|\,y_{1:i})\,d\theta.\]
Having obtained \(S\) draws \((\theta_{1:i}^{(1)}, \ldots, \theta_{1:i}^{(S)})\) from the posterior distribution \(p(\theta \,|\, y_{1:i})\), we can estimate \(p(y_{i+1:M} \,|\, y_{1:i})\) as
\[p(y_{i+1:M} \,|\, y_{1:i}) \approx \frac{1}{S}\sum_{s=1}^S p(y_{i+1:M}\,|\, y_{1:i}, \theta_{1:i}^{(s)}).\]
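To make this estimator concrete, here is a minimal R sketch (not part of the original analysis; `loglik` is a hypothetical placeholder matrix) that computes the joint \(M\)-step-ahead log predictive density from an \(S \times M\) matrix of pointwise log densities:

```r
# Hypothetical inputs: loglik[s, j] = log p(y_{i+j} | y_{1:i}, theta^(s))
S <- 4000
M <- 4
loglik <- matrix(rnorm(S * M, mean = -1), nrow = S)  # placeholder values

joint_ll <- rowSums(loglik)  # per-draw log density of all M future observations
# numerically stable evaluation of log(mean(exp(joint_ll)))
max_ll <- max(joint_ll)
log_pred_density <- max_ll + log(mean(exp(joint_ll - max_ll)))
```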
Unfortunately, the math above makes use of the posterior distributions from many different fits of the model to different subsets of the data. That is, obtaining the predictive density \(p(y_{i+1:M} \,|\, y_{1:i})\) requires fitting a model to only the first \(i\) data points, and we will need to do this for every value of \(i\) under consideration (all \(i \in \{L, \ldots, N - M\}\)).
To reduce the number of models that need to be fit for the purpose of obtaining each of the densities \(p(y_{i+1:M} \,|\, y_{1:i})\), we propose the following algorithm. First, we refit the model using the first \(L\) observations of the time series and then perform a single exact \(M\)-step-ahead prediction step for \(p(y_{L+1:M} \,|\, y_{1:L})\). Recall that \(L\) is the minimum number of observations we have deemed acceptable for making predictions (setting \(L = 0\) means the first data point will be predicted based only on the prior). We define \(i^\star = L\) as the current point of refit. Next, starting with \(i = i^\star + 1\), we approximate each \(p(y_{i+1:M} \,|\, y_{1:i})\) via
\[p(y_{i+1:M} \,|\, y_{1:i}) \approx \frac{ \sum_{s=1}^S w_i^{(s)}\, p(y_{i+1:M} \,|\, y_{1:i},\theta^{(s)})} { \sum_{s=1}^S w_i^{(s)}},\]
where \(\theta^{(s)} = \theta^{(s)}_{1:i^\star}\) are draws from the posterior distribution based on the first \(i^\star\) observations and \(w_i^{(s)}\) are the PSIS weights obtained in two steps. First, we compute the raw importance ratios
\[r_i^{(s)} =\frac{f_{1:i}(\theta^{(s)})}{f_{1:i^\star}(\theta^{(s)})}\propto \prod_{j \in (i^\star + 1):i} p(y_j \,|\, y_{1:(j-1)},\theta^{(s)}),\]
and then stabilize them using PSIS. The function \(f_{1:i}\) denotes the posterior distribution based on the first \(i\) observations, that is, \(f_{1:i} = p(\theta \,|\, y_{1:i})\), with \(f_{1:i^\star}\) defined analogously. The index set \((i^\star + 1):i\) indicates all observations that are part of the data for the model \(f_{1:i}\), whose predictive performance we are trying to approximate, but not for the actually fitted model \(f_{1:i^\star}\). The proportionality arises from the fact that we ignore the normalizing constants \(p(y_{1:i})\) and \(p(y_{1:i^\star})\) of the compared posteriors, which leads to a self-normalized variant of PSIS (see Vehtari et al., 2017).
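Expressed in code, the log of these raw ratios is (up to the ignored normalizing constants) just a row sum of pointwise log predictive densities over the observations added since the last refit. A minimal sketch with hypothetical placeholder inputs (the `sum_log_ratios()` helper defined later in this case study performs exactly this row-sum step):

```r
library("loo")

# Hypothetical placeholder: loglik[s, j] = log p(y_j | y_{1:(j-1)}, theta^(s)),
# evaluated using draws theta^(s) from the model fit to y_{1:i_star}
S <- 4000
N <- 98
loglik <- matrix(rnorm(S * N, mean = -1, sd = 0.1), nrow = S)
i_star <- 20  # point of the last refit
i <- 25       # current observation index

# log r_i^(s): sum over the observations not seen by the fitted model
log_ratios <- rowSums(loglik[, (i_star + 1):i, drop = FALSE])
psis_obj <- suppressWarnings(psis(log_ratios))  # Pareto smoothing
k_hat <- pareto_k_values(psis_obj)              # refit when k_hat exceeds tau
```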
Continuing with the next observation, we gradually increase \(i\) by \(1\) (we move forward in time) and repeat the process. At some observation \(i\), the variability of the importance ratios \(r_i^{(s)}\) will become too large and importance sampling will fail. We will refer to this particular value of \(i\) as \(i^\star_1\). To identify \(i^\star_1\), we check at which value of \(i\) the estimated shape parameter \(k\) of the generalized Pareto distribution first crosses a certain threshold \(\tau\) (Vehtari et al., 2024). Only then do we refit the model using the observations up to \(i^\star_1\) and restart the process from there by setting \(\theta^{(s)} = \theta^{(s)}_{1:i^\star_1}\) and \(i^\star = i^\star_1\), continuing until the next refit becomes necessary.
In some cases we may only need to refit once, and in other cases we will find a value \(i^\star_2\) that requires a second refit, maybe an \(i^\star_3\) that requires a third refit, and so on. We refit as many times as required (only when \(k > \tau\)) until we arrive at observation \(i = N - M\). For LOO, assuming the posterior sample size is 4000 or larger, we recommend using a threshold of \(\tau = 0.7\) (Vehtari et al., 2017, 2024), and it turns out this is a reasonable threshold for LFO as well (Bürkner et al., 2020).
Autoregressive (AR) models are some of the most commonly used time series models. An AR(\(p\)) model, that is, an autoregressive model of order \(p\), can be defined as
\[y_i = \eta_i + \sum_{k = 1}^p \varphi_k y_{i - k} + \varepsilon_i,\]
where \(\eta_i\) is the linear predictor for the \(i\)th observation, \(\varphi_k\) are the autoregressive parameters, and \(\varepsilon_i\) are pairwise independent errors, which are usually assumed to be normally distributed with equal variance \(\sigma^2\). The model implies a recursive formula that allows for computing the right-hand side of the above equation for observation \(i\) based on the values of the equations for previous observations.
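To make the recursion concrete, here is a minimal sketch (not part of the original analysis) that simulates an AR(2) series with a constant linear predictor \(\eta_i = \eta\) and normal errors:

```r
set.seed(1)
p <- 2
varphi <- c(0.5, 0.3)  # autoregressive parameters (illustrative values)
eta <- 0               # constant linear predictor, for illustration
sigma <- 1             # error standard deviation
N_sim <- 200

y_sim <- numeric(N_sim)  # first p values initialized at 0
for (i in (p + 1):N_sim) {
  # recursive formula: each value depends on the p previous values
  y_sim[i] <- eta + sum(varphi * y_sim[(i - 1):(i - p)]) + rnorm(1, 0, sigma)
}
```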
To illustrate the application of PSIS-LFO-CV for estimating expected \(M\)-SAP performance, we will fit a model to 98 annual measurements of the water level (in feet) of Lake Huron from the years 1875–1972. This data set is found in the **datasets** R package, which is installed automatically with R.
In addition to the **loo** package, for this analysis we will use the **brms** interface to Stan to generate a Stan program and fit the model, and also the **bayesplot** and **ggplot2** packages for plotting.
library("brms")library("loo")library("bayesplot")library("ggplot2")color_scheme_set("brightblue")theme_set(theme_default())CHAINS<-4SEED<-5838296set.seed(SEED)Before fitting a model, we will first put the data into a data frameand then look at the time series.
```r
N <- length(LakeHuron)
df <- data.frame(
  y = as.numeric(LakeHuron),
  year = as.numeric(time(LakeHuron)),
  time = 1:N
)

ggplot(df, aes(x = year, y = y)) +
  geom_point(size = 1) +
  labs(
    y = "Water Level (ft)",
    x = "Year",
    title = "Water Level in Lake Huron (1875-1972)"
  )
```

The above plot shows rather strong autocorrelation of the time series as well as some trend towards lower levels for later points in time.
We can specify an AR(4) model for these data using the **brms** package as follows:
```r
fit <- brm(
  y ~ ar(time, p = 4),
  data = df,
  prior = prior(normal(0, 0.5), class = "ar"),
  control = list(adapt_delta = 0.99),
  seed = SEED,
  chains = CHAINS
)
```

The model-implied predictions along with the observed values can be plotted, which reveals a rather good fit to the data.
```r
preds <- posterior_predict(fit)
preds <- cbind(
  Estimate = colMeans(preds),
  Q5 = apply(preds, 2, quantile, probs = 0.05),
  Q95 = apply(preds, 2, quantile, probs = 0.95)
)

ggplot(cbind(df, preds), aes(x = year, y = Estimate)) +
  geom_smooth(aes(ymin = Q5, ymax = Q95), stat = "identity", linewidth = 0.5) +
  geom_point(aes(y = y)) +
  labs(
    y = "Water Level (ft)",
    x = "Year",
    title = "Water Level in Lake Huron (1875-1972)",
    subtitle = "Mean (blue) and 90% predictive intervals (gray) vs. observed data (black)"
  )
```

To allow for reasonable predictions of future values, we will require at least \(L = 20\) historical observations (20 years) to make predictions.
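The later code chunks refer to a variable `L`; the chunk defining it is not shown in this extract, but given the text above it is presumably just:

```r
L <- 20  # minimum number of observations required before predicting
```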
We first perform approximate leave-one-out cross-validation (LOO-CV) for the purpose of later comparison with exact and approximate LFO-CV for the 1-SAP case.
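The chunk that produced the loo output below is likewise not shown in this extract. A sketch of what it presumably looks like, consistent with the reported 4000-by-78 log-likelihood matrix (LOO-CV is computed only for the \(N - L = 78\) observations that LFO-CV will also predict):

```r
# assumed reconstruction: LOO-CV restricted to observations L+1, ..., N
loo_cv <- loo(log_lik(fit)[, (L + 1):N])
print(loo_cv)
```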
```
Computed from 4000 by 78 log-likelihood matrix.

         Estimate   SE
elpd_loo    -88.6  6.4
p_loo         4.8  1.0
looic       177.2 12.8
------
MCSE of elpd_loo is 0.0.
MCSE and ESS estimates assume independent draws (r_eff=1).

All Pareto k estimates are good (k < 0.7).
See help('pareto-k-diagnostic') for details.
```

The most basic version of \(M\)-SAP is 1-SAP, in which we predict only one step ahead. In this case, \(y_{i+1:M}\) simplifies to \(y_{i+1}\) and the LFO-CV algorithm becomes considerably simpler than for larger values of \(M\).
Before we compute approximate LFO-CV using PSIS, we will first compute exact LFO-CV for the 1-SAP case so we can use it as a benchmark later. The initial step for the exact computation is to calculate the log predictive densities by refitting the model many times:
```r
loglik_exact <- matrix(nrow = ndraws(fit), ncol = N)
for (i in L:(N - 1)) {
  past <- 1:i
  oos <- i + 1
  df_past <- df[past, , drop = FALSE]
  df_oos <- df[c(past, oos), , drop = FALSE]
  fit_i <- update(fit, newdata = df_past, recompile = FALSE)
  loglik_exact[, i + 1] <- log_lik(fit_i, newdata = df_oos, oos = oos)[, oos]
}
```

Then we compute the exact expected log predictive density (ELPD):
```r
# some helper functions we'll use throughout

# more stable than log(sum(exp(x)))
log_sum_exp <- function(x) {
  max_x <- max(x)
  max_x + log(sum(exp(x - max_x)))
}

# more stable than log(mean(exp(x)))
log_mean_exp <- function(x) {
  log_sum_exp(x) - log(length(x))
}

# compute log of raw importance ratios
# sums over observations *not* over posterior samples
sum_log_ratios <- function(loglik, ids = NULL) {
  if (!is.null(ids)) loglik <- loglik[, ids, drop = FALSE]
  rowSums(loglik)
}

# for printing comparisons later
rbind_print <- function(...) {
  round(rbind(...), digits = 2)
}

exact_elpds_1sap <- apply(loglik_exact, 2, log_mean_exp)
exact_elpd_1sap <- c(ELPD = sum(exact_elpds_1sap[-(1:L)]))

rbind_print(
  "LOO" = loo_cv$estimates["elpd_loo", "Estimate"],
  "LFO" = exact_elpd_1sap
)
```

```
      ELPD
LOO -88.61
LFO -92.49
```

We see that the ELPD from LFO-CV for 1-step-ahead predictions is lower than the ELPD estimate from LOO-CV, which should be expected since LOO-CV is making use of more of the time series. That is, since the LFO-CV approach only uses observations from *before* the left-out data point but LOO-CV uses *all* data points other than the left-out observation, we should expect to see the larger ELPD from LOO-CV.
We compute approximate 1-SAP, refitting at observations where the Pareto \(k\) estimate exceeds the threshold of \(0.7\).
The code becomes a bit more involved compared to the exact LFO-CV. Note that we can compute exact 1-SAP at the refitting points, which comes at no additional computational cost since we had to refit the model anyway.
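The threshold variable `k_thres` used below is not defined in any chunk shown in this extract; based on the text above it is presumably:

```r
k_thres <- 0.7  # Pareto k threshold (tau) discussed above
```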
```r
approx_elpds_1sap <- rep(NA, N)

# initialize the process for i = L
past <- 1:L
oos <- L + 1
df_past <- df[past, , drop = FALSE]
df_oos <- df[c(past, oos), , drop = FALSE]
fit_past <- update(fit, newdata = df_past, recompile = FALSE)
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
approx_elpds_1sap[L + 1] <- log_mean_exp(loglik[, oos])

# iterate over i > L
i_refit <- L
refits <- L
ks <- NULL
for (i in (L + 1):(N - 1)) {
  past <- 1:i
  oos <- i + 1
  df_past <- df[past, , drop = FALSE]
  df_oos <- df[c(past, oos), , drop = FALSE]
  loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
  logratio <- sum_log_ratios(loglik, (i_refit + 1):i)
  psis_obj <- suppressWarnings(psis(logratio))
  k <- pareto_k_values(psis_obj)
  ks <- c(ks, k)
  if (k > k_thres) {
    # refit the model based on the first i observations
    i_refit <- i
    refits <- c(refits, i)
    fit_past <- update(fit_past, newdata = df_past, recompile = FALSE)
    loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
    approx_elpds_1sap[i + 1] <- log_mean_exp(loglik[, oos])
  } else {
    lw <- weights(psis_obj, normalize = TRUE)[, 1]
    approx_elpds_1sap[i + 1] <- log_sum_exp(lw + loglik[, oos])
  }
}
```

We see that the final Pareto \(k\) estimates are mostly well below the threshold and that we only needed to refit the model a few times:
```r
plot_ks <- function(ks, ids, thres = 0.6) {
  dat_ks <- data.frame(ks = ks, ids = ids)
  ggplot(dat_ks, aes(x = ids, y = ks)) +
    geom_point(aes(color = ks > thres), shape = 3, show.legend = FALSE) +
    geom_hline(yintercept = thres, linetype = 2, color = "red2") +
    scale_color_manual(values = c("cornflowerblue", "darkblue")) +
    labs(x = "Data point", y = "Pareto k") +
    ylim(-0.5, 1.5)
}
```

```r
cat(
  "Using threshold ", k_thres,
  ", model was refit ", length(refits),
  " times, at observations", refits
)
```

```
Using threshold  0.7 , model was refit  3  times, at observations 20 43 84
```

The approximate 1-SAP ELPD is remarkably similar to the exact 1-SAP ELPD computed above, which indicates that our algorithm for computing approximate 1-SAP worked well for the present data and model.
```r
approx_elpd_1sap <- sum(approx_elpds_1sap, na.rm = TRUE)
rbind_print(
  "approx LFO" = approx_elpd_1sap,
  "exact LFO" = exact_elpd_1sap
)
```

```
             ELPD
approx LFO -92.88
exact LFO  -92.49
```

Plotting exact against approximate predictions, we see that no approximation value deviates far from its exact counterpart, providing further evidence for the good quality of our approximation.
```r
dat_elpd <- data.frame(
  approx_elpd = approx_elpds_1sap,
  exact_elpd = exact_elpds_1sap
)

ggplot(dat_elpd, aes(x = approx_elpd, y = exact_elpd)) +
  geom_abline(color = "gray30") +
  geom_point(size = 2) +
  labs(x = "Approximate ELPDs", y = "Exact ELPDs")
```

We can also look at the maximum and average differences between the approximate and exact ELPD calculations, which also indicate a very close approximation:
```r
max_diff <- with(dat_elpd, max(abs(approx_elpd - exact_elpd), na.rm = TRUE))
mean_diff <- with(dat_elpd, mean(abs(approx_elpd - exact_elpd), na.rm = TRUE))
rbind_print(
  "Max diff" = round(max_diff, 2),
  "Mean diff" = round(mean_diff, 3)
)
```

```
          [,1]
Max diff  0.14
Mean diff 0.01
```

To illustrate the application of \(M\)-SAP for \(M > 1\), we next compute exact and approximate LFO-CV for the 4-SAP case.
The necessary steps are the same as for 1-SAP, with the exception that the log-density values of interest are now the sums of the log predictive densities of four consecutive observations. Further, the stability of the PSIS approximation stays the same for all \(M\), as it only depends on the number of observations we leave out, not on the number of observations we predict.
```r
M <- 4
loglikm <- matrix(nrow = ndraws(fit), ncol = N)
for (i in L:(N - M)) {
  past <- 1:i
  oos <- (i + 1):(i + M)
  df_past <- df[past, , drop = FALSE]
  df_oos <- df[c(past, oos), , drop = FALSE]
  fit_past <- update(fit, newdata = df_past, recompile = FALSE)
  loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
  loglikm[, i + 1] <- rowSums(loglik[, oos])
}

exact_elpds_4sap <- apply(loglikm, 2, log_mean_exp)
(exact_elpd_4sap <- c(ELPD = sum(exact_elpds_4sap, na.rm = TRUE)))
```

```
     ELPD 
-405.4043 
```

Computing the approximate PSIS-LFO-CV for the 4-SAP case is a little more involved than the approximate version for the 1-SAP case, although the underlying principles remain the same.
```r
approx_elpds_4sap <- rep(NA, N)

# initialize the process for i = L
past <- 1:L
oos <- (L + 1):(L + M)
df_past <- df[past, , drop = FALSE]
df_oos <- df[c(past, oos), , drop = FALSE]
fit_past <- update(fit, newdata = df_past, recompile = FALSE)
loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
loglikm <- rowSums(loglik[, oos])
approx_elpds_4sap[L + 1] <- log_mean_exp(loglikm)

# iterate over i > L
i_refit <- L
refits <- L
ks <- NULL
for (i in (L + 1):(N - M)) {
  past <- 1:i
  oos <- (i + 1):(i + M)
  df_past <- df[past, , drop = FALSE]
  df_oos <- df[c(past, oos), , drop = FALSE]
  loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
  logratio <- sum_log_ratios(loglik, (i_refit + 1):i)
  psis_obj <- suppressWarnings(psis(logratio))
  k <- pareto_k_values(psis_obj)
  ks <- c(ks, k)
  if (k > k_thres) {
    # refit the model based on the first i observations
    i_refit <- i
    refits <- c(refits, i)
    fit_past <- update(fit_past, newdata = df_past, recompile = FALSE)
    loglik <- log_lik(fit_past, newdata = df_oos, oos = oos)
    loglikm <- rowSums(loglik[, oos])
    approx_elpds_4sap[i + 1] <- log_mean_exp(loglikm)
  } else {
    lw <- weights(psis_obj, normalize = TRUE)[, 1]
    loglikm <- rowSums(loglik[, oos])
    approx_elpds_4sap[i + 1] <- log_sum_exp(lw + loglikm)
  }
}
```

Again, we see that the final Pareto \(k\) estimates are mostly well below the threshold and that we only needed to refit the model a few times:
cat("Using threshold ", k_thres,", model was refit ",length(refits)," times, at observations", refits)Using threshold 0.7 , model was refit 3 times, at observations 20 46 82The approximate ELPD computed for the 4-SAP case is not as close toits exact counterpart as in the 1-SAP case. In general, the larger\(M\), the larger the variation of theapproximate ELPD around the exact ELPD. It turns out that the ELPDestimates of AR-models with\(M>1\)show particular variation due to their predictions’ dependency on otherpredicted values. In Bürkner et al. (2020) we provide furtherexplanation and simulations for these cases.
```r
approx_elpd_4sap <- sum(approx_elpds_4sap, na.rm = TRUE)
rbind_print(
  "Approx LFO" = approx_elpd_4sap,
  "Exact LFO" = exact_elpd_4sap
)
```

```
              ELPD
Approx LFO -404.31
Exact LFO  -405.40
```

Plotting exact against approximate pointwise predictions confirms that, for a few specific data points, the approximate predictions underestimate the exact predictions.
```r
dat_elpd_4sap <- data.frame(
  approx_elpd = approx_elpds_4sap,
  exact_elpd = exact_elpds_4sap
)

ggplot(dat_elpd_4sap, aes(x = approx_elpd, y = exact_elpd)) +
  geom_abline(color = "gray30") +
  geom_point(size = 2) +
  labs(x = "Approximate ELPDs", y = "Exact ELPDs")
```

In this case study we have shown how to carry out exact and approximate leave-future-out cross-validation for \(M\)-step-ahead prediction tasks. For the data and model used in our example, the PSIS-LFO-CV algorithm provides reasonably stable and accurate results despite not requiring us to refit the model nearly as many times as exact LFO-CV. For more details on approximate LFO-CV, we refer to Bürkner et al. (2020).
Bürkner P. C., Gabry J., & Vehtari A. (2020). Approximate leave-future-out cross-validation for Bayesian time series models. *Journal of Statistical Computation and Simulation*, 90(14), 2499–2523. doi:10.1080/00949655.2020.1783262. arXiv preprint: https://arxiv.org/abs/1902.06281.
Vehtari A., Gelman A., & Gabry J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. *Statistics and Computing*, 27(5), 1413–1432. doi:10.1007/s11222-016-9696-4. arXiv preprint: arXiv:1507.04544.
Vehtari A., Simpson D., Gelman A., Yao Y., & Gabry J. (2024). Pareto smoothed importance sampling. *Journal of Machine Learning Research*, 25(72), 1–58.
```
R version 4.4.1 (2024-06-14)
Platform: x86_64-apple-darwin20
Running under: macOS Sonoma 14.4.1

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.4-x86_64/Resources/lib/libRblas.0.dylib 
LAPACK: /Library/Frameworks/R.framework/Versions/4.4-x86_64/Resources/lib/libRlapack.dylib;  LAPACK version 3.12.0

locale:
[1] C/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

time zone: America/Denver
tzcode source: internal

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
[1] ggplot2_3.5.1      brms_2.21.0        bayesplot_1.11.1   rstanarm_2.32.1   
[5] Rcpp_1.0.12        loo_2.8.0          rstan_2.32.6       StanHeaders_2.32.9
[9] knitr_1.47        

loaded via a namespace (and not attached):
  [1] gridExtra_2.3        inline_0.3.19        rlang_1.1.4         
  [4] magrittr_2.0.3       matrixStats_1.3.0    compiler_4.4.1      
  [7] callr_3.7.6          vctrs_0.6.5          reshape2_1.4.4      
 [10] stringr_1.5.1        pkgconfig_2.0.3      fastmap_1.2.0       
 [13] backports_1.5.0      labeling_0.4.3       utf8_1.2.4          
 [16] threejs_0.3.3        promises_1.3.0       rmarkdown_2.27      
 [19] markdown_1.13        ps_1.7.6             nloptr_2.1.0        
 [22] xfun_0.45            cachem_1.1.0         jsonlite_1.8.8      
 [25] highr_0.11           later_1.3.2          parallel_4.4.1      
 [28] R6_2.5.1             dygraphs_1.1.1.6     bslib_0.7.0         
 [31] stringi_1.8.4        boot_1.3-30          estimability_1.5.1  
 [34] jquerylib_0.1.4      zoo_1.8-12           base64enc_0.1-3     
 [37] httpuv_1.6.15        Matrix_1.7-0         splines_4.4.1       
 [40] igraph_2.0.3         tidyselect_1.2.1     rstudioapi_0.16.0   
 [43] abind_1.4-5          yaml_2.3.8           codetools_0.2-20    
 [46] miniUI_0.1.1.1       processx_3.8.4       pkgbuild_1.4.4      
 [49] lattice_0.22-6       tibble_3.2.1         plyr_1.8.9          
 [52] shiny_1.8.1.1        withr_3.0.0          bridgesampling_1.1-2
 [55] posterior_1.6.0      coda_0.19-4.1        evaluate_0.24.0     
 [58] survival_3.6-4       RcppParallel_5.1.7   xts_0.14.0          
 [61] pillar_1.9.0         tensorA_0.36.2.1     checkmate_2.3.1     
 [64] DT_0.33              stats4_4.4.1         shinyjs_2.1.0       
 [67] distributional_0.4.0 generics_0.1.3       rstantools_2.4.0    
 [70] munsell_0.5.1        scales_1.3.0         minqa_1.2.7         
 [73] gtools_3.9.5         xtable_1.8-4         glue_1.7.0          
 [76] emmeans_1.10.2       tools_4.4.1          shinystan_2.6.0     
 [79] lme4_1.1-35.4        colourpicker_1.3.0   mvtnorm_1.2-5       
 [82] grid_4.4.1           QuickJSR_1.2.2       crosstalk_1.2.1     
 [85] colorspace_2.1-0     nlme_3.1-164         cli_3.6.3           
 [88] fansi_1.0.6          Brobdingnag_1.2-9    dplyr_1.1.4         
 [91] gtable_0.3.5         sass_0.4.9           digest_0.6.36       
 [94] htmlwidgets_1.6.4    farver_2.1.2         htmltools_0.5.8.1   
 [97] lifecycle_1.0.4      mime_0.12            shinythemes_1.2.0   
[100] MASS_7.3-60.2       
```