| Type: | Package |
| Title: | Models of Decision Confidence and Measures of Metacognition |
| Version: | 0.2.1 |
| Date: | 2025-09-24 |
| Maintainer: | Manuel Rausch <manuel.rausch@ku.de> |
| Description: | Provides fitting functions and other tools for decision confidence and metacognition researchers, including meta-d'/d', often considered the gold standard for measuring metacognitive efficiency, and information-theoretic measures of metacognition. Also allows users to fit and compare several static models of decision making and confidence. |
| License: | GPL (≥ 3) |
| URL: | https://github.com/ManuelRausch/StatConfR |
| BugReports: | https://github.com/ManuelRausch/StatConfR/issues |
| Depends: | R (≥ 4.1.0) |
| Imports: | parallel, plyr, stats, utils, ggplot2, Rmisc |
| Date/Publication: | 2025-09-26 12:30:02 UTC |
| Encoding: | UTF-8 |
| LazyData: | true |
| NeedsCompilation: | no |
| Packaged: | 2025-09-26 11:19:20 UTC; PPA714 |
| Repository: | CRAN |
| RoxygenNote: | 7.3.2 |
| Author: | Manuel Rausch |
Data from 16 participants in a masked orientation discrimination experiment (Hellmann et al., 2023, Exp. 1)
Description
In each trial, participants were shown a sinusoidal grating oriented either horizontally or vertically, followed by a mask after varying stimulus-onset-asynchronies. Participants were instructed to report the orientation and their degree of confidence as accurately as possible.
Usage
data(MaskOri)
Format
A data.frame with 25920 rows representing different trials and 7 variables:
- participant
integer values as unique participant identifier
- stimulus
orientation of the grating (90: vertical, 0: horizontal)
- response
participants' orientation judgment about the grating (90: vertical, 0: horizontal)
- correct
0-1 column indicating whether the discrimination response was correct (1) or not (0)
- rating
0-4 confidence rating on a continuous scale binned into five categories
- diffCond
stimulus-onset-asynchrony in ms (i.e. time between stimulus and mask onset)
- trialNo
Enumeration of trials per participant
References
Hellmann, S., Zehetleitner, M., & Rausch, M. (2023). Simultaneous modeling of choice, confidence, and response time in visual perception. Psychological Review, 130(6), 1521–1543. doi: 10.1037/rev0000411
Examples
data(MaskOri)
summary(MaskOri)

Estimate Measures of Metacognition from Information Theory
Description
estimateMetaI estimates meta-I, an information-theoretic measure of metacognitive sensitivity proposed by Dayan (2023), as well as similar derived measures, including meta-I_{1}^{r} and meta-I_{2}^{r}. These are different normalizations of meta-I:
- Meta-I_{1}^{r} normalizes by the meta-I that would be expected from an underlying normal distribution with the same sensitivity.
- Meta-I_{1}^{r\prime} is a variant of meta-I_{1}^{r} not discussed by Dayan (2023), which normalizes by the meta-I that would be expected from an underlying normal distribution with the same accuracy (this is similar to the sensitivity approach but without considering variable thresholds).
- Meta-I_{2}^{r} normalizes by the maximum amount of meta-I that would be reached if all uncertainty about the stimulus was removed.
- RMI normalizes meta-I by the range of its possible values and therefore scales between 0 and 1. RMI is a novel measure not discussed by Dayan (2023).
All measures can be calculated with a bias-reduced variant for which the observed frequencies are taken as the underlying probability distribution to estimate the sampling bias. The estimated bias is then subtracted from the initial measures. This approach uses Monte-Carlo simulations and is therefore not deterministic (values can vary from one evaluation of the function to the next). However, this is a simple way to reduce the bias inherent in these measures.
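The general logic of this Monte-Carlo bias reduction can be sketched as follows. The function bias_reduce below is purely illustrative (it is not part of the package and does not reproduce the internals of estimateMetaI): it treats the observed relative frequencies as the true distribution, resamples data sets of the same size, and subtracts the mean difference between resampled and plug-in estimates from the raw estimate.
# Illustrative sketch of Monte-Carlo bias reduction (not the package's internal code)
bias_reduce <- function(counts, estimator, nSim = 1000) {
  n   <- sum(counts)
  p   <- counts / n                              # plug-in "true" distribution
  raw <- estimator(p)
  sims <- replicate(nSim, {
    simCounts <- as.vector(rmultinom(1, n, p))   # resample a data set of size n
    estimator(simCounts / n)
  })
  raw - (mean(sims) - raw)                       # subtract the estimated sampling bias
}
# e.g. a bias-reduced entropy estimate for a four-category frequency vector:
bias_reduce(c(30, 10, 5, 55), function(p) -sum(p[p > 0] * log2(p[p > 0])))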
Usage
estimateMetaI(data, bias_reduction = TRUE)
Arguments
- data
a data.frame where each row is one trial, containing at least the columns participant, stimulus, response, and rating (see MaskOri for an example).
- bias_reduction
logical. Whether the bias-reduced variants of the measures should be computed (default: TRUE). See Description for details.
Details
It is assumed that a classifier, which may be a human being performing a discrimination task or an algorithmic classifier in a classification application, makes a binary prediction R about a true state of the world S and gives a confidence rating C. Meta-I is defined as the mutual information between confidence and accuracy and is calculated as the transmitted information minus the minimal information given the accuracy,
meta-I = I(S; R, C) - I(S; R).
This is equivalent to Dayan's formulation where meta-I is the information that confidence transmits about the correctness of a response,
meta-I = I(S = R; C).
Meta-I is expressed in bits (i.e., the log base is 2). The other measures are different normalizations of meta-I and are unitless. It should be noted that Dayan (2023) pointed out that a liberal or conservative use of the confidence levels will affect the mutual information and thus influence meta-I.
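To make the second formulation concrete, the snippet below computes I(S = R; C) directly from a contingency table of accuracy by confidence category. This is a minimal sketch, assuming the package and its MaskOri data set are loaded; it ignores the bias-reduction step and is not the estimateMetaI implementation itself, so its result may differ slightly from the function's output.
# Mutual information (in bits) between accuracy and confidence, i.e. I(S = R; C)
mutual_information <- function(joint) {
  joint <- joint / sum(joint)                   # joint distribution p(accuracy, confidence)
  pa <- rowSums(joint); pc <- colSums(joint)    # marginal distributions
  terms <- joint * log2(joint / outer(pa, pc))  # pointwise mutual information, weighted
  sum(terms[joint > 0])                         # treat 0 * log(0) as 0
}
data(MaskOri)
d1 <- subset(MaskOri, participant == 1)
mutual_information(table(d1$correct, d1$rating))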
Value
a data.frame with one row for each subject and the following columns:
- participant is the participant ID,
- meta_I is the estimated meta-I value (expressed in bits, i.e., log base 2),
- meta_Ir1 is meta-I_{1}^{r},
- meta_Ir1_acc is meta-I_{1}^{r\prime},
- meta_Ir2 is meta-I_{2}^{r}, and
- RMI is RMI.
Author(s)
Sascha Meyen, saschameyen@gmail.com
References
Dayan, P. (2023). Metacognitive Information Theory. Open Mind, 7, 392–411. doi: 10.1162/opmi_a_00091
Examples
# 1. Select two subjects from the masked orientation discrimination experiment
data <- subset(MaskOri, participant %in% c(1:2))
head(data)
# 2. Calculate meta-I measures with bias reduction (this may take 10 s per subject)
metaIMeasures <- estimateMetaI(data)
# 3. Calculate meta-I measures for all participants without bias reduction (much faster)
metaIMeasures <- estimateMetaI(MaskOri, bias_reduction = FALSE)
metaIMeasures

Fit a static confidence model to data
Description
The fitConf function fits the parameters of one static model of decision confidence, specified by the model argument, to binary choices and confidence judgments. See Details for the mathematical specification of the implemented models and their parameters. Parameters are fitted using a maximum likelihood estimation method with an initial grid search to find promising starting values for the optimization. In addition, several measures of model fit (negative log-likelihood, BIC, AIC, and AICc) are computed, which can be used for a quantitative model evaluation.
Usage
fitConf(data, model = "SDT", nInits = 5, nRestart = 4)
Arguments
- data
a data.frame where each row is one trial, containing the columns stimulus, response, and rating, and, if several levels of discriminability were used, diffCond (see MaskOri for an example).
- model
character. The model to be fitted (default: "SDT"). See Details for the implemented models.
- nInits
integer. Number of initial parameter sets (i.e. best grid-search candidates) used as starting values for the optimization (default: 5).
- nRestart
integer. Number of times each optimization run is restarted (default: 4).
Details
The fitting routine first performs a coarse grid search to find promising starting values for the maximum likelihood optimization procedure. Then the best nInits parameter sets found by the grid search are used as the initial values for separate runs of the Nelder-Mead algorithm implemented in optim. Each run is restarted nRestart times.
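The following sketch illustrates this multi-start strategy in general terms. It is a simplified stand-in, not the package's internal routine: multi_start_fit, the toy objective obj, and the grid are all made up for the illustration.
# Evaluate an objective on a coarse grid, keep the nInits best points, and refine each
# with repeated Nelder-Mead runs; return the overall best fit.
multi_start_fit <- function(negLogLik, grid, nInits = 5, nRestart = 4) {
  gridVals <- apply(grid, 1, negLogLik)                     # coarse grid search
  starts <- grid[order(gridVals)[seq_len(nInits)], , drop = FALSE]
  fits <- lapply(seq_len(nInits), function(i) {
    par <- as.numeric(starts[i, ])
    fit <- NULL
    for (r in seq_len(nRestart)) {                          # restart from the previous optimum
      fit <- optim(par, negLogLik, method = "Nelder-Mead")
      par <- fit$par
    }
    fit
  })
  fits[[which.min(sapply(fits, `[[`, "value"))]]
}
# Toy usage: minimise a quadratic in two parameters
obj <- function(p) sum((p - c(1, -2))^2)
grid <- as.matrix(expand.grid(a = seq(-3, 3, 1), b = seq(-3, 3, 1)))
multi_start_fit(obj, grid, nInits = 3, nRestart = 2)$par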
Mathematical description of models
The computational models are all based on signal detection theory (Green & Swets, 1966). It is assumed that participants select a binary discrimination response R about a stimulus S. Both S and R can be either -1 or 1. R is considered correct if S = R. In addition, we assume that there are K different levels of stimulus discriminability in the experiment, i.e. a physical variable that makes the discrimination task easier or harder. For each level of discriminability, the function fits a different discrimination sensitivity parameter d_k. If there is more than one sensitivity parameter, we assume that the sensitivity parameters are ordered such that 0 < d_1 < ... < d_K. The models assume that the stimulus generates normally distributed sensory evidence x with mean S\times d_k/2 and variance of 1. The sensory evidence x is compared to a decision criterion c to generate a discrimination response R, which is 1 if x exceeds c and -1 otherwise. To generate confidence, it is assumed that the confidence variable y is compared to another set of criteria \theta_{R,i}, i = 1, ..., L-1, depending on the discrimination response R to produce an L-step discrete confidence response. The number of thresholds will be inferred from the number of steps in the rating column of data. Thus, the parameters shared between all models are:
- sensitivity parameters d_1, ..., d_K (K: number of difficulty levels)
- decision criterion c
- confidence criteria \theta_{-1,1}, \theta_{-1,2}, ..., \theta_{-1,L-1}, \theta_{1,1}, \theta_{1,2}, ..., \theta_{1,L-1} (L: number of confidence categories available for confidence ratings)
How the confidence variable y is computed varies across the different models. The following models have been implemented so far:
Signal detection rating model (SDT)
According to SDT, the same sample of sensory evidence is used to generate response and confidence, i.e., y = x, and the confidence criteria span from the left and right side of the decision criterion c (Green & Swets, 1966).
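For intuition, here is a minimal single-trial simulation of the shared generative structure under the SDT rating model. It is purely illustrative (simConf provides the package's actual simulation routine), and the parameter values and the 1-to-L rating indexing are assumptions made for the example.
set.seed(1)
d_k <- 2                                     # sensitivity at one difficulty level
c0  <- 0                                     # decision criterion
theta_minus <- c(-2, -1)                     # confidence criteria for R = -1
theta_plus  <- c( 1,  2)                     # confidence criteria for R =  1
S <- sample(c(-1, 1), 1)                     # stimulus category
x <- rnorm(1, mean = S * d_k / 2, sd = 1)    # sensory evidence
R <- ifelse(x > c0, 1, -1)                   # discrimination response
y <- x                                       # SDT: confidence uses the same evidence
rating <- if (R == 1) 1 + sum(y > theta_plus) else 1 + sum(y < theta_minus)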
Gaussian noise model (GN)
According to the model, y is subject to additive noise and assumed to be normally distributed around the decision evidence value x with a standard deviation \sigma (Maniscalco & Lau, 2016). The parameter \sigma is a free parameter.
Weighted evidence and visibility model (WEV)
WEV assumes that the observer combines evidence about decision-relevant features of the stimulus with the strength of evidence about choice-irrelevant features to generate confidence (Rausch et al., 2018). Here, we use the version of the WEV model used by Rausch et al. (2023), which assumes that y is normally distributed with a mean of (1-w)\times x + w \times d_k\times R and standard deviation \sigma. The parameter \sigma quantifies the amount of unsystematic variability contributing to confidence judgments but not to the discrimination judgments. The parameter w represents the weight that is put on the choice-irrelevant features in the confidence judgment. w and \sigma are fitted in addition to the set of shared parameters.
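Continuing the illustrative single-trial sketch from the SDT section above, the WEV confidence variable would be generated as follows (w and sigma are arbitrary example values, not fitted estimates):
w <- 0.5; sigma <- 1
y <- rnorm(1, mean = (1 - w) * x + w * d_k * R, sd = sigma)   # WEV confidence variable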
Post-decisional accumulation model (PDA)
PDA represents the idea of ongoing information accumulation after the discrimination choice (Rausch et al., 2018). The parameter b indicates the amount of additional accumulation. The confidence variable is normally distributed with mean x + S\times d_k\times b and variance b. For this model the parameter b is fitted in addition to the set of shared parameters.
Independent Gaussian model (IG)
According to IG, y is sampled independently from x (Rausch & Zehetleitner, 2017). y is normally distributed with a mean of m\times d_k and variance of 1 (again, as it would otherwise scale with m). The free parameter m represents the amount of information available for the confidence judgment relative to the amount of evidence available for the discrimination decision and can be smaller as well as greater than 1.
Independent truncated Gaussian model: HMetad-Version (ITGc)
According to the version of ITG consistent with the HMetad method (Fleming, 2017; see Rausch et al., 2023), y is sampled independently from x from a truncated Gaussian distribution with a location parameter of S\times d_k \times m/2 and a scale parameter of 1. The Gaussian distribution of y is truncated in a way that it is impossible to sample evidence that contradicts the original decision: If R = -1, the distribution is truncated to the right of c. If R = 1, the distribution is truncated to the left of c. The additional parameter m represents metacognitive efficiency, i.e., the amount of information available for confidence judgments relative to the amount of evidence available for discrimination decisions, and can be smaller as well as greater than 1.
Independent truncated Gaussian model: Meta-d'-Version (ITGcm)
According to the version of the ITG consistent with the original meta-d' method (Maniscalco & Lau, 2012, 2014; see Rausch et al., 2023), y is sampled independently from x from a truncated Gaussian distribution with a location parameter of S\times d_k \times m/2 and a scale parameter of 1. If R = -1, the distribution is truncated to the right of m\times c. If R = 1, the distribution is truncated to the left of m\times c. The additional parameter m represents metacognitive efficiency, i.e., the amount of information available for confidence judgments relative to the amount of evidence available for the discrimination decision, and can be smaller as well as greater than 1.
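Again continuing the single-trial sketch above, sampling the ITGcm confidence variable can be illustrated with simple rejection sampling from the truncated Gaussian (m is an arbitrary example value, and rejection sampling is only one way to realize the truncation; it is not necessarily what the package does internally):
m <- 0.8                                        # metacognitive efficiency
repeat {                                        # reject samples that contradict the decision
  y <- rnorm(1, mean = S * d_k * m / 2, sd = 1)
  if ((R == 1 && y > m * c0) || (R == -1 && y < m * c0)) break
}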
Lognormal noise model (logN)
According to logN, the same sample of sensory evidence is used to generate response and confidence, i.e., y = x, just as in SDT (Shekhar & Rahnev, 2021). However, according to logN, the confidence criteria are not assumed to be constant, but instead they are affected by noise drawn from a lognormal distribution. In each trial, \theta_{-1,i} is given by c - \epsilon_i. Likewise, \theta_{1,i} is given by c + \epsilon_i. \epsilon_i is drawn from a lognormal distribution with the location parameter \mu_{R,i} = log(|\overline{\theta}_{R,i} - c|) - 0.5 \times \sigma^{2} and scale parameter \sigma. \sigma is a free parameter designed to quantify metacognitive ability. It is assumed that the criterion noise is perfectly correlated across confidence criteria, ensuring that the confidence criteria are always perfectly ordered. Because \theta_{-1,1}, ..., \theta_{-1,L-1}, \theta_{1,1}, ..., \theta_{1,L-1} change from trial to trial, they are not estimated as free parameters. Instead, we estimate the means of the confidence criteria, i.e., \overline{\theta}_{-1,1}, ..., \overline{\theta}_{-1,L-1}, \overline{\theta}_{1,1}, ..., \overline{\theta}_{1,L-1}, as free parameters.
Lognormal weighted evidence and visibility model (logWEV)
logWEV is a combination of logN and WEV proposed by Shekhar and Rahnev (2023). Conceptually, logWEV assumes that the observer combines evidence about decision-relevant features of the stimulus with the strength of evidence about choice-irrelevant features (Rausch et al., 2018). The model also assumes that noise affecting the confidence decision variable is lognormal, in accordance with Shekhar and Rahnev (2021). According to logWEV, the confidence decision variable y is equal to y^*\times R. y^* is sampled from a lognormal distribution with a location parameter of (1-w)\times x\times R + w \times d_k and a scale parameter of \sigma. The parameter \sigma quantifies the amount of unsystematic variability contributing to confidence judgments but not to the discrimination judgments. The parameter w represents the weight that is put on the choice-irrelevant features in the confidence judgment. w and \sigma are fitted in addition to the set of shared parameters.
Response-congruent evidence model (RCE)
The response-congruent evidence model represents the idea that observers use all available sensory information to make the discrimination decision, but for confidence judgments, they only consider evidence consistent with the selected decision and ignore evidence against the decision (Peters et al., 2017). The model assumes two separate samples of sensory evidence collected in each trial, each belonging to one possible identity of the stimulus. Both samples of sensory evidence, x_{-1} and x_1, are sampled from Gaussian distributions with a standard deviation of \sqrt{1/2}. The mean of x_{-1} is given by (1 - S) \times 0.25 \times d; the mean of x_1 is given by (1 + S) \times 0.25 \times d. The sensory evidence used for the discrimination choice is x = x_1 - x_{-1}, which implies that the process underlying the discrimination decision is equivalent to standard SDT. The confidence decision variable y is y = x_{-1} if the response R is -1 and y = x_1 otherwise.
CASANDRE (CAS)
Generation of the primary choice in the CASANDRE model follows standard SDT assumptions. For confidence, the CASANDRE model assumes an additional stage of processing based on the observer's estimate of the perceived reliability of their choices (Boundy-Singer et al., 2023). The confidence decision variable y is given by y = \frac{x}{\hat{\sigma}}. \hat{\sigma} represents a noisy internal estimate of the sensory noise. It is assumed that \hat{\sigma} is sampled from a lognormal distribution with a mean fixed to 1 and a free noise parameter \sigma. Conceptually, \sigma represents the uncertainty in an individual's estimate of their own sensory uncertainty.
Value
Gives a data frame with one row and one column for each of the fitted parameters of the selected model, as well as additional information about the fit: negLogLik (negative log-likelihood of the final set of parameters), k (number of parameters), N (number of data rows), AIC (Akaike Information Criterion; Akaike, 1974), BIC (Bayes information criterion; Schwarz, 1978), and AICc (AIC corrected for small samples; Burnham & Anderson, 2002).
Author(s)
Sebastian Hellmann, sebastian.hellmann@tum.de
Manuel Rausch, manuel.rausch@ku.de
References
Akaike, H. (1974). A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control, AC-19(6), 716–723. doi: 10.1007/978-1-4612-1694-0_16
Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. Springer.
Fleming, S. M. (2017). HMeta-d: Hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. Neuroscience of Consciousness, 1, 1–14. doi: 10.1093/nc/nix007
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. Wiley.
Maniscalco, B., & Lau, H. (2012). A signal detection theoretic method for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430.
Maniscalco, B., & Lau, H. C. (2014). Signal Detection Theory Analysis of Type 1 and Type 2 Data: Meta-d’, Response- Specific Meta-d’, and the Unequal Variance SDT Model. In S. M. Fleming & C. D. Frith (Eds.), The Cognitive Neuroscience of Metacognition (pp. 25–66). Springer. doi: 10.1007/978-3-642-45190-4_3
Maniscalco, B., & Lau, H. (2016). The signal processing architecture underlying subjective reports of sensory awareness. Neuroscience of Consciousness, 1, 1–17. doi: 10.1093/nc/niw002
Rausch, M., Hellmann, S., & Zehetleitner, M. (2018). Confidence in masked orientation judgments is informed by both evidence and visibility. Attention, Perception, and Psychophysics, 80(1), 134–154. doi: 10.3758/s13414-017-1431-5
Rausch, M., Hellmann, S., & Zehetleitner, M. (2023). Measures of metacognitive efficiency across cognitive models of decision confidence. Psychological Methods. doi: 10.31234/osf.io/kdz34
Rausch, M., & Zehetleitner, M. (2017). Should metacognition be measured by logistic regression? Consciousness and Cognition, 49, 291–312. doi: 10.1016/j.concog.2017.02.007
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464. doi: 10.1214/aos/1176344136
Shekhar, M., & Rahnev, D. (2021). The Nature of Metacognitive Inefficiency in Perceptual Decision Making. Psychological Review, 128(1), 45–70. doi: 10.1037/rev0000249
Shekhar, M., & Rahnev, D. (2023). How Do Humans Give Confidence? A Comprehensive Comparison of Process Models of Perceptual Metacognition. Journal of Experimental Psychology: General. doi:10.1037/xge0001524
Peters, M. A. K., Thesen, T., Ko, Y. D., Maniscalco, B., Carlson, C., Davidson, M., Doyle, W., Kuzniecky, R., Devinsky, O., Halgren, E., & Lau, H. (2017). Perceptual confidence neglects decision-incongruent evidence in the brain. Nature Human Behaviour, 1(0139), 1–21. doi:10.1038/s41562-017-0139
Boundy-Singer, Z. M., Ziemba, C. M., & Goris, R. L. T. (2022). Confidence reflects a noisy decision reliability estimate. Nature Human Behaviour, 7(1), 142–154. doi:10.1038/s41562-022-01464-x
Examples
# 1. Select one subject from the masked orientation discrimination experiment
data <- subset(MaskOri, participant == 1)
head(data)
# 2. Use fitting function
# Fitting takes some time (about 10 minutes on a 2.8 GHz processor) to run:
FitFirstSbjWEV <- fitConf(data, model = "WEV")

Fit several static confidence models to multiple participants
Description
The fitConfModels function fits the parameters of several computational models of decision confidence in binary choice tasks, specified in the models argument, to different subsets of one data frame, indicated by different values in the column participant of the data argument. fitConfModels is a wrapper of the function fitConf and calls fitConf for every possible combination of model in the models argument and sub-data frame of data for each value in the participant column. See Details for more information about the parameters. Parameters are fitted using a maximum likelihood estimation method with an initial grid search to find promising starting values for the optimization. In addition, several measures of model fit (negative log-likelihood, BIC, AIC, and AICc) are computed, which can be used for a quantitative model evaluation.
Usage
fitConfModels(data, models = "all", nInits = 5, nRestart = 4, .parallel = FALSE, n.cores = NULL)
Arguments
- data
a data.frame where each row is one trial, containing the columns participant, stimulus, response, and rating, and, if several levels of discriminability were used, diffCond (see MaskOri for an example).
- models
character vector. The models to be fitted; "all" (the default) fits every implemented model. See Details for the implemented models.
- nInits
integer. Number of initial parameter sets (i.e. best grid-search candidates) used as starting values for the optimization (default: 5).
- nRestart
integer. Number of times each optimization run is restarted (default: 4).
- .parallel
logical. Whether the model fits should be computed in parallel across participants and models (default: FALSE).
- n.cores
integer. Number of cores used for parallelization. Only relevant if .parallel = TRUE.
Details
The provided data argument is split into subsets according to the values of the participant column. Then, for each subset and each model in the models argument, the parameters of the respective model are fitted to the data subset.
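Conceptually, the wrapper logic can be sketched as follows. This is a simplification for illustration only (the helper fit_all is hypothetical and not part of the package; the real function additionally handles parallelization and model-comparison columns):
# Fit every requested model to every participant and stack the one-row results
fit_all <- function(data, models) {
  byPart <- split(data, data$participant)
  fits <- lapply(names(byPart), function(p) {
    perModel <- lapply(models, function(m)
      cbind(model = m, participant = p, fitConf(byPart[[p]], model = m)))
    do.call(plyr::rbind.fill, perModel)      # rbind.fill pads missing parameters with NA
  })
  do.call(plyr::rbind.fill, fits)
}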
The fitting routine first performs a coarse grid search to find promising starting values for the maximum likelihood optimization procedure. Then the best nInits parameter sets found by the grid search are used as the initial values for separate runs of the Nelder-Mead algorithm implemented in optim. Each run is restarted nRestart times.
Mathematical description of models
The computational models are all based on signal detection theory (Green & Swets, 1966). It is assumed that participants select a binary discrimination response R about a stimulus S. Both S and R can be either -1 or 1. R is considered correct if S = R. In addition, we assume that there are K different levels of stimulus discriminability in the experiment, i.e. a physical variable that makes the discrimination task easier or harder. For each level of discriminability, the function fits a different discrimination sensitivity parameter d_k. If there is more than one sensitivity parameter, we assume that the sensitivity parameters are ordered such that 0 < d_1 < d_2 < ... < d_K. The models assume that the stimulus generates normally distributed sensory evidence x with mean S\times d_k/2 and variance of 1. The sensory evidence x is compared to a decision criterion c to generate a discrimination response R, which is 1 if x exceeds c and -1 otherwise. To generate confidence, it is assumed that the confidence variable y is compared to another set of criteria \theta_{R,i}, i = 1, 2, ..., L-1, depending on the discrimination response R to produce an L-step discrete confidence response. The number of thresholds will be inferred from the number of steps in the rating column of data. Thus, the parameters shared between all models are:
- sensitivity parameters d_1, ..., d_K (K: number of difficulty levels)
- decision criterion c
- confidence criteria \theta_{-1,1}, \theta_{-1,2}, ..., \theta_{-1,L-1}, \theta_{1,1}, \theta_{1,2}, ..., \theta_{1,L-1} (L: number of confidence categories available for confidence ratings)
How the confidence variable y is computed varies across the different models. The following models have been implemented so far:
Signal detection rating model (SDT)
According to SDT, the same sample of sensory evidence is used to generate response and confidence, i.e., y = x, and the confidence criteria span from the left and right side of the decision criterion c (Green & Swets, 1966).
Gaussian noise model (GN)
According to the model, y is subject to additive noise and assumed to be normally distributed around the decision evidence value x with a standard deviation \sigma (Maniscalco & Lau, 2016). \sigma is an additional free parameter.
Weighted evidence and visibility model (WEV)
WEV assumes that the observer combines evidence about decision-relevant features of the stimulus with the strength of evidence about choice-irrelevant features to generate confidence (Rausch et al., 2018). Thus, the WEV model assumes that y is normally distributed with a mean of (1-w)\times x + w \times d_k\times R and standard deviation \sigma. The standard deviation quantifies the amount of unsystematic variability contributing to confidence judgments but not to the discrimination judgments. The parameter w represents the weight that is put on the choice-irrelevant features in the confidence judgment. w and \sigma are fitted in addition to the set of shared parameters.
Post-decisional accumulation model (PDA)
PDA represents the idea of ongoing information accumulation after the discrimination choice (Rausch et al., 2018). The parameter a indicates the amount of additional accumulation. The confidence variable is normally distributed with mean x + S\times d_k\times a and variance a. For this model the parameter a is fitted in addition to the shared parameters.
Independent Gaussian model (IG)
According to IG, y is sampled independently from x (Rausch & Zehetleitner, 2017). y is normally distributed with a mean of m\times d_k and variance of 1 (again, as it would otherwise scale with m). The additional parameter m represents the amount of information available for the confidence judgment relative to the amount of evidence available for the discrimination decision and can be smaller as well as greater than 1.
Independent truncated Gaussian model: HMetad-Version (ITGc)
According to the version of ITG consistent with the HMetad method (Fleming, 2017; see Rausch et al., 2023), y is sampled independently from x from a truncated Gaussian distribution with a location parameter of S\times d_k \times m/2 and a scale parameter of 1. The Gaussian distribution of y is truncated in a way that it is impossible to sample evidence that contradicts the original decision: If R = -1, the distribution is truncated to the right of c. If R = 1, the distribution is truncated to the left of c. The additional parameter m represents metacognitive efficiency, i.e., the amount of information available for confidence judgments relative to the amount of evidence available for discrimination decisions, and can be smaller as well as greater than 1.
Independent truncated Gaussian model: Meta-d'-Version (ITGcm)
According to the version of the ITG consistent with the original meta-d' method (Maniscalco & Lau, 2012, 2014; see Rausch et al., 2023), y is sampled independently from x from a truncated Gaussian distribution with a location parameter of S\times d_k \times m/2 and a scale parameter of 1. If R = -1, the distribution is truncated to the right of m\times c. If R = 1, the distribution is truncated to the left of m\times c. The additional parameter m represents metacognitive efficiency, i.e., the amount of information available for confidence judgments relative to the amount of evidence available for the discrimination decision, and can be smaller as well as greater than 1.
Lognormal noise model (logN)
According to logN, the same sample of sensory evidence is used to generate response and confidence, i.e., y = x, just as in SDT (Shekhar & Rahnev, 2021). However, according to logN, the confidence criteria are not assumed to be constant, but instead they are affected by noise drawn from a lognormal distribution. In each trial, \theta_{-1,i} is given by c - \epsilon_i. Likewise, \theta_{1,i} is given by c + \epsilon_i. \epsilon_i is drawn from a lognormal distribution with the location parameter \mu_{R,i} = log(|\overline{\theta}_{R,i} - c|) - 0.5 \times \sigma^{2} and scale parameter \sigma. \sigma is a free parameter designed to quantify metacognitive ability. It is assumed that the criterion noise is perfectly correlated across confidence criteria, ensuring that the confidence criteria are always perfectly ordered. Because \theta_{-1,1}, ..., \theta_{-1,L-1}, \theta_{1,1}, ..., \theta_{1,L-1} change from trial to trial, they are not estimated as free parameters. Instead, we estimate the means of the confidence criteria, i.e., \overline{\theta}_{-1,1}, ..., \overline{\theta}_{-1,L-1}, \overline{\theta}_{1,1}, ..., \overline{\theta}_{1,L-1}, as free parameters.
Lognormal weighted evidence and visibility model (logWEV)
logWEV is a combination of logN and WEV proposed by Shekhar and Rahnev (2023). Conceptually, logWEV assumes that the observer combines evidence about decision-relevant features of the stimulus with the strength of evidence about choice-irrelevant features (Rausch et al., 2018). The model also assumes that noise affecting the confidence decision variable is lognormal, in accordance with Shekhar and Rahnev (2021). According to logWEV, the confidence decision variable y is equal to y^*\times R. y^* is sampled from a lognormal distribution with a location parameter of (1-w)\times x\times R + w \times d_k and a scale parameter of \sigma. The parameter \sigma quantifies the amount of unsystematic variability contributing to confidence judgments but not to the discrimination judgments. The parameter w represents the weight that is put on the choice-irrelevant features in the confidence judgment. w and \sigma are fitted in addition to the set of shared parameters.
Response-congruent evidence model (RCE)
The response-congruent evidence model represents the idea that observers use all available sensory information to make the discrimination decision, but for confidence judgments, they only consider evidence consistent with the selected decision and ignore evidence against the decision (Peters et al., 2017). The model assumes two separate samples of sensory evidence collected in each trial, each belonging to one possible identity of the stimulus. Both samples of sensory evidence, x_{-1} and x_1, are sampled from Gaussian distributions with a standard deviation of \sqrt{1/2}. The mean of x_{-1} is given by (1 - S) \times 0.25 \times d; the mean of x_1 is given by (1 + S) \times 0.25 \times d. The sensory evidence used for the discrimination choice is x = x_1 - x_{-1}, which implies that the process underlying the discrimination decision is equivalent to standard SDT. The confidence decision variable y is y = x_{-1} if the response R is -1 and y = x_1 otherwise.
CASANDRE (CAS)
Generation of the primary choice in the CASANDRE model follows standard SDT assumptions. For confidence, the CASANDRE model assumes an additional stage of processing based on the observer's estimate of the perceived reliability of their choices (Boundy-Singer et al., 2023). The confidence decision variable y is given by y = \frac{x}{\hat{\sigma}}. \hat{\sigma} represents a noisy internal estimate of the sensory noise. It is assumed that \hat{\sigma} is sampled from a lognormal distribution with a mean fixed to 1 and a free noise parameter \sigma. Conceptually, \sigma represents the uncertainty in an individual's estimate of their own sensory uncertainty.
Value
Gives a data.frame with one row for each combination of model and participant. There are different columns for the model, the participant ID, and one column for each estimated model parameter (parameters not present in a specific model are filled with NAs). Additional information about the fit is provided in additional columns:
- negLogLik (negative log-likelihood of the best-fitting set of parameters),
- k (number of parameters),
- N (number of trials),
- AIC (Akaike Information Criterion; Akaike, 1974),
- BIC (Bayes information criterion; Schwarz, 1978),
- AICc (AIC corrected for small samples; Burnham & Anderson, 2002).
If length(models) > 1 or models == "all", there will be three additional columns.
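A common way to use these columns is to sum the information criteria per model across participants and prefer the model with the lowest value. The snippet is only illustrative and assumes a fit object like the Fits data frame created in the Examples below (it is left commented out because fitting takes a long time):
# Fits <- fitConfModels(data, models = c("SDT", "WEV"))
# aggregate(cbind(AIC, BIC, AICc) ~ model, data = Fits, FUN = sum)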
Author(s)
Sebastian Hellmann, sebastian.hellmann@tum.de
Manuel Rausch, manuel.rausch@ku.de
References
Akaike, H. (1974). A New Look at the Statistical Model Identification. IEEE Transactions on Automatic Control, AC-19(6), 716–723. doi: 10.1007/978-1-4612-1694-0_16
Burnham, K. P., & Anderson, D. R. (2002). Model selection and multimodel inference: A practical information-theoretic approach. Springer.
Fleming, S. M. (2017). HMeta-d: Hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. Neuroscience of Consciousness, 1, 1–14. doi: 10.1093/nc/nix007
Green, D. M., & Swets, J. A. (1966). Signal detection theory and psychophysics. Wiley.
Maniscalco, B., & Lau, H. (2012). A signal detection theoretic method for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430.
Maniscalco, B., & Lau, H. C. (2014). Signal Detection Theory Analysis of Type 1 and Type 2 Data: Meta-d’, Response- Specific Meta-d’, and the Unequal Variance SDT Model. In S. M. Fleming & C. D. Frith (Eds.), The Cognitive Neuroscience of Metacognition (pp. 25–66). Springer. doi: 10.1007/978-3-642-45190-4_3
Maniscalco, B., & Lau, H. (2016). The signal processing architecture underlying subjective reports of sensory awareness. Neuroscience of Consciousness, 1, 1–17. doi: 10.1093/nc/niw002
Rausch, M., Hellmann, S., & Zehetleitner, M. (2018). Confidence in masked orientation judgments is informed by both evidence and visibility. Attention, Perception, and Psychophysics, 80(1), 134–154. doi: 10.3758/s13414-017-1431-5
Rausch, M., Hellmann, S., & Zehetleitner, M. (2023). Measures of metacognitive efficiency across cognitive models of decision confidence. Psychological Methods. doi: 10.31234/osf.io/kdz34
Rausch, M., & Zehetleitner, M. (2017). Should metacognition be measured by logistic regression? Consciousness and Cognition, 49, 291–312. doi: 10.1016/j.concog.2017.02.007
Schwarz, G. (1978). Estimating the dimension of a model. The Annals of Statistics, 6(2), 461–464. doi: 10.1214/aos/1176344136
Shekhar, M., & Rahnev, D. (2021). The Nature of Metacognitive Inefficiency in Perceptual Decision Making. Psychological Review, 128(1), 45–70. doi: 10.1037/rev0000249
Shekhar, M., & Rahnev, D. (2023). How Do Humans Give Confidence? A Comprehensive Comparison of Process Models of Perceptual Metacognition. Journal of Experimental Psychology: General. doi:10.1037/xge0001524
Peters, M. A. K., Thesen, T., Ko, Y. D., Maniscalco, B., Carlson, C., Davidson, M., Doyle, W., Kuzniecky, R., Devinsky, O., Halgren, E., & Lau, H. (2017). Perceptual confidence neglects decision-incongruent evidence in the brain. Nature Human Behaviour, 1(0139), 1–21. doi:10.1038/s41562-017-0139
Boundy-Singer, Z. M., Ziemba, C. M., & Goris, R. L. T. (2022). Confidence reflects a noisy decision reliability estimate. Nature Human Behaviour, 7(1), 142–154. doi:10.1038/s41562-022-01464-x
Examples
# 1. Select two subjects from the masked orientation discrimination experiment
data <- subset(MaskOri, participant %in% c(1:2))
head(data)
# 2. Fit some models to each subject of the masked orientation discrimination experiment
# Fitting several models to several subjects takes quite some time
# (about 10 minutes per model fit per participant on a 2.8 GHz processor
# with the default values of nInits and nRestart).
# If you want to fit more than just two subjects,
# we strongly recommend setting .parallel=TRUE
Fits <- fitConfModels(data, models = c("SDT", "ITGc"), .parallel = FALSE)

Compute measures of metacognitive sensitivity (meta-d') and metacognitive efficiency (meta-d'/d') for data from one or several subjects
Description
This function computes the measures of metacognitive sensitivity, meta-d', and metacognitive efficiency, meta-d'/d' (Maniscalco and Lau, 2012, 2014; Fleming, 2017), for data from binary choice tasks with discrete confidence judgments. Meta-d' and meta-d'/d' are computed using a maximum likelihood method for each subset of the data argument indicated by different values in the column participant, which can represent different subjects as well as experimental conditions.
Usage
fitMetaDprime(data, model = "ML", nInits = 5, nRestart = 3, .parallel = FALSE, n.cores = NULL)
Arguments
- data
a data.frame where each row is one trial, containing the columns participant, stimulus, response, and rating (see MaskOri for an example).
- model
character. The model variant used to compute meta-d': "ML" for the method of Maniscalco and Lau (2012) or "F" for the method of Fleming (2017) (default: "ML"). See Details.
- nInits
integer. Number of initial parameter sets (i.e. best grid-search candidates) used as starting values for the optimization (default: 5).
- nRestart
integer. Number of times each optimization run is restarted (default: 3).
- .parallel
logical. Whether the fits should be computed in parallel across participants (default: FALSE).
- n.cores
integer. Number of cores used for parallelization. Only relevant if .parallel = TRUE.
Details
The function computes meta-d' and meta-d'/d' either using the hypothetical signal detection model assumed by Maniscalco and Lau (2012, 2014) or the one assumed by Fleming (2017).
The conceptual idea of meta-d' is to quantify metacognition in terms of sensitivity in a hypothetical signal detection rating model describing the primary task, under the assumption that participants had perfect access to the sensory evidence and were perfectly consistent in placing their confidence criteria (Maniscalco & Lau, 2012, 2014). Using a signal detection model describing the primary task to quantify metacognition allows a direct comparison between metacognitive accuracy and discrimination performance because both are measured on the same scale. Meta-d' can be compared against the estimate of the distance between the two stimulus distributions estimated from discrimination responses, which is referred to as d': If meta-d' equals d', it means that metacognitive accuracy is exactly as good as expected from discrimination performance. If meta-d' is lower than d', it means that metacognitive accuracy is suboptimal. It can be shown that the implicit model of confidence underlying the meta-d'/d' method is identical to the independent truncated Gaussian model.
The provided data argument is split into subsets according to the values of the participant column. Then, for each subset, the parameters of the hypothetical signal detection model determined by the model argument are fitted to the data subset.
The fitting routine first performs a coarse grid search to find promising starting values for the maximum likelihood optimization procedure. Then the best nInits parameter sets found by the grid search are used as the initial values for separate runs of the Nelder-Mead algorithm implemented in optim. Each run is restarted nRestart times. Warning: meta-d'/d' is only guaranteed to be unbiased by discrimination sensitivity, discrimination bias, and confidence criteria if the data is generated according to the independent truncated Gaussian model (see Rausch et al., 2023).
Value
Gives a data frame with one row for each participant and the following columns:
- model gives the model used for the computation of meta-d' (see model argument)
- participant is the participant ID for the respective row
- dprime is the discrimination sensitivity index d', calculated using a standard SDT formula
- c is the discrimination bias c, calculated using a standard SDT formula
- metaD is meta-d', the discrimination sensitivity estimated from confidence judgments conditioned on the response
- Ratio is meta-d'/d', a quantity usually referred to as metacognitive efficiency.
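For intuition about the dprime and c columns, the closed-form SDT definitions based on hit and false-alarm rates are sketched below, using the MaskOri coding (stimulus and response coded 0/90). This is only an illustration: the helper sdt_indices is made up for this example, and the package estimates these quantities by maximum likelihood, so the fitted values may differ slightly.
sdt_indices <- function(d) {
  H  <- mean(d$response[d$stimulus == 90] == 90)   # hit rate
  FA <- mean(d$response[d$stimulus == 0] == 90)    # false-alarm rate
  c(dprime = qnorm(H) - qnorm(FA), c = -0.5 * (qnorm(H) + qnorm(FA)))
}
sdt_indices(subset(MaskOri, participant == 1))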
Author(s)
Manuel Rausch, manuel.rausch@hochschule-rhein-waal.de
References
Fleming, S. M. (2017). HMeta-d: Hierarchical Bayesian estimation of metacognitive efficiency from confidence ratings. Neuroscience of Consciousness, 1, 1–14. doi: 10.1093/nc/nix007
Maniscalco, B., & Lau, H. (2012). A signal detection theoretic method for estimating metacognitive sensitivity from confidence ratings. Consciousness and Cognition, 21(1), 422–430.
Maniscalco, B., & Lau, H. C. (2014). Signal Detection Theory Analysis of Type 1 and Type 2 Data: Meta-d’, Response- Specific Meta-d’, and the Unequal Variance SDT Model. In S. M. Fleming & C. D. Frith (Eds.), The Cognitive Neuroscience of Metacognition (pp. 25–66). Springer. doi: 10.1007/978-3-642-45190-4_3
Rausch, M., Hellmann, S., & Zehetleitner, M. (2023). Measures of metacognitive efficiency across cognitive models of decision confidence. Psychological Methods. doi: 10.31234/osf.io/kdz34
Examples
# 1. Select two subjects from the masked orientation discrimination experiment
data <- subset(MaskOri, participant %in% c(1:2))
head(data)
# 2. Fit meta-d'/d' for each subject in data
MetaDs <- fitMetaDprime(data, model = "F", .parallel = FALSE)

Plot the prediction of fitted parameters of one model of confidence over the corresponding data
Description
The plotConfModelFit function plots the predicted distribution of discrimination responses and confidence ratings created from a data.frame of parameters obtained from fitConfModels and overlays the predicted distribution over the data to which the model parameters were fitted.
Usage
plotConfModelFit(data, fitted_pars, model = NULL)
Arguments
- data
a data.frame where each row is one trial, containing the columns participant, stimulus, response, rating, and diffCond (see MaskOri for an example).
- fitted_pars
a data.frame with one row per participant and columns for the fitted model parameters, as obtained from fitConfModels.
- model
character. The model whose predictions should be plotted (see fitConf for the implemented models); default: NULL.
Value
a ggplot object with the empirically observed distribution of responses and confidence ratings as bars on the x-axis as a function of discriminability (in the rows) and stimulus (in the columns). Superimposed on the empirical data, the plot also shows the prediction of one selected model as dots.
Author(s)
Manuel Rausch, manuel.rausch@ku.de
Examples
# 1. Fit some models to each subject of the masked orientation discrimination experiment
# Normally, the fits should be created using the function fitConfModels
# Fits <- fitConfModels(data, models = "WEV", .parallel = TRUE)
# Here, we create the dataframe manually because fitting models takes about
# 10 minutes per model fit per participant on a 2.8 GHz processor.
pars <- data.frame(participant = 1:16,
  d_1 = c(0.20, 0.05, 0.41, 0.03, 0.00, 0.01, 0.11, 0.03, 0.19, 0.08, 0.00, 0.24, 0.00, 0.00, 0.25, 0.01),
  d_2 = c(0.61, 0.19, 0.86, 0.18, 0.17, 0.39, 0.69, 0.14, 0.45, 0.30, 0.00, 0.27, 0.00, 0.05, 0.57, 0.23),
  d_3 = c(1.08, 1.04, 2.71, 2.27, 1.50, 1.21, 1.83, 0.80, 1.06, 0.68, 0.29, 0.83, 0.77, 2.19, 1.93, 0.54),
  d_4 = c(3.47, 4.14, 6.92, 4.79, 3.72, 3.24, 4.55, 2.51, 3.78, 2.40, 1.95, 2.55, 4.59, 4.27, 4.08, 1.80),
  d_5 = c(4.08, 5.29, 7.99, 5.31, 4.53, 4.66, 6.21, 4.67, 5.85, 3.39, 3.39, 4.42, 6.48, 5.35, 5.28, 2.87),
  c = c(-0.30, -0.15, -1.37, 0.17, -0.12, -0.19, -0.12, 0.41, -0.27, 0.00, -0.19, -0.21, -0.91, -0.26, -0.20, 0.10),
  theta_minus.4 = c(-2.07, -2.04, -2.76, -2.32, -2.21, -2.33, -2.27, -2.29, -2.69, -3.80, -2.83, -1.74, -2.58, -3.09, -2.20, -1.57),
  theta_minus.3 = c(-1.25, -1.95, -1.92, -2.07, -1.62, -1.68, -2.04, -2.02, -1.84, -3.37, -1.89, -1.44, -2.31, -2.08, -1.53, -1.46),
  theta_minus.2 = c(-0.42, -1.40, -0.37, -1.96, -1.45, -1.27, -1.98, -1.66, -1.11, -2.69, -1.60, -1.25, -2.21, -1.68, -1.08, -1.17),
  theta_minus.1 = c(0.13, -0.90, 0.93, -1.71, -1.25, -0.59, -1.40, -1.00, -0.34, -1.65, -1.21, -0.76, -1.99, -0.92, -0.28, -0.99),
  theta_plus.1 = c(-0.62, 0.82, -2.77, 2.01, 1.39, 0.60, 1.51, 0.90, 0.18, 1.62, 0.99, 0.88, 1.67, 0.92, 0.18, 0.88),
  theta_plus.2 = c(0.15, 1.45, -1.13, 2.17, 1.61, 1.24, 1.99, 1.55, 0.96, 2.44, 1.53, 1.66, 2.00, 1.51, 1.08, 1.05),
  theta_plus.3 = c(1.40, 2.24, 0.77, 2.32, 1.80, 1.58, 2.19, 2.19, 1.54, 3.17, 1.86, 1.85, 2.16, 2.09, 1.47, 1.70),
  theta_plus.4 = c(2.19, 2.40, 1.75, 2.58, 2.53, 2.24, 2.59, 2.55, 2.58, 3.85, 2.87, 2.15, 2.51, 3.31, 2.27, 1.79),
  sigma = c(1.01, 0.64, 1.33, 0.39, 0.30, 0.75, 0.75, 1.07, 0.65, 0.29, 0.31, 0.78, 0.39, 0.42, 0.69, 0.52),
  w = c(0.54, 0.50, 0.38, 0.38, 0.36, 0.44, 0.48, 0.48, 0.52, 0.46, 0.53, 0.48, 0.29, 0.45, 0.51, 0.63))
# 2. Plot the predicted probabilities based on model and fitted parameters
# against the observed relative frequencies.
PlotFitWEV <- plotConfModelFit(MaskOri, pars, model = "WEV")
PlotFitWEV

Simulate data according to a static model of confidence
Description
This function generates a data frame with random trials generated according to the computational model of decision confidence specified in the model argument with given parameters. Simulations can be used to visualize and test qualitative model predictions (e.g. using previously fitted parameters returned by fitConf). See fitConf for a full mathematical description of all models and their parameters.
Usage
simConf(model = "SDT", paramDf)Arguments
model |
|
paramDf | a
|
Details
The function generates about N trials per row with the provided parameters in the data frame. The output includes a column participant indicating the row ID of the simulated data. The values of the participant column may be controlled by the user by including a participant column in the input paramDf. Note that the values of this column have to be unique! If no participant column is present in the input, the row numbers will be used as row IDs.
The number of simulated trials for each row of parameters may slightly deviate from the provided N. Precisely, if there are K levels of sensitivity (i.e. there are columns d1, d2, ..., dK), the function simulates round(N/2/K) trials per stimulus identity (2 levels) and level of sensitivity (K levels).
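For example, with N = 500 and three sensitivity parameters (K = 3), the function simulates round(500/2/3) = 83 trials per combination of stimulus identity and difficulty level, i.e. 2 × 3 × 83 = 498 trials in total rather than exactly 500.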
Simulation is performed following the generative process structure of the models. See fitConf for a detailed description of the different models.
Value
a data frame with about nrow(paramDf)*N rows (see Details) and the following columns:
- participant giving the row ID of the simulation (see Details)
- stimulus giving the category of the stimulus (-1 or 1)
- diffCond representing the difficulty condition; only present if more than one sensitivity parameter (d1, d2, ...) is provided (values correspond to the levels of the sensitivity parameters, i.e. diffCond = 1 represents simulated trials with sensitivity d1)
- response giving the response category (-1 or 1, corresponding to the stimulus categories)
- rating giving the discrete confidence rating (integer; the number of categories depends on the number of confidence criteria provided in the parameters)
- correct giving the accuracy of the response (0 incorrect, 1 correct)
- ratings, the same as rating but as a factor
Author(s)
Manuel Rausch, manuel.rausch@hochschule-rhein-waal.de
Examples
# 1. Define some parameters
paramDf <- data.frame(d_1 = 0, d_2 = 2, d_3 = 4, c = .0,
  theta_minus.2 = -2, theta_minus.1 = -1, theta_plus.1 = 1, theta_plus.2 = 2,
  sigma = 1/2, w = 0.5, N = 500)
# 2. Simulate dataset
SimulatedData <- simConf(model = "WEV", paramDf)
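A few quick checks on the simulated data, illustrating the structure described in Details and Value (this assumes the SimulatedData object created above):
head(SimulatedData)
# roughly round(500/2/3) = 83 trials per combination of difficulty and stimulus
table(SimulatedData$diffCond, SimulatedData$stimulus)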