Evaluate by ID/group

Ludvig Renbo Olsen

2025-11-09

Abstract

In this vignette, we learn how to evaluate predictions on the ID level with evaluate().

Contact the author at r-pkgs@ludvigolsen.dk
 

Introduction

When we have groups of observations (e.g. a participant ID), we are sometimes more interested in the overall prediction for the group than those at the observation-level.

Say we have a dataset with 10 observations per participant and a model that predicts whether a participant has an autism diagnosis or not. While the model will predict each of the 10 observations, it's really the overall prediction for the participant that we are interested in.

evaluate() has two approaches to performing the evaluation on the ID level: averaging and voting.

Averaging

In averaging, we simply average the predicted probabilities for the participant. This is the default approach as it maintains information about how certain our model is about its class prediction. That is, if all observations have a 60% predicted probability of an autism diagnosis, that should be considered differently than 90%.
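
To make the idea concrete, here is a minimal dplyr sketch of the averaging approach (not evaluate()'s internal code), assuming a data frame like the one we create below, with participant, diagnosis, and probability columns; the averaged object name is our own:

# Minimal sketch of the averaging idea (not evaluate()'s internals)
library(dplyr)

averaged <- data %>%
  dplyr::group_by(participant) %>%
  dplyr::summarise(
    target = dplyr::first(diagnosis),      # assumed constant within the ID
    prediction = mean(probability),        # averaged probability
    predicted_class = ifelse(prediction > 0.5, 1, 0)
  )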

Voting

In voting, we simply count the predictions of each outcome class and assign the class with the most predictions to the participant.

If 7 out of 10 of the observations are predicted as having no autism diagnosis, that becomes the prediction for the participant.
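
Again as a minimal sketch (not evaluate()'s internal code), voting could look like this with dplyr, assuming a predicted diagnosis column with hard 0/1 predictions per observation; the voted object name is our own:

# Minimal sketch of the voting idea (not evaluate()'s internals)
library(dplyr)

voted <- data %>%
  dplyr::group_by(participant) %>%
  dplyr::summarise(
    target = dplyr::first(diagnosis),
    # The class with the most observation-level predictions wins
    # (ties go to the first class here)
    predicted_class = as.numeric(names(which.max(table(`predicted diagnosis`))))
  )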

ID evaluation with evaluate()

We will use the simple participant.scores dataset as it has 3 rows per participant and a diagnosis column that we can evaluate predictions against. Let's add predicted probabilities and diagnoses and have a look:

library(cvms)
library(knitr)  # kable()
library(dplyr)

set.seed(74)

# Prepare dataset
data <- participant.scores %>% as_tibble()

# Add probabilities and predicted classes
data[["probability"]] <- runif(nrow(data))
data[["predicted diagnosis"]] <- ifelse(data[["probability"]] > 0.5, 1, 0)

data %>% head(10) %>% kable()
|participant | age| diagnosis| score| session| probability| predicted diagnosis|
|:-----------|---:|---------:|-----:|-------:|-----------:|-------------------:|
|1           |  20|         1|    10|       1|   0.7046162|                   1|
|1           |  20|         1|    24|       2|   0.4800045|                   0|
|1           |  20|         1|    45|       3|   0.1960176|                   0|
|2           |  23|         0|    24|       1|   0.9369707|                   1|
|2           |  23|         0|    40|       2|   0.8698302|                   1|
|2           |  23|         0|    67|       3|   0.2140318|                   0|
|3           |  27|         1|    15|       1|   0.0240853|                   0|
|3           |  27|         1|    30|       2|   0.8547959|                   1|
|3           |  27|         1|    40|       3|   0.7027153|                   1|
|4           |  21|         0|    35|       1|   0.9579817|                   1|

We tell evaluate() to aggregate the predictions by the participant column with the mean (averaging) method.

Note: It is assumed that the target class is constant within the IDs, i.e. that the participant has the same diagnosis in all observations.
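
If in doubt, a quick check (our own addition, not something evaluate() requires) is to count the distinct diagnoses per participant; every count should be 1:

# Check that the diagnosis is constant within each participant
data %>%
  dplyr::group_by(participant) %>%
  dplyr::summarise(n_diagnoses = dplyr::n_distinct(diagnosis))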

ev <- evaluate(
  data = data,
  target_col = "diagnosis",
  prediction_cols = "probability",
  id_col = "participant",
  id_method = "mean",
  type = "binomial"
)

ev
#> # A tibble: 1 × 19
#>   `Balanced Accuracy` Accuracy    F1 Sensitivity Specificity `Pos Pred Value`
#>                 <dbl>    <dbl> <dbl>       <dbl>       <dbl>            <dbl>
#> 1               0.292      0.3 0.364       0.333        0.25              0.4
#> # ℹ 13 more variables: `Neg Pred Value` <dbl>, AUC <dbl>, `Lower CI` <dbl>,
#> #   `Upper CI` <dbl>, Kappa <dbl>, MCC <dbl>, `Detection Rate` <dbl>,
#> #   `Detection Prevalence` <dbl>, Prevalence <dbl>, Predictions <list>,
#> #   ROC <named list>, `Confusion Matrix` <list>, Process <list>

The Predictions column contains the averaged predictions:

ev$Predictions[[1]] %>% kable()
| Target| Prediction|        SD| Predicted Class|participant |id_method |
|------:|----------:|---------:|---------------:|:-----------|:---------|
|      1|  0.4602128| 0.2548762|               0|1           |mean      |
|      0|  0.6736109| 0.3994204|               1|2           |mean      |
|      1|  0.5271988| 0.4422946|               1|3           |mean      |
|      0|  0.7974576| 0.1703448|               1|4           |mean      |
|      1|  0.5887699| 0.4738221|               1|5           |mean      |
|      1|  0.3526630| 0.2302525|               0|6           |mean      |
|      1|  0.2333758| 0.1913763|               0|7           |mean      |
|      1|  0.3956015| 0.3207379|               0|8           |mean      |
|      0|  0.3374361| 0.0304785|               0|9           |mean      |
|      0|  0.5988969| 0.0675830|               1|10          |mean      |
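
We can reproduce the first row ourselves by averaging participant 1's raw probabilities (a quick sanity check of our own, not part of the evaluate() workflow):

# Average the raw probabilities for participant 1
data %>%
  dplyr::filter(participant == "1") %>%
  dplyr::summarise(
    mean_probability = mean(probability),  # matches the Prediction column
    sd_probability = sd(probability)       # matches the SD column
  )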

Let’s plot the confusion matrix as well:

# Note: If ev had multiple rows, we would have to
# pass ev$`Confusion Matrix`[[1]] to
# plot the first row's confusion matrix
plot_confusion_matrix(ev)

We can have a better look at the metrics:

ev_metrics <- select_metrics(ev)

ev_metrics %>% select(1:9) %>% kable(digits = 5)

| Balanced Accuracy| Accuracy|      F1| Sensitivity| Specificity| Pos Pred Value| Neg Pred Value|     AUC| Lower CI|
|-----------------:|--------:|-------:|-----------:|-----------:|--------------:|--------------:|-------:|--------:|
|           0.29167|      0.3| 0.36364|     0.33333|        0.25|            0.4|            0.2| 0.20833|        0|

ev_metrics %>% select(10:14) %>% kable(digits = 5)

| Upper CI| Kappa|      MCC| Detection Rate| Detection Prevalence|
|--------:|-----:|--------:|--------------:|--------------------:|
|  0.62475|  -0.4| -0.40825|            0.2|                  0.5|

Using voting

We can use the majority (voting) method for the ID aggregation instead:

ev_2 <- evaluate(
  data = data,
  target_col = "diagnosis",
  prediction_cols = "probability",
  id_col = "participant",
  id_method = "majority",
  type = "binomial"
)

ev_2
#> # A tibble: 1 × 19
#>   `Balanced Accuracy` Accuracy    F1 Sensitivity Specificity `Pos Pred Value`
#>                 <dbl>    <dbl> <dbl>       <dbl>       <dbl>            <dbl>
#> 1               0.292      0.3 0.364       0.333        0.25              0.4
#> # ℹ 13 more variables: `Neg Pred Value` <dbl>, AUC <dbl>, `Lower CI` <dbl>,
#> #   `Upper CI` <dbl>, Kappa <dbl>, MCC <dbl>, `Detection Rate` <dbl>,
#> #   `Detection Prevalence` <dbl>, Prevalence <dbl>, Predictions <list>,
#> #   ROC <named list>, `Confusion Matrix` <list>, Process <list>

Now the Predictions column looks as follows:

ev_2$Predictions[[1]] %>% kable()
| Target| Prediction| Predicted Class|participant |id_method |
|------:|----------:|---------------:|:-----------|:---------|
|      1|          0|               0|1           |majority  |
|      0|          1|               1|2           |majority  |
|      1|          1|               1|3           |majority  |
|      0|          1|               1|4           |majority  |
|      1|          1|               1|5           |majority  |
|      1|          0|               0|6           |majority  |
|      1|          0|               0|7           |majority  |
|      1|          0|               0|8           |majority  |
|      0|          0|               0|9           |majority  |
|      0|          1|               1|10          |majority  |

In this case, the Predicted Class column is identical to that in the averaging approach. We just don't have the probabilities to tell us how sure the model is about that prediction.

Per model

If we have predictions from multiple models, we can group the data frame and get the results per model.

Let's duplicate the dataset and change the predictions. We then combine the datasets and add a model column to indicate which of the data frames the observation came from:

# Duplicate data frame
data_2 <- data

# Change the probabilities and predicted classes
data_2[["probability"]] <- runif(nrow(data))
data_2[["predicted diagnosis"]] <- ifelse(data_2[["probability"]] > 0.5, 1, 0)

# Combine the two data frames
data_multi <- dplyr::bind_rows(data, data_2, .id = "model")

data_multi
#> # A tibble: 60 × 8
#>    model participant   age diagnosis score session probability
#>    <chr> <fct>       <dbl>     <dbl> <dbl>   <int>       <dbl>
#>  1 1     1              20         1    10       1      0.705
#>  2 1     1              20         1    24       2      0.480
#>  3 1     1              20         1    45       3      0.196
#>  4 1     2              23         0    24       1      0.937
#>  5 1     2              23         0    40       2      0.870
#>  6 1     2              23         0    67       3      0.214
#>  7 1     3              27         1    15       1      0.0241
#>  8 1     3              27         1    30       2      0.855
#>  9 1     3              27         1    40       3      0.703
#> 10 1     4              21         0    35       1      0.958
#> # ℹ 50 more rows
#> # ℹ 1 more variable: `predicted diagnosis` <dbl>

We can now group the data frame by the model column and run the evaluation again:

ev_3 <- data_multi %>%
  dplyr::group_by(model) %>%
  evaluate(
    target_col = "diagnosis",
    prediction_cols = "probability",
    id_col = "participant",
    id_method = "mean",
    type = "binomial"
  )

ev_3
#> # A tibble: 2 × 20
#>   model `Balanced Accuracy` Accuracy    F1 Sensitivity Specificity
#>   <chr>               <dbl>    <dbl> <dbl>       <dbl>       <dbl>
#> 1 1                   0.292      0.3 0.364       0.333        0.25
#> 2 2                   0.375      0.4 0.5         0.5          0.25
#> # ℹ 14 more variables: `Pos Pred Value` <dbl>, `Neg Pred Value` <dbl>,
#> #   AUC <dbl>, `Lower CI` <dbl>, `Upper CI` <dbl>, Kappa <dbl>, MCC <dbl>,
#> #   `Detection Rate` <dbl>, `Detection Prevalence` <dbl>, Prevalence <dbl>,
#> #   Predictions <list>, ROC <named list>, `Confusion Matrix` <list>,
#> #   Process <list>

The Predictions column for the second model looks as follows:

ev_3$Predictions[[2]] %>% kable()
|model | Target| Prediction|        SD| Predicted Class|participant |id_method |
|:-----|------:|----------:|---------:|---------------:|:-----------|:---------|
|2     |      1|  0.3302017| 0.3002763|               0|1           |mean      |
|2     |      0|  0.6040242| 0.2854935|               1|2           |mean      |
|2     |      1|  0.7342651| 0.2653166|               1|3           |mean      |
|2     |      0|  0.6383918| 0.3799305|               1|4           |mean      |
|2     |      1|  0.4551732| 0.3417810|               0|5           |mean      |
|2     |      1|  0.6808281| 0.3626166|               1|6           |mean      |
|2     |      1|  0.4536740| 0.3784584|               0|7           |mean      |
|2     |      1|  0.6281501| 0.4506029|               1|8           |mean      |
|2     |      0|  0.7000411| 0.1490745|               1|9           |mean      |
|2     |      0|  0.4630344| 0.4344227|               0|10          |mean      |

In 'gaussian' evaluation

This kind of ID aggregation is also available for the 'gaussian' evaluation (e.g. for linear regression models), although only with the averaging approach. Again, it is assumed that the target value is constant across a participant's observations (like the age column in our dataset).

We add a predicted age column to our initial dataset:

data[["predicted age"]]<-sample(20:45,size =30,replace =TRUE)

We evaluate the predicted age, aggregated by participant:

ev_4 <- evaluate(
  data = data,
  target_col = "age",
  prediction_cols = "predicted age",
  id_col = "participant",
  id_method = "mean",
  type = "gaussian"
)

ev_4
#> # A tibble: 1 × 8
#>    RMSE   MAE `NRMSE(IQR)`  RRSE   RAE RMSLE Predictions       Process
#>   <dbl> <dbl>        <dbl> <dbl> <dbl> <dbl> <list>            <list>
#> 1  10.3   8.7        0.984  1.48  1.45 0.340 <tibble [10 × 5]> <prcss_n_>

The Predictions column looks as follows:

ev_4$Predictions[[1]] %>% kable()
| Target| Prediction|        SD|participant |id_method |
|------:|----------:|---------:|:-----------|:---------|
|     20|   35.66667|  8.326664|1           |mean      |
|     23|   33.33333| 10.214369|2           |mean      |
|     27|   35.33333|  5.686241|3           |mean      |
|     21|   30.00000|  4.582576|4           |mean      |
|     32|   28.66667|  5.507570|5           |mean      |
|     31|   43.33333|  1.154700|6           |mean      |
|     43|   39.00000|  5.196152|7           |mean      |
|     21|   40.33333|  2.516611|8           |mean      |
|     34|   35.33333|  5.507570|9           |mean      |
|     32|   35.33333|  7.571878|10          |mean      |

On average, we predict participant 1 to have the age 35.66.
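
As a quick sanity check (our own addition), we can get that number directly by averaging participant 1's predicted ages:

# Manually average the predicted ages for participant 1
data %>%
  dplyr::filter(participant == "1") %>%
  dplyr::summarise(avg_predicted_age = mean(`predicted age`))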

Results for each ID

If our targets are not constant within the IDs, we might be interested in the ID-level evaluation, e.g. how well the model predicted the score for each of the participants.

We add a predicted score column to our dataset:

data[["predicted score"]]<-round(runif(30,10,81))

Now, we group the data frame by the participant column and evaluate the predicted scores:

data %>%
  dplyr::group_by(participant) %>%
  evaluate(
    target_col = "score",
    prediction_cols = "predicted score",
    type = "gaussian"
  )
#> # A tibble: 10 × 9
#>    participant  RMSE   MAE `NRMSE(IQR)`  RRSE   RAE RMSLE Predictions Process
#>    <fct>       <dbl> <dbl>        <dbl> <dbl> <dbl> <dbl> <list>      <list>
#>  1 1           13.8  13.7         0.787 0.957 1.10  0.683 <tibble>    <prcss_n_>
#>  2 2           32.4  26.3         1.50  1.82  1.69  0.946 <tibble>    <prcss_n_>
#>  3 3           12.8  10.7         1.03  1.25  1.2   0.549 <tibble>    <prcss_n_>
#>  4 4            9.15  7.67        0.425 0.513 0.486 0.154 <tibble>    <prcss_n_>
#>  5 5           24.1  17.3         1.27  1.47  1.15  0.566 <tibble>    <prcss_n_>
#>  6 6           34.2  33.3         4.27  5.12  5.56  0.895 <tibble>    <prcss_n_>
#>  7 7           44.7  40           2.98  3.45  3.33  1.21  <tibble>    <prcss_n_>
#>  8 8            9.80  8           0.700 0.854 0.818 0.306 <tibble>    <prcss_n_>
#>  9 9           22.6  21.3         1.37  1.66  1.81  0.447 <tibble>    <prcss_n_>
#> 10 10          29.3  28           1.13  1.38  1.62  0.556 <tibble>    <prcss_n_>

Participant 4 has the lowest prediction error while participant 7 has the highest.
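
If we want the participants ranked by prediction error, we can store the grouped evaluation and sort it. A small convenience sketch (the ev_per_id name is our own):

# Store the per-participant evaluation and order by error
ev_per_id <- data %>%
  dplyr::group_by(participant) %>%
  evaluate(
    target_col = "score",
    prediction_cols = "predicted score",
    type = "gaussian"
  )

ev_per_id %>% dplyr::arrange(RMSE)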

This approach is similar to what the most_challenging() function does:

# Extract the ~20% observations with highest prediction error
most_challenging(
  data = data,
  type = "gaussian",
  obs_id_col = "participant",
  target_col = "score",
  prediction_cols = "predicted score",
  threshold = 0.20
)
#> # A tibble: 2 × 4
#>   participant   MAE  RMSE  `>=`
#>   <fct>       <dbl> <dbl> <dbl>
#> 1 7            40    44.7  32.7
#> 2 6            33.3  34.2  32.7

This concludes the vignette. If any elements are unclear, you can leave feedback by mail or in a GitHub issue :-)

