Cumulative Incidence Regression

Klaus Holst & Thomas Scheike

2025-10-21

The cifreg function can fit the Fine-Gray model and the logit-link cumulative incidence model for the cause of interest in a competing risks setting, and is completely scalable, that is, linear in the data. This includes the computation of standard errors, which is also linear in the data. In addition, for the Fine-Gray model, predictions can be provided with standard errors at specific time-points based on influence functions for the baseline and the regression coefficients. The function can thus be used for large data.

In addition, and to summarize:

Fine-Gray model

The score equation considered is
\[\begin{align*}
U^{FG}_{n}(\beta) = \sum_{i=1}^{n} \int_0^{\infty} \left( X_i - E_n(t,\beta) \right) w_i(t,X_i) \, dN_{1,i}(t), \quad \text{where } E_n(t,\beta) = \frac{\tilde S_1(t,\beta)}{\tilde S_0(t,\beta)},
\end{align*}\]
with \(w_i(t,X_i) = \frac{G_c(t,X_i)}{G_c(T_i \wedge t,X_i)} I(C_i > T_i \wedge t)\), \(\tilde S_k(t,\beta) = \sum_{j=1}^n X_j^k \exp(X_j^T\beta) \tilde Y_{1,j}(t)\) for \(k=0,1\), and with \(\tilde Y_{1,i}(t) = Y_{1,i}(t) w_i(t,X_i)\) for \(i=1,\ldots,n\). The weight \(w_i(t)\) needs to be replaced by an estimator of the censoring distribution, and since it does not depend on \(X\) we use \(\hat w_i(t) = \frac{\hat G_c(t)}{\hat G_c(T_i \wedge t)} I(C_i > T_i \wedge t)\), where \(\hat G_c\) is the Kaplan-Meier estimator of the censoring distribution.

In this article we briefly introduce some functions for doing cumulative incidence regression, and show how to augment the Fine-Gray estimator.

First we simulate some competing risks data using some utility functions.

We simulate data with two causes based on the Fine-Gray model:
\[\begin{align}
F_1(t,X) & = P(T\leq t, \epsilon=1|X) = 1 - \exp(-\Lambda_1(t)\exp(X^T \beta_1)) \\
F_2(t,X) & = P(T\leq t, \epsilon=2|X) = ( 1 - \exp(-\Lambda_2(t)\exp(X^T \beta_2))) \cdot (1 - F_1(\infty,X))
\end{align}\]
where the baselines are given as \(\Lambda_j(t) = \rho_j (1 - \exp(-t/\nu_j))\) for \(j=1,2\), and the \(X\) are two independent binomials. Alternatively, one can also replace the FG-model with a logistic link, \(\mbox{expit}(\Lambda_j(t) + X^T\beta_j)\).

The advantage of the model is that it is easy to fit and to get standard errors for, and that it is quite flexible, essentially being a Cox model. On the downside, the coefficients are quite hard to interpret since they are the \(\mbox{cloglog}\) coefficients of \(1-F_1(t,X)\). Specifically,
\[\begin{align}
\log(-\log( 1-F_1(t,X_1+1,X_2))) - \log(-\log( 1-F_1(t,X_1,X_2))) &= \beta_1,
\end{align}\]
so the effect \(\beta_1\) of \(X_1\) is on \(1-F_1(t,X)\) on the \(\mbox{cloglog}\) scale.
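To make this concrete, here is a minimal numeric illustration of the cloglog relation under the FG form \(F_1(t,X) = 1 - \exp(-\Lambda_1(t)\exp(X^T\beta_1))\); the baseline value and coefficients below are made-up illustration values, not quantities used in the simulations that follow.

## illustration only: hypothetical baseline value and coefficients
Lambda1 <- 0.15                # Lambda_1(t) at some fixed time t (assumed value)
beta1 <- c(0.7, -0.9)          # assumed coefficients for (X1, X2)
F1 <- function(x) 1 - exp(-Lambda1 * exp(sum(x * beta1)))
cloglog <- function(p) log(-log(1 - p))
## difference on the cloglog scale when X1 is increased by one unit
cloglog(F1(c(1, 0))) - cloglog(F1(c(0, 0)))  # equals beta1[1] = 0.7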

library(mets)
options(warn=-1)
set.seed(1000) # to control output in simulations for p-values below.
rho1 <- 0.2; rho2 <- 10
n <- 400
beta <- c(0.0,-0.1,-0.5,0.3)
## beta1=c(0.0,-0.1); beta2=c(-0.5,0.3)
dats <- simul.cifs(n,rho1,rho2,beta,rc=0.5,rate=7)
dtable(dats,~status)
#>
#> status
#>   0   1   2
#> 127  12 261
dsort(dats) <- ~time

We have a look at the non-parametric cumulative incidence curves

par(mfrow=c(1,2))
cifs1 <- cif(Event(time,status)~strata(Z1,Z2),dats,cause=1)
plot(cifs1)
cifs2 <- cif(Event(time,status)~strata(Z1,Z2),dats,cause=2)
plot(cifs2)

Now fitting the Fine-Gray model

fg <- cifregFG(Event(time,status)~Z1+Z2,data=dats,cause=1)
summary(fg)
#>
#>    n events
#>  400     12
#>
#>  400 clusters
#> coeffients:
#>    Estimate     S.E.  dU^-1/2 P-value
#> Z1  0.69686  0.38760  0.38882  0.0722
#> Z2 -0.85929  0.62453  0.61478  0.1689
#>
#> exp(coeffients):
#>    Estimate    2.5%  97.5%
#> Z1  2.00744 0.93911 4.2911
#> Z2  0.42346 0.12451 1.4402

dd <- expand.grid(Z1=c(-1,1),Z2=0:1)
pfg <- predict(fg,dd)
plot(pfg,ylim=c(0,0.2))

and GOF based on cumulative residuals (Li et al. 2015)

gofFG(Event(time,status)~Z1+Z2,data=dats,cause=1)
#> Cumulative score process test for Proportionality:
#>    Sup|U(t)|  pval
#> Z1  3.011461 0.124
#> Z2  1.373513 0.227

showing no problem with the proportionality of the model.

SE’s for the baseline and predictions of FG

The standard errors reported for the FG-estimator are based on the i.i.d. decomposition (influence functions) of the estimator that we give later. A similar decomposition exists for the baseline and is needed when standard errors of predictions are computed. These are a bit harder to compute for all time-points simultaneously, but they can be obtained for specific time-points jointly with the iid decomposition of the regression coefficients and then used to get standard errors for predictions.

Here we plot the predictions with jittered confidence intervals for the predictions at time point 5.

### predictions with CI based on iid decomposition of baseline and beta
fg <- cifregFG(Event(time,status)~Z1+Z2,data=dats,cause=1)
Biid <- iidBaseline(fg,time=5)
pfgse <- FGprediid(Biid,dd)
pfgse
#>            pred    se-log       lower     upper
#> [1,] 0.04253879 0.7418354 0.009938793 0.1820692
#> [2,] 0.16069100 0.3946377 0.074143886 0.3482633
#> [3,] 0.01823957 0.9410399 0.002884032 0.1153531
#> [4,] 0.07149610 0.4611261 0.028958169 0.1765199
plot(pfg,ylim=c(0,0.2))
for (i in 1:4) lines(c(5,5)+i/10,pfgse[i,3:4],col=i,lwd=2)

The iid decompositions are stored inside Biid; in addition we note that the iid decompositions for \(\hat \beta - \beta_0\) are obtained by the command iid().
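As a minimal sketch of how the reported standard errors relate to this decomposition (assuming iid(fg) returns the matrix of per-subject influence terms scaled so that their cross-product estimates the variance of \(\hat \beta\)):

## sketch: robust variance from the influence functions (assumes iid(fg) is n x p)
ifbeta <- iid(fg)
robvar <- crossprod(ifbeta)   # sum of outer products of the influence terms
sqrt(diag(robvar))            # should be close to the reported S.E. for Z1 and Z2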

Comparison

We compare with the crr function from the cmprsk package, which gives exactly the same estimates, but we do not run it here to avoid dependencies:

run <- 0
if (run==1) {
  library(cmprsk)
  mm <- model.matrix(~Z1+Z2,dats)[,-1]
  cr <- with(dats,crr(time,status,mm))
  cbind(cr$coef,diag(cr$var)^.5,fg$coef,fg$se.coef,cr$coef-fg$coef,diag(cr$var)^.5-fg$se.coef)
  #          [,1]      [,2]       [,3]      [,4]          [,5]          [,6]
  # Z1  0.6968603 0.3876029  0.6968603 0.3876029 -2.442491e-15 -2.553513e-15
  # Z2 -0.8592892 0.6245258 -0.8592892 0.6245258 -2.997602e-15  1.776357e-15
}

When comparing with the results from coxph based on setting up the data using the finegray function, we get the same estimates, but note that the standard errors from coxph are missing a term and are therefore slightly different. When comparing coxph to the cifreg standard errors that likewise omit this additional censoring term (se1.coef), we see that we also get the same standard errors:

if (run==1) {
  library(survival)
  dats$id <- 1:nrow(dats)
  dats$event <- factor(dats$status,0:2,labels=c("censor","death","other"))
  fgdats <- finegray(Surv(time,event)~.,data=dats)
  coxfg <- survival::coxph(Surv(fgstart, fgstop, fgstatus)~Z1+Z2+cluster(id),weight=fgwt,data=fgdats)
  fg0 <- cifreg(Event(time,status)~Z1+Z2,data=dats,cause=1,propodds=NULL)
  cbind(coxfg$coef,fg0$coef,coxfg$coef-fg0$coef)
  #          [,1]       [,2]          [,3]
  # Z1  0.6968603  0.6968603 -1.110223e-16
  # Z2 -0.8592892 -0.8592892 -1.110223e-15
  cbind(diag(coxfg$var)^.5,fg0$se.coef,diag(coxfg$var)^.5-fg0$se.coef)
  #           [,1]      [,2]          [,3]
  # [1,] 0.3889129 0.3876029  0.0013099915
  # [2,] 0.6241225 0.6245258 -0.0004033148
  cbind(diag(coxfg$var)^.5,fg0$se1.coef,diag(coxfg$var)^.5-fg0$se1.coef)
  #           [,1]      [,2]          [,3]
  # [1,] 0.3889129 0.3889129 -2.331468e-15
  # [2,] 0.6241225 0.6241225  2.553513e-15
}

We also remove all censorings from the data to compare the estimates with those based on coxph, and observe that the estimates as well as the standard errors agree:

datsnc <- dtransform(dats,status=2,status==0)
dtable(datsnc,~status)
#>
#> status
#>   1   2
#>  12 388
datsnc$id <- 1:n
datsnc$entry <- 0
max <- max(dats$time)+1
## for cause 2 add risk interval
datsnc2 <- subset(datsnc,status==2)
datsnc2 <- transform(datsnc2,entry=time)
datsnc2 <- transform(datsnc2,time=max)
datsncf <- rbind(datsnc,datsnc2)
#
cifnc <- cifreg(Event(time,status)~Z1+Z2,data=datsnc,cause=1,propodds=NULL)
cc <- phreg(Surv(entry,time,status==1)~Z1+Z2+cluster(id),datsncf)
cbind(cc$coef-cifnc$coef,diag(cc$var)^.5-diag(cifnc$var)^.5)
#>            [,1]          [,2]
#> Z1 1.221245e-15 -1.554312e-15
#> Z2 3.996803e-15  1.998401e-15
#            [,1]          [,2]
# Z1 1.332268e-15 -4.440892e-16
# Z2 4.218847e-15  2.220446e-16

The cmprsk package also gives the same:

if (run==1) {
  library(cmprsk)
  mm <- model.matrix(~Z1+Z2,datsnc)[,-1]
  cr <- with(datsnc,crr(time,status,mm))
  cbind(cc$coef-cr$coef,diag(cr$var)^.5-diag(cc$var)^.5)
  #             [,1]         [,2]
  # Z1 -4.218847e-15 1.443290e-15
  # Z2  7.549517e-15 1.110223e-16
}

Strata dependent Censoring weights

We can improve efficiency and avoid bias by allowing the censoring weights to depend on the covariates:

fgcm <- cifregFG(Event(time,status)~Z1+Z2,data=dats,cause=1,cens.model=~strata(Z1,Z2))
summary(fgcm)
#>
#>    n events
#>  400     12
#>
#>  400 clusters
#> coeffients:
#>    Estimate     S.E.  dU^-1/2 P-value
#> Z1  0.54277  0.37188  0.39352  0.1444
#> Z2 -0.91846  0.61886  0.61447  0.1378
#>
#> exp(coeffients):
#>    Estimate    2.5%  97.5%
#> Z1  1.72077 0.83019 3.5667
#> Z2  0.39913 0.11867 1.3424

summary(fg)
#>
#>    n events
#>  400     12
#>
#>  400 clusters
#> coeffients:
#>    Estimate     S.E.  dU^-1/2 P-value
#> Z1  0.69686  0.38760  0.38882  0.0722
#> Z2 -0.85929  0.62453  0.61478  0.1689
#>
#> exp(coeffients):
#>    Estimate    2.5%  97.5%
#> Z1  2.00744 0.93911 4.2911
#> Z2  0.42346 0.12451 1.4402

We note that the standard errors are slightly smaller for the more efficient estimator.
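To see the difference directly, the two sets of standard errors can be placed side by side (both fits from above, using the se.coef field that also appears in the comparisons earlier):

## standard errors: overall Kaplan-Meier weights vs stratified censoring weights
cbind(se.KM = fg$se.coef, se.strata = fgcm$se.coef)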

The influence function of the FG-estimator is given by
\[\begin{align*}
\phi_i^{FG} & = \int (X_i - e(t)) \tilde w_i(t) dM_{i1}(t,X_i) + \int \frac{q(t)}{\pi(t)} dM_{ic}(t) \\
& = \phi_i^{FG,1} + \phi_i^{FG,2},
\end{align*}\]
where the first term is what would be achieved for a known censoring distribution, and the second term is due to the variability from the Kaplan-Meier estimator. Here \(M_{ic}(t) = N_{ic}(t) - \int_0^t Y_i(s) d\Lambda_c(s)\) is the standard censoring martingale.

The function \(q(t)\), which reflects that the censoring only affects the terms related to cause "2" jumps, can be written as (see Appendix B2)
\[\begin{align*}
q(t) & = E( H(t,X) I(T \leq t, \epsilon=2) I(C > T)/G_c(T)) = E( H(t,X) F_2(t,X) ),
\end{align*}\]
with \(H(t,X) = \int_t^{\infty} (X - e(s)) G(s) d\Lambda_1(s,X)\), and using that \(\pi(t)=E(Y(t))=S(t) G_c(t)\).

In the case where the censoring weights are stratified (based on \(X\)) we get the influence functions related to the censoring term with
\[\begin{align*}
q(t,X) & = E( H(t,X) I(T \leq t, \epsilon=2) I(T < C)/G_c(T,X) | X) = H(t,X) F_2(t,X),
\end{align*}\]
so that the influence function becomes
\[\begin{align*}
\int (X-e(t)) w(t) dM_1(t,X) + \int H(t,X) \frac{F_2(t,X)}{S(t,X)} \frac{1}{G_c(t,X)} dM_c(t,X),
\end{align*}\]
with \(H(t,X) = \int_t^{\infty} (X - e(s)) G(s,X) d\Lambda_1(s,X)\).

Augmenting the FG-estimator

Rather than using a larger censoring model, we can also compute the augmentation term directly, and then fit the FG-model based on this augmentation term and do a couple of iterations:

fgaugS <- FG_AugmentCifstrata(Event(time,status)~Z1+Z2+strata(Z1,Z2),data=dats,cause=1,E=fg$E)
summary(fgaugS)
#>
#>    n events
#>  400     12
#>
#>  400 clusters
#> coeffients:
#>    Estimate     S.E.  dU^-1/2 P-value
#> Z1  0.69686  0.34898  0.38882  0.0458
#> Z2 -0.85929  0.60243  0.61478  0.1538
#>
#> exp(coeffients):
#>    Estimate    2.5%  97.5%
#> Z1  2.00744 1.01296 3.9783
#> Z2  0.42346 0.13002 1.3791

fgaugS2 <- FG_AugmentCifstrata(Event(time,status)~Z1+Z2+strata(Z1,Z2),data=dats,cause=1,E=fgaugS$E)
summary(fgaugS2)
#>
#>    n events
#>  400     12
#>
#>  400 clusters
#> coeffients:
#>    Estimate     S.E.  dU^-1/2 P-value
#> Z1  0.69686  0.34898  0.38882  0.0458
#> Z2 -0.85929  0.60243  0.61478  0.1538
#>
#> exp(coeffients):
#>    Estimate    2.5%  97.5%
#> Z1  2.00744 1.01296 3.9783
#> Z2  0.42346 0.13002 1.3791

fgaugS3 <- FG_AugmentCifstrata(Event(time,status)~Z1+Z2+strata(Z1,Z2),data=dats,cause=1,E=fgaugS2$E)
summary(fgaugS3)
#>
#>    n events
#>  400     12
#>
#>  400 clusters
#> coeffients:
#>    Estimate     S.E.  dU^-1/2 P-value
#> Z1  0.69686  0.34898  0.38882  0.0458
#> Z2 -0.85929  0.60243  0.61478  0.1538
#>
#> exp(coeffients):
#>    Estimate    2.5%  97.5%
#> Z1  2.00744 1.01296 3.9783
#> Z2  0.42346 0.13002 1.3791

Again we note slightly smaller standard errors when augmenting the estimator.

The function computes the augmentation term for fixed \(E(t)\) based on the current \(\hat \beta\),
\[\begin{align*}
U_n^{A} = \sum_{i=1}^n \int_{0}^{\infty} \frac{F_2(t,X_i)}{S(t,X_i) G_c(t,X_i)} H(t,X_i,E,G_c,\Lambda_1) dM_{ci}(t),
\end{align*}\]
using working models based on stratification to get \(F_1^s\) and \(F_2^s\), where the strata are given by strata() in the call. It then fits the FG model to solve
\[\begin{align*}
U_n^{A}(\beta_p) + U^{FG}_{n}(\beta) = 0.
\end{align*}\]

Then we may iterate to get a solution to the augmented score equation\[\begin{align*}U_n^{A}(\beta_\infty) + U^{FG}_{n}(\beta_\infty) = 0.\end{align*}\]
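As a small check (assuming the augmented fits expose their coefficients in the coef field, as cifreg does), successive iterations should give essentially identical estimates:

## iteration check: the change between the last two augmented fits should be negligible
max(abs(fgaugS3$coef - fgaugS2$coef))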

The censoring model here is one overall Kaplan-Meier.
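For intuition only, the overall censoring distribution \(G_c\) is simply the Kaplan-Meier estimator of the censoring times; a minimal sketch of that estimator, using the survival package (attached by mets) rather than the internal computation, is:

## sketch: overall Kaplan-Meier for the censoring distribution (status == 0 marks censoring)
Gc <- survfit(Surv(time, status == 0) ~ 1, data = dats)
plot(Gc, xlab = "time", ylab = "G_c(t)")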

The influence function for the augmented estimator is
\[\begin{align*}
\int (X-e(t)) w(t) dM_1(t,X) + \int H(t,X) \frac{F_2(t,X)}{S(t,X)} \frac{1}{G_c(t)} dM_c(t),
\end{align*}\]
and standard errors are based on this formula.

Logistic-link

rho1 <- 0.2; rho2 <- 10
n <- 400
beta <- c(0.0,-0.1,-0.5,0.3)
dats <- simul.cifs(n,rho1,rho2,beta,rc=0.5,rate=7,type="logistic")
dtable(dats,~status)
#>
#> status
#>   0   1   2
#> 166  16 218
dsort(dats) <- ~time

The model where
\[\begin{align*}
\mbox{logit}(F_1(t,X)) & = \alpha(t) + X^T \beta,
\end{align*}\]
which leads to an OR interpretation for \(F_1\), can also be fitted easily. However, the standard errors are harder to compute and are only approximate (assuming that the censoring weights are known), but this typically gives only a small error. In the timereg package the model can be fitted using different, more efficient estimators based on different weights, but this is much slower.

Fitting the model and getting OR’s

or <- cifreg(Event(time,status)~Z1+Z2,data=dats,cause=1)
summary(or)
#>
#>    n events
#>  400     16
#>
#>  400 clusters
#> coeffients:
#>    Estimate    S.E. dU^-1/2 P-value
#> Z1  0.10017 0.25562 0.25215  0.6952
#> Z2  0.21763 0.50407 0.50346  0.6659
#>
#> exp(coeffients):
#>    Estimate    2.5%  97.5%
#> Z1  1.10535 0.66976 1.8242
#> Z2  1.24313 0.46287 3.3387

SessionInfo

sessionInfo()
#> R version 4.5.1 (2025-06-13)
#> Platform: aarch64-apple-darwin25.0.0
#> Running under: macOS Tahoe 26.0.1
#>
#> Matrix products: default
#> BLAS:   /Users/kkzh/.asdf/installs/R/4.5.1/lib/R/lib/libRblas.dylib
#> LAPACK: /Users/kkzh/.asdf/installs/R/4.5.1/lib/R/lib/libRlapack.dylib;  LAPACK version 3.12.1
#>
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#>
#> time zone: Europe/Copenhagen
#> tzcode source: internal
#>
#> attached base packages:
#> [1] stats     graphics  grDevices utils     datasets  methods   base
#>
#> other attached packages:
#> [1] timereg_2.0.7  survival_3.8-3 mets_1.3.8
#>
#> loaded via a namespace (and not attached):
#>  [1] cli_3.6.5           knitr_1.50          rlang_1.1.6
#>  [4] xfun_0.53           jsonlite_2.0.0      future.apply_1.20.0
#>  [7] listenv_0.9.1       lava_1.8.1          htmltools_0.5.8.1
#> [10] sass_0.4.10         rmarkdown_2.30      grid_4.5.1
#> [13] evaluate_1.0.5      jquerylib_0.1.4     fastmap_1.2.0
#> [16] numDeriv_2016.8-1.1 yaml_2.3.10         mvtnorm_1.3-3
#> [19] lifecycle_1.0.4     compiler_4.5.1      codetools_0.2-20
#> [22] ucminf_1.2.2        Rcpp_1.1.0          future_1.67.0
#> [25] lattice_0.22-7      digest_0.6.37       R6_2.6.1
#> [28] parallelly_1.45.1   parallel_4.5.1      splines_4.5.1
#> [31] Matrix_1.7-4        bslib_0.9.0         tools_4.5.1
#> [34] globals_0.18.0      cachem_1.1.0
