A cure model calendar-based design

Introduction

We present a study design for a time-to-event outcome based on a cure model (Rodrigues et al. (2009)). In this case, it is assumed that the tail behavior of the survival curve is of substantial interest and there is no desire to stop and do a final analysis before substantial follow-up through 4 years has been allowed to accrue. It is further assumed that if substantial events have not accrued in this time period, then some sacrifice in power would not be unreasonable. Because of this, as well as substantial variability in event accrual caused by plausible differences in event rates, we use a calendar-based design, including calendar-based spending (Lan and DeMets (1989)).

We discuss some of the potential advantages and disadvantages of the cure model and calendar-based design in cases where hazard rates for events decrease substantially over time and the true underlying distributions may meaningfully deviate from what is anticipated at the time of design.

The Poisson mixture model

The Poisson mixture model is a cure model that can be useful when the failure rate in a population is expected to decline substantially over time based on historical data. It also has the property that if control group time-to-event follows a Poisson mixture distribution, then a proportional hazards assumption for treatment effect will yield another Poisson mixture distribution for the experimental group. The model is flexible and easy to use in that the control distribution is specified with two parameters in a transparent fashion: the cure rate and one other survival rate at an arbitrarily specified time point.

The Poisson mixture model (Rodrigues et al. (2009)) assumes a cure rate \(p\) to represent the patients who benefit long-term. The survival function as a function of time \(t\) for a control group (\(c\)) is:

\[S_c(t)=\exp(-\theta(1-\exp(-\lambda t))),\] where \(\theta = -\log(p)\), \(\lambda > 0\) is a constant hazard rate, and \(t\ge 0\). The component \(\exp(-\lambda t)\) is an exponential survival distribution; while it could be replaced with an arbitrary survival distribution on \(t>0\) for the mixture model, the exponential model is simple, adequately flexible, and easy to explain. This two-parameter model can be specified by the cure rate and the assumed survival rate \(S_c(t_1)\) at some time \(0 < t_1 < \infty\). We can solve for \(\theta\) and \(\lambda\) as follows:

\[S_c(\infty) = e^{-\theta} \Rightarrow \theta = -\log(S_c(\infty)),\] and with a little algebra we can solve for \(\lambda\): \[S_c(t_1) = \exp(-\theta(1-\exp(-\lambda t_1))) \Rightarrow \lambda = -\log(1 + \log(S_c(t_1))/\theta)/t_1.\] We note that under a proportional hazards assumption with hazard ratio \(\gamma > 0\), the survival function for the experimental group (\(e\)) is:

\[S_e(t)=\exp(-\theta\gamma(1-\exp(-\lambda t))).\] For any setting chosen, it is ideal to be able to cite published literature and other rationale for study assumptions and show that the Poisson mixture assumptions for the control group reasonably match historical data.
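As a quick numeric check of the formulas above (using, for illustration, an assumed 50% cure rate and 65% survival at month 24, matching the Scenario 1 assumptions below), we can recover \(\theta\) and \(\lambda\) and confirm the fitted model reproduces the specified survival rates:

```r
cure_rate <- 0.5 # assumed cure rate, S_c(infinity)
t1 <- 24 # time point with a specified survival rate
s1 <- 0.65 # assumed survival rate at t1
theta <- -log(cure_rate)
lambda <- -log(1 + log(s1) / theta) / t1
S_c <- function(t) exp(-theta * (1 - exp(-lambda * t)))
all.equal(S_c(t1), s1) # TRUE: model reproduces the specified rate
all.equal(S_c(1e6), cure_rate) # TRUE: survival plateaus at the cure rate
```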

Supporting functions

We create the following functions to support examples below.

Most readers should skip reviewing this code.

```r
# Poisson mixture survival
pPM <- function(x = 0:20, cure_rate = .5, t1 = 10, s1 = .6) {
  theta <- -log(cure_rate)
  lambda <- -log(1 + log(s1) / theta) / t1
  return(exp(-theta * (1 - exp(-lambda * x))))
}
# Poisson mixture hazard rate
hPM <- function(x = 0:20, cure_rate = .5, t1 = 10, s1 = .6) {
  theta <- -log(cure_rate)
  lambda <- -log(1 + log(s1) / theta) / t1
  return(theta * lambda * exp(-lambda * x))
}
```
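These functions make it easy to verify the proportional hazards property noted above: with hazard ratio \(\gamma\), experimental survival is simply control survival raised to the power \(\gamma\). A brief check (the values here are illustrative, not the design's):

```r
x <- 0:48
sc <- pPM(x, cure_rate = .5, t1 = 24, s1 = .65) # control survival
hc <- hPM(x, cure_rate = .5, t1 = 24, s1 = .65) # control hazard
gamma <- 0.7 # illustrative hazard ratio
se <- sc^gamma # experimental survival under proportional hazards
# With gamma < 1, experimental survival is at least control survival everywhere
all(se >= sc) # TRUE
# The hazard rate declines monotonically toward zero under the cure model
all(diff(hc) < 0) # TRUE
```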

Scenario assumptions

We consider three scenarios to demonstrate how spending can impact the potential for trial success and a full understanding of treatment group differences. The following can be adjusted by the reader and the vignette re-run.

```r
# Control group assumptions for three Poisson mixture cure models
cure_rate <- c(.5, .35, .55)
# Second time point for respective models
t1 <- c(24, 24, 24)
# Survival rate at 2nd time point for respective models
s1 <- c(.65, .5, .68)
time_unit <- "month"
# Hazard ratio for experimental versus control for respective models
hr <- c(.7, .75, .7)
# Total study duration
study_duration <- c(48, 48, 56)
# Number of bins for piecewise approximation of Poisson mixture rates
bins <- 5
```

We will assume a constant enrollment rate for the duration of enrollment, allowing different assumed enrollment durations by scenario. The following code can be easily changed to study alternate scenarios.

```r
# This code should be updated by user for their scenario
# Enrollment duration by scenario
enroll_duration <- c(12, 12, 20)
# Dropout rate (exponential failure rate per time unit) by scenario
dropout_rate <- c(.002, .001, .001)
```
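As a quick sanity check on these assumptions, an exponential dropout rate of 0.002 per month implies that only about 9% of patients drop out over a 48-month study:

```r
# Probability of dropout by time t under a constant (exponential) dropout rate
dropout_prob <- function(t, rate) 1 - exp(-rate * t)
round(dropout_prob(48, 0.002), 3) # 0.092
```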

Examples

The points in the following graph indicate where the underlying cumulative hazard matches the piecewise exponential of the specified cure rate models by scenario. The piecewise failure model is used to derive the sample size and targeted events over time in the trial.

We also evaluate the failure rate over time for scenario 1, which is used below in the design derivation. Note that the piecewise intervals used to approximate changing hazard rates can be made arbitrarily small to get more precise approximations of the above. However, given the uncertainty of the underlying assumptions, it is not clear that this provides any advantage.
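One way to construct such a piecewise exponential approximation (a sketch; the cut points used in the actual derivation may differ) is to match the cure model's cumulative hazard at the endpoints of each interval:

```r
# Scenario 1 control group, approximated with `bins` equal-width intervals
bins <- 5
cut_points <- seq(0, 48, length.out = bins + 1)
# Cumulative hazard H(t) = -log(S_c(t)) from the Poisson mixture survival
H <- -log(pPM(cut_points, cure_rate = .5, t1 = 24, s1 = .65))
# Constant rate on each interval reproducing H exactly at the endpoints
piecewise_rate <- diff(H) / diff(cut_points)
piecewise_rate # rates decline over time, as expected under a cure model
```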

Event accumulation

Based on the above model, we predict how events will accumulate under either the null hypothesis of no failure rate difference or the alternate hypothesis where events accrue more slowly in the experimental group. We do this by scenario. We use as a denominator the final planned events under the alternate hypothesis for scenario 1.

Now we compare event accrual under the null and alternate hypotheses for each scenario, with 100% representing the targeted final events under scenario 1. The user should not have to update the code here. For the 3 scenarios studied, event accrual is quite different, creating different spending issues. As planned, the expected targeted event fraction reaches 1 for Scenario 1 at 48 months under the alternate hypothesis. Under the null hypothesis for this scenario, expected targeted events are reached at approximately 36 months. For Scenario 2, the expectation is that targeted events will be achieved in less than 24 months under both the null and alternative hypotheses. Under Scenario 3, the expected events under the alternative do not reach the target even by 60 months.
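The qualitative pattern described above can be reproduced with a simple numerical sketch: assuming uniform enrollment and ignoring dropout (a simplification; the actual computation accounts for both treatment groups and dropout), the expected event fraction at calendar time \(T\) averages the cure model's cumulative event probability over enrollment times:

```r
# Expected fraction of enrolled patients with an event by calendar time Tcal,
# assuming uniform enrollment over [0, enroll_dur]; dropout is ignored here
expected_event_frac <- function(Tcal, enroll_dur, surv_fn) {
  u <- seq(0, enroll_dur, length.out = 401) # enrollment times
  mean(1 - surv_fn(pmax(Tcal - u, 0))) # patients enrolled after Tcal contribute 0
}
surv1 <- function(t) pPM(t, cure_rate = .5, t1 = 24, s1 = .65) # Scenario 1 control
# Event fractions at the four planned analysis times grow toward a plateau
sapply(c(14, 24, 36, 48), expected_event_frac, enroll_dur = 12, surv_fn = surv1)
```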

Study design

Design assumptions

We choose calendar-based timing for analyses as well as for spending. This is not done automatically by the gsSurv() function, but is done using the gsSurvCalendar() function. There are two things gsSurvCalendar() takes care of: converting the specified calendar analysis times into the corresponding expected event fractions under design assumptions, and spending Type I and Type II error according to calendar fraction rather than event fraction.

We begin by specifying calendar times of analysis and find the corresponding fractions of final planned events and calendar time under design assumptions. Having the first interim at 14 months rather than 12 was selected to get the expected events well above 100.

```r
# Calendar time from start of randomization until each analysis time
calendarTime <- c(14, 24, 36, 48)
```

Now we move on to other design assumptions.

```r
# Get hazard rate info for Scenario 1 control group
control <- hazard %>% filter(Scenario == 1, Treatment == "Control")
# Failure rates
lambdaC <- control$hazard_rate
# Interval durations
S <- (control$Time - control$time_lagged)[1:(bins - 1)]
# 1-sided Type I error
alpha <- 0.025
# Type II error (1 - power)
beta <- .1
# Test type 6: asymmetric 2-sided design, non-binding futility bound
test.type <- 6
# 1-sided Type I error used for safety (for asymmetric 2-sided design)
astar <- .2
# Spending functions (sfu, sfl) and parameters (sfupar, sflpar)
sfu <- sfHSD
sfupar <- -3
sfl <- sfLDPocock # Near-equal Z-values for each analysis
sflpar <- NULL # Not needed for Pocock spending
# Dropout rate (exponential parameter per unit of time)
dropout_rate <- 0.002
# Experimental / control randomization ratio
ratio <- 1
```

Study design and event accumulation

We now assume a trial enrolled with a constant enrollment rate over 12 months and a trial duration of 48 months. As noted above, the event accumulation pattern is highly sensitive to the design assumptions. That is, deviations from plan in accrual, in the hazard ratio overall or over time, or relatively minor deviations from the cure model assumption could substantially change the calendar time of event-based analysis timing. Thus, calendar-based timing and spending (Lan and DeMets (1989)) may have some appeal to make the timing of analyses more predictable. The main risk would likely be under-accumulation of the final targeted events for the trial. The targeted 4-year window may be considered clinically important as well as an important limitation on trial duration. We use the above predicted information fractions at 14, 24, 36, and 48 months to plan a calendar-based design. Calendar-based spending is likely to give more conservative interim bounds, since the calendar fractions are lower than the information fractions shown in the text overlay of the plot after the first interim.

We now set up a calendar-based design.

```r
design_calendar <- gsSurvCalendar(
  calendarTime = calendarTime,
  spending = "calendar",
  alpha = alpha,
  beta = beta,
  astar = astar,
  test.type = test.type,
  hr = hr[1],
  R = enroll_duration[1],
  gamma = 1,
  minfup = study_duration[1] - enroll_duration[1],
  ratio = ratio,
  sfu = sfu,
  sfupar = sfupar,
  sfl = sfl,
  sflpar = sflpar,
  lambdaC = lambdaC,
  S = S
)
design_calendar %>%
  gsBoundSummary(exclude = c("B-value", "CP", "CP H1", "PP")) %>%
  gt() %>%
  tab_header(
    title = "Calendar-Based Design",
    subtitle = "Calendar Spending"
  )
```
Calendar-Based Design: Calendar Spending

| Analysis | Value | Efficacy | Futility |
|----------|-------|----------|----------|
| IA 1: 36% | Z | 2.9057 | -1.3967 |
| N: 892 | p (1-sided) | 0.0018 | 0.9188 |
| Events: 123 | ~HR at bound | 0.5911 | 1.2875 |
| Month: 14 | Spending | 0.0018 | 0.0812 |
| | P(Cross) if HR=1 | 0.0018 | 0.0812 |
| | P(Cross) if HR=0.7 | 0.1740 | 0.0004 |
| IA 2: 67% | Z | 2.7193 | -1.3968 |
| N: 892 | p (1-sided) | 0.0033 | 0.9188 |
| Events: 228 | ~HR at bound | 0.6970 | 1.2037 |
| Month: 24 | Spending | 0.0027 | 0.0428 |
| | P(Cross) if HR=1 | 0.0046 | 0.1240 |
| | P(Cross) if HR=0.7 | 0.4991 | 0.0004 |
| IA 3: 88% | Z | 2.3641 | -1.2250 |
| N: 892 | p (1-sided) | 0.0090 | 0.8897 |
| Events: 298 | ~HR at bound | 0.7602 | 1.1527 |
| Month: 36 | Spending | 0.0066 | 0.0416 |
| | P(Cross) if HR=1 | 0.0111 | 0.1656 |
| | P(Cross) if HR=0.7 | 0.7683 | 0.0004 |
| Final | Z | 2.0027 | -1.0839 |
| N: 892 | p (1-sided) | 0.0226 | 0.8608 |
| Events: 337 | ~HR at bound | 0.8039 | 1.1254 |
| Month: 48 | Spending | 0.0139 | 0.0344 |
| | P(Cross) if HR=1 | 0.0250 | 0.2000 |
| | P(Cross) if HR=0.7 | 0.9000 | 0.0004 |

Considerations

There are a few things to note for the above design:

References

Jennison, Christopher, and Bruce W. Turnbull. 2000. Group Sequential Methods with Applications to Clinical Trials. Boca Raton, FL: Chapman & Hall/CRC.

Lan, K. K. G., and David L. DeMets. 1989. "Group Sequential Procedures: Calendar Versus Information Time." Statistics in Medicine 8: 1191–98.

Liu, Qing, and Keaven M. Anderson. 2008. "On Adaptive Extensions of Group Sequential Trials for Clinical Investigations." Journal of the American Statistical Association 103 (484): 1621–30.

Rodrigues, Josemar, Vicente G. Cancho, Mário de Castro, and Francisco Louzada-Neto. 2009. "On the Unification of Long-Term Survival Models." Statistics & Probability Letters 79 (6): 753–59.
