Saturday, September 16, 2017

Randomized withdrawal design and delayed start design in rare disease clinical trials

Last week at the 4th Duke-Industry Statistics Workshop, I participated in a session, "Innovative clinical trial designs", organized by Dr. Anastasia Ivanova, a professor at UNC.

I discussed 'randomized withdrawal design and delayed start design in rare disease clinical trials'. The presentation slides are included in the link below.


In rare disease clinical trials, we should think more carefully when designing the trial and go beyond the typical, conventional RCT (randomized, controlled trial).

In the presentation, the following materials were referenced:

Randomized Withdrawal Design or Randomized Discontinuation Trial:

Monday, August 07, 2017

Another Way for Constructing Stopping Rule for Safety Monitoring of Clinical Trials

In my previous post "Constructing stopping rule for safety monitoring", I discussed the use of the exact binomial confidence interval as a way to construct a stopping rule, but that was for a single-arm study.

For a randomized, controlled study, a similar approach can be used, but we have to calculate the exact confidence interval for the difference of two binomial proportions. We can then judge whether there is an excessive or elevated risk in the experimental arm that warrants stopping the study for safety reasons.

I recently read an oncology study protocol and noticed the following language describing the stopping criteria:
An independent DMC will review accumulating safety data at scheduled intervals  with attention focused on the percentage of subjects with SAEs, AEs of particular concern, Grade 3 or 4 toxicities, and any Grade 5 toxicity considered at least possibly related to study treatment. Excess risk will be determined according to the lower 97.5% exact lower confidence bound on the difference between incidence rates for Group B minus Group A; a lower bound greater than 0% will be flagged as a possible reason to stop the trial. Incidence calculations will depend on the respective numerators and denominators at the time of each interim look. Wilson scores method will be used to calculate confidence limits.

To use this approach as a stopping rule, we need to calculate the confidence interval repeatedly as the data accumulate. While the Wilson score method is mentioned in the protocol (strictly speaking, it is an asymptotic rather than an exact method), there are other methods for this calculation as well.
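As a cross-check outside SAS, the Wilson-score-based interval for the difference of two proportions (Newcombe's hybrid score method, one of the methods reported by PROC FREQ's RISKDIFF option) takes only a few lines. This is a minimal Python sketch for illustration; the function names are mine, and this is an asymptotic method, not the exact method discussed below.

```python
import math

Z975 = 1.959963984540054  # two-sided 95% normal quantile

def wilson(x, n, z=Z975):
    """Wilson score interval for a single binomial proportion x/n."""
    p = x / n
    center = (p + z * z / (2 * n)) / (1 + z * z / n)
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / (1 + z * z / n)
    return center - half, center + half

def newcombe(x1, n1, x2, n2, z=Z975):
    """Newcombe hybrid score CI for p1 - p2, no continuity correction."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1, z)
    l2, u2 = wilson(x2, n2, z)
    lower = (p1 - p2) - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2)
    upper = (p1 - p2) + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2)
    return lower, upper

# Example: 4/10 events in one group vs 0/10 in the other
lo, hi = newcombe(4, 10, 0, 10)
```

The score interval will typically be narrower than the conservative exact intervals such as Santner-Snell.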

In a paper by Will Garner (2007), "Constructing Confidence Intervals for the Differences of Binomial Proportions in SAS", a total of 17 methods were discussed for calculating the confidence interval for the difference of two binomial proportions, a couple of which are exact methods. In SAS, PROC FREQ can be used to calculate the exact confidence interval based on the method by Santner and Snell and the method by Chan and Zhang.

In the example below, I constructed a data set with two scenarios:
Scenario #1: 4 out of 10 patients in group A having an event and 0 out of 10 patients in group B having an event.
Scenario #2: 5 out of 10 patients in group A having an event and 0 out of 10 patients in group B having an event.

The lower bounds of the 95% confidence intervals for scenario #1 and scenario #2 are -0.0856 and 0.0179, respectively, based on the Santner-Snell exact method. Since the lower bound for scenario #2 is greater than 0, the safety stopping rule would be triggered.

data testdata;
 input trial treat $ x n alpha;
 datalines;
 1 A 4 10 0.05
 1 B 0 10 0.05
 2 A 5 10 0.05
 2 B 0 10 0.05
;
data testdat1;
 set testdata;
 by trial;
 if first.trial then treatn = 1;
 else treatn = 2;
 y = n - x; p = x/n; z = probit(1-alpha/2);
run;
data testdat2a(keep=trial x y z rename=(x=x1 y=y1));
 set testdat1;
 where treatn = 1;
run;
data testdat2b(keep=trial x y rename=(x=x2 y=y2));
 set testdat1;
 where treatn = 2;
run;
data testdat2;
 merge testdat2a testdat2b;
 by trial;
run;
proc transpose data = testdat1 out = x_data(rename=(_NAME_=outcome COL1=count));
 by trial treat;
 var x y;
run;
/* Methods 1, 6 (9.4 only), 10, 12, and 13 (9.4 only) */
ods output PdiffCLs=asymp1;
proc freq data=x_data;
 by trial;
 tables treat*outcome /riskdiff (CL=(WALD MN WILSON AC HA));
 weight count;
run;
data asymp1;
 set asymp1;
 length method $25.;
 if Type = "Agresti-Caffo" then method = "13. Agresti-Caffo";
 else if Type = "Hauck-Anderson" then method = "12. Hauck-Anderson";
 else if Type = "Miettinen-Nurminen" then method = " 6. Miettinen-Nurminen";
 else if index(Type,"Newcombe") > 0 then method = "10. Score, no CC";
 else if Type = "Wald" then method = " 1. Wald, no CC";
 keep trial method LowerCL UpperCL;
run;

/* Method 5: MEE (9.4 only) */

ods output PdiffCLs=asymp2;
proc freq data=x_data;
 by trial;
 tables treat*outcome /riskdiff(CL=(MN(CORRECT=NO)));
 weight count;
run;
data asymp2;
 set asymp2;
 length method $25.;
 method = " 5. Mee";
 keep trial method LowerCL UpperCL;
run;

/* Method 3: Haldane */
data asymp3;
 set testdat2;
 by trial;
 length method $25.;
 method = " 3. Haldane";
 p1 = x1/(x1+y1);
 p2 = x2/(x2+y2);
 psi = (x1/(x1+y1) + x2/(x2+y2))/2;
 u = (1/(x1+y1) + 1/(x2+y2))/4;
 v = (1/(x1+y1) - 1/(x2+y2))/4;
 w = z/(1+z*z*u)*sqrt(u*(4*psi*(1-psi)-(p1-p2)*(p1-p2)) + 2*v*(1-2*psi)*(p1-p2) + 4*z*z*u*u*(1-psi)*psi + z*z*v*v*(1-2*psi)*(1-2*psi));
 theta = ((p1-p2)+z*z*v*(1-2*psi))/(1+z*z*u);
 LowerCL = max(-1,theta - w);
 UpperCL = min(1,theta + w);
 keep trial method LowerCL UpperCL;
run;
/* Method 4: Jeffreys-Perks */
data asymp4;
 set testdat2;
 by trial;
 length method $25.;
 method = " 4. Jeffreys-Perks";
 p1 = x1/(x1+y1);
 p2 = x2/(x2+y2);
 psi = ((x1+0.5)/(x1+y1+1) + (x2+0.5)/(x2+y2+1))/2; /* Same as Haldane, but +1/2 success and failure */
 u = (1/(x1+y1) + 1/(x2+y2))/4;
 v = (1/(x1+y1) - 1/(x2+y2))/4;
 w = z/(1+z*z*u)*sqrt(u*(4*psi*(1-psi)-(p1-p2)*(p1-p2)) + 2*v*(1-2*psi)*(p1-p2) + 4*z*z*u*u*(1-psi)*psi + z*z*v*v*(1-2*psi)*(1-2*psi));
 theta = ((p1-p2)+z*z*v*(1-2*psi))/(1+z*z*u);
 LowerCL = max(-1,theta - w);
 UpperCL = min(1,theta + w);
 keep trial method LowerCL UpperCL;
run;
/* Method 16: Brown and Li's Jeffreys Method */
data asymp5;
 set testdat2;
 by trial;
 length method $25.;
 method = "16. Brown-Li";
 p1 = (x1+0.5)/(x1+y1+1);
 p2 = (x2+0.5)/(x2+y2+1);
 var = p1*(1-p1)/(x1+y1) + p2*(1-p2)/(x2+y2);
 LowerCL = max(-1,(p1-p2) - z*sqrt(var));
 UpperCL = min(1,(p1-p2) + z*sqrt(var));
 keep trial method LowerCL UpperCL;
run;
data asymp;
 set asymp1
 asymp2
 asymp3
 asymp4
 asymp5
 ;
run;
/* Methods 2 and 11 */
ods output PdiffCLs=asymp_cc;
proc freq data=x_data;
 by trial;
 tables treat*outcome /riskdiff(correct CL=(wald wilson));
 weight count;
run;
data asymp_cc;
 set asymp_cc;
 length method $25.;
 if index(Type,"Newcombe") > 0 then method = "11. Score, CC";
 else if index(Type,"Wald") > 0 then method = " 2. Wald, CC";
 keep trial method LowerCL UpperCL;
run;
/* Exact methods: Methods 14 and 15 (Exact) */
ods output PdiffCLs=exact_ss;
proc freq data=x_data;
 by trial;
 tables treat*outcome /riskdiff(cl=(exact));
 weight count;
 exact riskdiff;
run;
data exact_ss;
 set exact_ss;
 length method $25.;
 method = "14. Santner-Snell";
 keep trial method LowerCL UpperCL;
run;

data exact;
 set exact_ss;
run;

/* Combine all of the outputs together */
data final;
 set asymp asymp_cc exact;
run;
/* Sort all of the outputs by trial and method */
proc sort data = final out = final;
 by trial method;
run;

proc print data=final;
 title "Methods and 95% Confidence Interval for Difference between two rates";
run;

Tuesday, August 01, 2017

Steroid Tapering Design Clinical Trials

In the most recent issue of the New England Journal of Medicine, Stone et al published the results of the "Trial of Tocilizumab in Giant-Cell Arteritis". The study used a steroid tapering design with the primary efficacy endpoint of "the rate of sustained glucocorticoid-free remission at week 52 in each tocilizumab group as compared with the rate in the placebo group that underwent the 26-week prednisone taper."

There are some chronic diseases where the effective treatment is a high dose of steroids (corticosteroids such as prednisone). To control the symptoms, patients are usually put on long-term, high-dose steroid treatment. While the steroid treatment may be effective, it can cause serious, irreversible side effects.

The list of side effects of long-term steroid use includes, but is not limited to:
  • mood changes 
  • forgetfulness 
  • hair loss 
  • easy bruising 
  • a tendency toward high blood pressure and diabetes 
  • thinning of the bones (osteoporosis)
  • suppression of the adrenal glands
  • muscle weakness
  • weight gain
  • cataracts 
  • glaucoma

It would be useful to develop an alternative treatment that can replace long-term steroid use or at least minimize the steroid dose required. To investigate the effect of the alternative treatment, a clinical trial can be designed to demonstrate whether the alternative treatment allows the steroid dose to be tapered down to a very low or zero level while maintaining stable symptoms – we call this a steroid tapering or steroid sparing design.

In a steroid tapering design, the purpose of the study is not to pursue further improvement in disease symptoms. The study endpoint is based on the reduction in steroid dose while maintaining stable symptoms. Possible efficacy endpoints include:
  • Steroid dose reduction at Week xx from baseline
  • Percent of subjects with zero steroid dose at Week xx
  • Percent of subjects with steroid dose less than xx mg at Week xx
  • Percent of subjects with steroid dose reduction greater than or equal to 50%
  • AUC for steroid dose between week x to week y
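For the AUC endpoint in particular, the cumulative steroid exposure between two visits is naturally computed with the trapezoidal rule over the recorded doses. A minimal Python sketch, where the function name, visit weeks, and taper schedule are all hypothetical:

```python
def dose_auc(weeks, doses):
    """Trapezoidal AUC of steroid dose over time (mg x weeks)."""
    auc = 0.0
    for i in range(1, len(weeks)):
        auc += (doses[i - 1] + doses[i]) / 2 * (weeks[i] - weeks[i - 1])
    return auc

# Hypothetical taper: 20 mg/day at week 0, tapered to 0 by week 24
weeks = [0, 4, 8, 12, 16, 20, 24]
doses = [20, 15, 10, 7.5, 5, 2.5, 0]
auc = dose_auc(weeks, doses)  # -> 200.0 mg x weeks
```

A smaller AUC in the active arm than in the placebo arm would indicate a steroid-sparing effect.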

In one study investigating the steroid tapering effect of IGIV in generalized myasthenia gravis, FDA confirmed during the pre-IND meeting that a treatment effect in reducing the steroid dose is meaningful. This study is sponsored by Grifols and is currently ongoing. As indicated on clinicaltrials.gov, the sponsor chose "the percent of subjects with steroid dose reduction greater than or equal to 50%" as the primary efficacy endpoint.
Efficacy and Safety of IGIV-C in Corticosteroid Dependent Patients With Generalized Myasthenia Gravis
When designing a steroid tapering trial, the following issues need to be addressed:
  • Steroid tapering design has a wash-in, wash-out feature. As the effect of the new treatment kicks in (if the active treatment is effective), the dose of steroids is reduced.
  • The purpose of the study is not the improvement in disease symptoms. The purpose is to maintain the symptoms (no deterioration) while the steroid dose is reduced. 
  • Considering the withdrawal effect of steroids, a steroid tapering design will include a run-in period – the early period when the new treatment has been added but steroid tapering has not yet started. To ensure patient safety, steroid dose tapering only starts at the end of the run-in period.
  • Changes or reductions in steroid dose could influence outcomes. The treatment effect of steroid reduction must be established while the disease symptoms are maintained. There should be a rule defining the worsening of clinical symptoms at which tapering must be slowed or stopped, as well as a standardized steroid tapering regimen and a standardized rescue measure for when disease symptoms are exacerbated by the tapering.
  • Subjects should be on a stable steroid dose before entering the study and before randomization. If patients are not on a stable steroid dose when entering the study, it is not possible at the end of the study to tease out whether the steroid dose reduction is due to fluctuation of the steroid dose itself or to the effect of the new treatment.
  • Stratified randomization can be used with the baseline steroid dose category as a stratification factor, ensuring that within each steroid dose category equal numbers of subjects are randomized to active treatment and placebo control. Patients on a higher steroid dose at baseline are more likely to have a steroid dose reduction; stratified randomization minimizes the bias this can cause.
  • If the endpoint is “the mean change from baseline in steroid dose”, the magnitude of the steroid reduction between the two treatment groups needs to be clinically meaningful.
  • In a steroid tapering design, there must be a rescue plan in case of symptom worsening / deterioration (or exacerbation) due to the decrease in steroid dose.
  • At the end of the study, there should be a safety follow-up period. 

There is an FDA Guidance for Industry, Systemic Lupus Erythematosus — Developing Medical Products for Treatment, in which the steroid tapering design is proposed:
d. Reduction in concomitant steroids Reducing corticosteroid use is an important goal in treatment of patients with SLE if it occurs in the context of a treatment that effectively controls disease activity. Therefore, for a medical product to be labeled as reducing corticosteroid usage, it should also demonstrate another clinical benefit, such as reduction in disease activity as the primary endpoint. In an add-on trial to test the steroid-sparing potential of a new medical product, patients should be enrolled during a flare and randomized to the addition of the new medical product or placebo to induction doses of corticosteroids. In both study arms, when patients achieve quiescent disease, the corticosteroid dose should be tapered to a maintenance dose that is not usually associated with major toxicities while still maintaining quiescence. The induction steroid dosage and duration of induction therapy and taper schedule should be based on the severity of disease activity in the dominant organ system involved.8 The evaluation of efficacy should be based on the proportion of patients in treatment and control groups that achieve a reduction in steroid dose to less than or equal to 10 mg per day of prednisone or equivalent, with quiescent disease and no flares (see definition above) for at least 3 consecutive months during a 1-year clinical trial. For a result to be clinically meaningful, the patient population should be on moderate to high doses of steroids at baseline. Trials should also assess the occurrence of clinically significant steroid toxicities.

The steroid tapering design can be used in various disease areas; the following examples are applications of the steroid tapering design in severe refractory asthma, myasthenia gravis, systemic lupus erythematosus, and giant cell arteritis (GCA).

The primary measure of efficacy in our study will be the nine-month prednisone AUC (months 3–12), which measures the total prednisone doses of each patient in nine months. A reduction of prednisone AUC demonstrates that patients improved on clinical grounds so that the prednisone dose could be decreased. If the patients receiving MTX have a smaller prednisone AUC compared to the placebo patients, this will have demonstrated the efficacy of MTX. 
Based on pre-IND discussions with FDA and consultants, it was decided that the primary efficacy variable for the corticosteroid reduction study should be, for patients who were corticosteroid dependent, a reduction of the patients’ current prednisone dose to 7.5 mg/day (upper limit of physiologic levels) or less, without worsening of SLE.
The design of the steroid sparing study was a forced titration; i.e., the patient’s steroid dose at each monthly visit was to be reduced, by algorithm, if her disease activity was stable or improved. However, when a patient worsened or flared, the associated increase in corticosteroid dose, if any, required to treat the patient’s exacerbation was at the physician’s discretion and not by algorithm. The steroid reduction algorithm was based on the patient’s disease activity improving or being stable, which was defined as no change in or a decrease in SLEDAI score in comparison to her previous visit. As such, one of the issues discussed at the pre-study investigator meeting was whether patients with low SLEDAI scores, and especially those with SLEDAI scores of 0, should be enrolled into the study. There was concern that those patients with low SLEDAI scores had inactive disease, and therefore would not be affected by steroid reduction, i.e., might not be steroid dependent. However, some investigators and consultants felt that if patients were truly dependent on steroids, their low SLEDAI scores represented active disease suppressed by corticosteroids, which would worsen or flare as soon as their corticosteroids were reduced. Therefore, because there was no experience with such trials, it was decided not to exclude patients with low SLEDAI scores. The concern regarding enrollment of potentially inactive SLE patients was revisited prior to unblinding of the study. In addition it was recognized that because of the forced downward titration of steroid dose as the patients’ disease improved or remained stable, other evaluations of disease activity such as SLEDAI, etc., would not be expected to improve.
The pivotal study was designed as a double-blind, randomized, placebo-controlled, parallel group trial to evaluate GL701 100 and 200 mg/day versus placebo in female patients with mild to moderate prednisone-dependent systemic lupus erythematosus (SLE).
The study included two primary efficacy variables. The first one was responder rate. A responder was defined as a patient with the achievement of a decrease in prednisone dose to 7.5 mg/day or less sustained for no less than three consecutive scheduled visits, including the termination visit (i.e., two consecutive months), on or after Visit 7. The second primary efficacy variable was percent decrease in prednisone dose determined by comparing the prescribed prednisone (or steroid equivalent) dose at Baseline (Qualifying Visit) and the last visit prednisone dose using the physician prescribed prednisone dose recorded on the Medication Record Form.
  • Belimumab May Have Potential As A Corticosteroid-Sparing Drug When Added To Standard-Of-Care Treatment For SLE, Research Suggests.
Research suggests “the monoclonal antibody belimumab (Benlysta) may have potential as a corticosteroid-sparing drug when added to standard-of-care treatment for systemic lupus erythematosus (SLE).” Investigators found, “in pooled data from two large randomized controlled trials,” that “this blocker of B-lymphocyte stimulator was moderately associated with a higher probability of corticosteroid dose reduction and a greater average dose reduction over” one year. The findings were published in Arthritis & Rheumatology.
This paper describes the design and operationalization of a blinded corticosteroid-tapering regimen for a randomized trial of tocilizumab in giant cell arteritis (GCA). The study design is sketched in the diagram below. The primary efficacy endpoint is “Proportion of patients in sustained remission at week 52 following induction and adherence to the protocol-defined prednisone taper regimen”.



Monday, July 24, 2017

Excel spreadsheet to calculate the p-value for Fisher Exact test

A colleague of mine wanted a small program to calculate p-values during the course of a randomized, open-label study. For those who don't want to use statistical software such as SAS, an Excel spreadsheet can serve the purpose.

                 Calculating p-value for Fisher exact test
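For those comfortable with a scripting language, the two-sided Fisher exact p-value can also be computed directly from the hypergeometric distribution in a few lines of Python. This is a minimal sketch (the function name is mine), not a replacement for validated software:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    r1, c1 = a + b, a + c          # first row total, first column total
    denom = comb(n, c1)
    def pmf(k):                    # hypergeometric P(cell (1,1) = k)
        return comb(r1, k) * comb(n - r1, c1 - k) / denom
    p_obs = pmf(a)
    k_lo, k_hi = max(0, r1 + c1 - n), min(r1, c1)
    # two-sided p: sum over all tables as likely as or less likely than observed
    return sum(pmf(k) for k in range(k_lo, k_hi + 1)
               if pmf(k) <= p_obs * (1 + 1e-9))

p = fisher_exact_2x2(3, 1, 1, 3)  # two-sided p = 34/70, about 0.4857
```

This follows the usual convention of summing the probabilities of all tables as extreme as or more extreme than the one observed; the small tolerance guards against floating-point ties.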

However, this is not a good practice and should be discouraged or prohibited. Even though this is a randomized, open-label study, the study team should refrain from continuously performing significance testing on the cumulative data.




Monday, July 10, 2017

Age Group in Pediatric, Perinatal or Preterm, and Geriatric Subjects

In a previous blog article, I discussed the “Pediatric use and geriatric use of drug and biological products”:
For Pediatric population: according to ICH guidance E11 "Clinical Investigation of Medicinal Products in the Pediatric Population", the pediatric population contains several sub-categories:
  • preterm newborn infants
  • term newborn infants (0 to 27 days)
  • infants and toddlers (28 days to 23 months)
  • children (2 to 11 years)
  • adolescents (12 to 16-18 years (dependent on region))
Notice that in FDA's guidance "General Considerations for Pediatric Pharmacokinetic Studies for Drugs and Biological Products", the age classification is a little bit different. I am assuming that ICH guidance E11 is the correct reference.
Geriatric population:
The geriatric population is defined as persons 65 years of age and older. There is no defined upper age limit.
Recently, I ran into several studies where the age groups needed to be further split.

The pediatric population can be further divided into 2-5, 6-11, and 12-16 years old.
This grouping was used in a pharmacokinetic study in pediatric patients with primary immunodeficiency. Per FDA’s request, the pediatric patients were further divided into the 2-5, 6-11, and 12-16 year old groups. FDA asked that the study contain subjects in each of the sub-groups so that it can be assessed whether the pharmacokinetics are consistent across the different pediatric sub-groups (or at least that there is no obvious difference among these sub-groups).
An Open-label, Single-sequence, Crossover Study to Evaluate the Pharmacokinetics, Safety and Tolerability of Subcutaneous GAMUNEX®-C in Pediatric Subjects With Primary Immunodeficiency
The age definition in the perinatal period is much trickier. In a policy statement by the Committee on Fetus and Newborn, “Age Terminology During the Perinatal Period”, various definitions of age are described:
  • Gestational age (or “menstrual age”) is the time elapsed between the first day of the last normal menstrual period and the day of delivery
  • “Chronological age” (or “postnatal” age) is the time elapsed after birth
  • Postmenstrual age (PMA) is the time elapsed between the first day of the last menstrual period and birth (gestational age) plus the time elapsed after birth (chronological age). Postmenstrual age is usually described in number of weeks and is most frequently applied during the perinatal period beginning after the day of birth.
In clinical trials with pre-term babies, the study endpoints are usually defined based on the postmenstrual age (PMA). For example, in an article by Bassler et al, “Early inhaled budesonide for the prevention of bronchopulmonary dysplasia”, the primary outcome was a composite of death or bronchopulmonary dysplasia at 36 weeks of postmenstrual age. While the study treatment is not given until after birth, the 36 weeks of postmenstrual age are counted from the first day of the last menstrual period. For different babies, the chronological age or observation period (when the death or BPD event is observed) will differ depending on the actual gestational age.

It can be illustrated in the following diagram.

  • Gestational age = birth date – the first day of the last normal menstrual period
  • Chronological age = assessment date or event date – birth date
  • Postmenstrual age = assessment date or event date – the first day of the last menstrual period
  • Postmenstrual age = gestational age + chronological age
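These age definitions are simple date arithmetic. A small Python sketch with hypothetical dates:

```python
from datetime import date

# Hypothetical dates for a preterm baby
lmp = date(2017, 1, 1)         # first day of the last normal menstrual period
birth = date(2017, 7, 15)      # day of delivery
assessment = date(2017, 9, 9)  # date of assessment (or event)

gestational_days = (birth - lmp).days           # 195 days, about 27.9 weeks
chronological_days = (assessment - birth).days  # 56 days, 8 weeks
pma_days = gestational_days + chronological_days
pma_weeks = pma_days / 7                        # about 35.9 weeks
```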

Geriatric population can be further divided: 
In a study of interstitial lung diseases (ILDs) in elderly patients, there was a substantial number of patients aged >= 80 years. The traditional definition of the geriatric population using 65 years old as a cut point is not sufficient. We ended up further dividing the geriatric population into 65 to < 80 years old and >= 80 years old. We think this provides more meaningful sub-grouping to assess the impact of age group in the ILD indication.

Monday, July 03, 2017

(Bayes) Success Run Theorem for Sample Size Estimation in Medical Device Trial

In a recent discussion about the sample size requirement for a clinical trial in the medical device field, one of my colleagues recommended using the “success run theorem” to estimate the sample size. The ‘success run theorem’ may also be called the ‘Bayes success run theorem’. In the process validation field, it is a typical method, based on the binomial distribution, that leads to a defined sample size.

Application of the success run theorem depends on the reliability of the new process (or new device). In medical device trials, reliability is the probability that an item (i.e., the device) will carry out its function satisfactorily for the stated period when used under the specified conditions. A reliability of 95% means that the medical device will function without problems 95% of the time.

With the success run theorem, we calculate the sample size so that we have 95% confidence that the device will run without failure (reliability). Usually, a 95% confidence level is used to demonstrate 95% reliability. With the ‘success run theorem’, the sample size can be calculated as:

                                 N = ln(1 - C) / ln(R)

Where N is the sample size needed, C is the confidence level, and R is the reliability.

With the typical 95% confidence level to demonstrate 95% reliability, a sample size of 59 (ln(0.05)/ln(0.95) = 58.4, rounded up) will be needed. An Excel spreadsheet was built for calculating the sample size using the success run theorem.
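The calculation itself is one line in any language. A Python sketch in place of the Excel spreadsheet (the function name is mine; note that ln(0.05)/ln(0.95) is approximately 58.4, which rounds up to 59 consecutive successes at the 95/95 level):

```python
import math

def success_run_n(confidence, reliability):
    """Success run theorem: smallest N with 1 - R**N >= C, i.e. ln(1-C)/ln(R) rounded up."""
    return math.ceil(math.log(1 - confidence) / math.log(reliability))

n = success_run_n(0.95, 0.95)  # ln(0.05)/ln(0.95) = 58.4 -> 59 consecutive successes
```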

The website below explains how the success run theorem formula is derived. With C = 1 – R^(N+1), we would have N = [ln(1-C)/ln(R)] – 1, slightly different from the formula above.
 How do you derive the Success-Run Theorem from the traditional form of Bayes Theorem?
The derivation above is based on a uniform prior for reliability (a conservative assumption), which assumes no information from predicate devices and gives equal weight to every reliability value between 0 and 1.

In the medical device field, devices evolve and are constantly being improved. When we evaluate a new device or a next-generation device, there is usually some prior information to build on. Therefore, instead of a uniform prior for reliability, a Bayesian technique with a mixture of beta priors for reliability can be applied. Using mixtures of beta priors, we can incorporate historical information from the predicate device to decrease the sample size requirement.

We have seen this applied in automotive electronics attribute testing, but have not seen any application in FDA-regulated medical device testing.



Saturday, June 17, 2017

Statistical Analysis Plan in Clinical Trial Registries

Recently, a question came up when I searched clinicaltrialregistry.eu – the EU clinical trial registry website and the counterpart of clinicaltrials.gov in the US. Should clinical trial registries include the statistical analysis plan (for primary and secondary efficacy endpoints)? The statistical analysis plan could include the statistical methods for the primary and secondary endpoints, missing data handling, stopping rules for early termination of the study, justification of the sample size estimation, and so on.
For clinicaltrials.gov in the US, the Protocol Registration Data Element Definitions for Interventional and Observational Studies require the inclusion of some details about statistical analyses:
Detailed Description Definition:
Extended description of the protocol, including more technical information (as compared to the Brief Summary), if desired. Do not include the entire protocol; do not duplicate information recorded in other data elements, such as Eligibility Criteria or outcome measures. Limit: 32,000 characters. 
For Patient Registries: Also describe the applicable registry procedures and other quality factors (for example, third party certification, on-site audit). In particular, summarize any procedures implemented as part of the patient registry, including, but not limited to the following: 
  • Quality assurance plan that addresses data validation and registry procedures, including any plans for site monitoring and auditing.
  • Data checks to compare data entered into the registry against predefined rules for range or consistency with other data fields in the registry.
  • Source data verification to assess the accuracy, completeness, or representativeness of registry data by comparing the data to external data sources (for example, medical records, paper or electronic case report forms, or interactive voice response systems).
  • Data dictionary that contains detailed descriptions of each variable used by the registry, including the source of the variable, coding information if used (for example, World Health Organization Drug Dictionary, MedDRA), and normal ranges if relevant.
  • Standard Operating Procedures to address registry operations and analysis activities, such as patient recruitment, data collection, data management, data analysis, reporting for adverse events, and change management.
  • Sample size assessment to specify the number of participants or participant years necessary to demonstrate an effect.
  • Plan for missing data to address situations where variables are reported as missing, unavailable, non-reported, uninterpretable, or considered missing because of data inconsistency or out-of-range results.
  • Statistical analysis plan describing the analytical principles and statistical techniques to be employed in order to address the primary and secondary objectives, as specified in the study protocol or plan. 
In the EU, clinicaltrialregistry.eu is mainly based on the EudraCT database. As part of the clinical trial application (similar to an IND in the US), the sponsor needs to provide the clinical trial protocol information to be entered into the EudraCT database.

In the guidance “Detailed guidance on the European clinical trials database (EUDRACT Database)”, information about the clinical trial design is requested, but there is no mention of the statistical analysis plan.

As a matter of fact, clinical trial registries across different countries are all supposed to meet the requirements of the International Clinical Trials Registry Platform (ICTRP) from the World Health Organization. In the list of elements for the WHO Trial Registration Data Set, there is no mention of a statistical analysis plan as part of the registration elements.

Nevertheless, there seem to be different understandings about how much detail about a clinical trial should be posted in clinical trial registries. Some companies posted very detailed information, including how the clinical trial data would be analyzed. Other companies were very restrained and posted as little information as possible.

In terms of the elements regarding the statistical analyses, there are actually more studies with such details in clinicaltrialregistry.eu than in clinicaltrials.gov, even though the requirement to include the statistical analysis plan is mentioned by clinicaltrials.gov, not by clinicaltrialregistry.eu. For example, for the study “A Multicenter, Randomized, Double-Blind, Phase 3 Study of Ramucirumab (IMC-1121B) Drug Product and Best Supportive Care (BSC) Versus Placebo and BSC as Second-Line Treatment in Patients With Hepatocellular Carcinoma Following First-Line Therapy With Sorafenib”, a lot of detail about the statistical analyses is provided in clinicaltrialregistry.eu.

When I tried to see whether the interim analysis and its corresponding boundary method are mentioned in clinicaltrialregistry.eu, I could clearly see the inconsistencies across different trial sponsors.

Here are some studies where the interim analysis and the boundary method are mentioned.
Here are some studies where the interim analysis is mentioned, but the boundary method is not.


Sunday, June 04, 2017

Calculating exact confidence interval for binomial proportion within each group using the Clopper-Pearson method

The Clopper-Pearson confidence interval is commonly used as the exact confidence interval for a binomial proportion (for example, a response rate or an incidence rate). The confidence interval is calculated for a single group; therefore, the Clopper-Pearson method is not for calculating the confidence interval for the difference between two groups.

Many oncology studies have no concurrent control group. For the response rate, the exact confidence interval is constructed (usually through the Clopper-Pearson method), and the lower limit of the 95% confidence interval is then compared with the historical rate to determine whether there is a treatment effect.

Here are some examples in which the Clopper-Pearson method was used to calculate the exact confidence interval:

Medical and statistical review for Venetoclax NDA:
"For the primary efficacy analyses, statistical significance was determined by a two-sided p value less than 0.05 (one-sided less than 0.025). The assessment of ORR was performed once 70 subjects in the main cohort completed the scheduled 36-week disease assessment, progressed prior to the 36-week disease assessment, discontinued study drug for any reason, or after all treated subjects discontinued venetoclax, whichever was earlier. The ORR for venetoclax was tested to reject the null hypothesis of 40%. If the null hypothesis is rejected and the ORR is higher than 40%, then venetoclax has been shown to have an ORR significantly higher than 40%. The ninety-five percent (95%) confidence interval for ORR was based on binomial distribution (Clopper-Pearson exact method). "
Motzer et al (2015) Nivolumab versus Everolimus in Advanced Renal-Cell Carcinoma
"If superiority with regard to the primary end point was demonstrated, a hierarchical statistical testing procedure was followed for the objective response rate (estimated along with the exact 95% confidence interval with the use of the Clopper–Pearson method)"
Foster et al (2015) Sofosbuvir and Velpatasvir for HCV Genotype 2 and 3 Infection
"Point estimates and two-sided 95% exact confidence intervals that are based on the Clopper–Pearson method are provided for rates of sustained virologic response for all treatment groups, as well as selected sub-groups."
Cicardi et al (2010) Icatibant, a New Bradykinin-Receptor Antagonist, in Hereditary Angioedema
"Fisher’s exact test, with 95% confidence intervals calculated for each group by means of the Clopper–Pearson method, was used to compare the percentage of patients with clinically significant relief of the index symptom at 4 hours after the start of the study drug. Two-sided 95% confidence intervals for the difference in proportions were calculated with the use of the Anderson–Hauck correction."
The SAS manual describes the Clopper-Pearson confidence interval in terms of quantiles of the F distribution. The confidence interval using the Clopper-Pearson method can easily be calculated with the SAS PROC FREQ procedure. Alternatively, it can also be calculated directly using the formula or using an R function.

Using the venetoclax NDA as an example, the primary efficacy endpoint ORR (overall response rate) is calculated as 85 / 107 = 79.4%. The 95% confidence interval can be calculated using the Clopper-Pearson method as follows:

Using SAS Proc Freq:  
With PROC FREQ, we should get a 95% confidence interval of 70.5% – 86.6%.

data test2;
  input orr $ count @@;
datalines;
have 85
no 22
;

proc freq data=test2 order=data;
  weight count;
  * exact (Clopper-Pearson) confidence limits for the first level of orr;
  tables orr / binomial(exact) alpha=0.05;
run;

Using formula:

data test;
  input n n1 alpha;
  phat = n1/n;
  * F-distribution form of the Clopper-Pearson limits;
  fvalue1 = finv(alpha/2, 2*n1, 2*(n-n1+1));
  fvalue2 = finv(1-alpha/2, 2*(n1+1), 2*(n-n1));
  pL = (1 + ((n-n1+1)/(n1*fvalue1)))**(-1);
  pU = (1 + ((n-n1)/((n1+1)*fvalue2)))**(-1);
datalines;
107 85 0.05
;

proc print;
run;

Using R: 
# Clopper-Pearson exact limits via quantiles of the F distribution
n <- 107      # number of subjects
n1 <- 85      # number of responders
alpha <- 0.05
f1 <- qf(1 - alpha/2, 2*n1, 2*(n - n1 + 1), lower.tail = FALSE)
f2 <- qf(alpha/2, 2*(n1 + 1), 2*(n - n1), lower.tail = FALSE)
pl <- (1 + (n - n1 + 1)/(n1*f1))^(-1)    # lower limit
pu <- (1 + (n - n1)/((n1 + 1)*f2))^(-1)  # upper limit
pl
pu
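For readers with neither SAS nor R, the same limits can be checked in plain Python. The sketch below does not use the F-quantile formula; instead it finds the exact limits by bisection on the binomial tail probabilities, which is an equivalent definition of the Clopper-Pearson interval. The function names are mine, for illustration only.

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(x, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for x successes out of n trials."""
    def solve(f):
        # bisection for the root of a decreasing function f on [0, 1]
        lo, hi = 0.0, 1.0
        for _ in range(200):
            mid = (lo + hi) / 2
            if f(mid) > 0:
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    # lower limit: p where P(X >= x | p) = alpha/2
    lower = 0.0 if x == 0 else solve(lambda p: alpha / 2 - (1 - binom_cdf(x - 1, n, p)))
    # upper limit: p where P(X <= x | p) = alpha/2
    upper = 1.0 if x == n else solve(lambda p: binom_cdf(x, n, p) - alpha / 2)
    return lower, upper

pl, pu = clopper_pearson(85, 107)
print(round(pl, 3), round(pu, 3))  # roughly 0.705 and 0.866 for the venetoclax example
```

The edge cases (x = 0 or x = n) are set to 0 and 1 by convention, matching the one-sided nature of the exact limits there.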

Saturday, May 27, 2017

Clinical Trial with Insufficient Sample Size: underpowered or detecting a trend?

When planning a clinical trial, an important step is to estimate the sample size (the number of patients needed to detect the treatment difference) for the study. In calculating the sample size, it is conventional to set the significance level at 0.05 and the statistical power at 80% or above. Sometimes, however, we need to design a clinical trial with an insufficient sample size. This occurs pretty often in early phase clinical trials, in investigator-initiated trials (IITs), and in rare disease drug development, due to constraints in resources, budget, and the number of patients available to participate in the study. We could design a study without a formal sample size calculation and simply state that the sample size of xxx is based on clinical considerations, even though we don’t know exactly what ‘clinical considerations’ means.

If there are biomarkers or surrogate endpoints whose treatment effects are easier to detect than those of the clinical endpoints, we could design a proof-of-concept study or early phase study using the biomarkers or surrogate endpoints. The sample size can then be formally calculated based on the treatment effect on the biomarkers or surrogate endpoints. For example, in solid tumor clinical trials, we could design a study with a smaller sample size based on the effect in shrinking the tumor size. In studies of inhaled antibiotics in non-CF bronchiectasis, the early phase study could use the sputum density of the bacterial count as the endpoint, so that a smaller sample size is required to demonstrate the effect, before the late stage study where a clinically meaningful endpoint such as exacerbations should be used.

We can also run into the situation where there are no good or reliable biomarkers or surrogate endpoints and the clinical endpoint is the only one available, so the endpoint for the early phase study and the late phase study is the same. In order to design an early phase study with a smaller sample size, we will need to do one of the following:
  • Increase the significance level (alpha level) to allow a greater type I error. Instead of testing the hypothesis at the conventional alpha = 0.05, we can test the hypothesis at alpha = 0.10 or 0.20 – we would say that we are trying to detect a trend.
  • Lower the statistical power to allow a greater type II error – design an underpowered study.

While both approaches have been used in the literature, I prefer increasing the significance level to detect a trend. Intentionally designing an underpowered study raises ethical concerns.
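The trade-off between the two options can be seen from the standard normal-approximation sample-size formula for a two-sample comparison. The sketch below is an illustration only; the standardized effect size of 0.5 is an arbitrary assumption, not from any actual trial.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-sample z-test, standardized effect size d."""
    z = NormalDist().inv_cdf
    return ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 / d ** 2)

print(n_per_group(0.5))              # conventional design (alpha=0.05, 80% power): 63
print(n_per_group(0.5, alpha=0.20))  # 'detect a trend' (alpha=0.20, 80% power):    37
print(n_per_group(0.5, power=0.50))  # underpowered design (alpha=0.05, 50% power): 31
```

Relaxing either alpha or power cuts the required sample size roughly in half in this illustration, which is exactly why both shortcuts appear in small early phase studies.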

Here are some examples in which the clinical trial was designed to detect a trend using alpha = 0.20 (or one-sided alpha = 0.10):

“…It was estimated that for the study to have 90% power to test the hypothesis at a one-sided 0.10 significance level, the per-protocol population would need to include 153 participants in each group. The failure rate was estimated with binomial proportion and 95% confidence intervals. One-sided 90% exact confidence intervals were used to estimate the difference in the failure rates between the two treatments, which is appropriate for a noninferiority study and which is consistent with the one-sided significance level of 0.10 that was used for the determination of the sample size. “

 “A three-outcome (promising, inconclusive, not promising), one-stage modified Simon optimal phase II clinical trial study design with an interim analysis was chosen so that there would be a 90% chance of detecting a tumor response rate of at least 20% when the true tumor response rate was at least 5% at a 0.10 significance level, deeming that a RECIST response rate of less than 20% would be of little clinical importance in ATC.”

 “Assuming a mPFS of 3.5 months for GP and 5.7 months for GV (HR=0.61), a sample size of 106 subjects (53 per group) provided 85% power to detect this difference, using a one-sided test at the 0.10 significance level.”

 “The primary null hypothesis was that CoQ10 reduces the mean ALSFRSr decline over 9 months by at least 20% compared to placebo—in short, that CoQ10 is “promising.” It was tested against the alternative that CoQ10 reduces the mean ALSFRSr decline by less than 20% over 9 months compared to placebo, at one-sided alpha = 0.10.”


Here are some studies with insufficient power (less than 80% power). Notice that these studies still have 70% power; I can't imagine people's reaction if we designed a study with 50% power.

"A detailed calculation of sample size was difficult, since few studies have evaluated medications intended to augment local osseous repair in periodontal therapy. However, in one study of a selective cyclooxygenase-2 inhibitor in periodontal therapy, a sample of 22 patients per group was sufficient for the study to have 70% power to detect a 1-mm difference between the groups in the gain in clinical attachment level and reduction in probing depth, with a type I error rate of 5%."
“We estimated that a sample size of 600 would provide at least 70% power to detect a 33% reduction in the rate of the composite of the following serious adverse fetal or neonatal outcomes”


“With the sample of 99 patients, the study would have 70% power at a two-sided significance level of 0.05” 

Thursday, May 04, 2017

Final Version of Protocol Template by FDA/NIH and TransCelerate

Previously, I discussed the protocol template for clinical trials. This week, FDA/NIH and TransCelerate simultaneously released the final version of the protocol template.

FDA/NIH's protocol template is intended for clinical investigators who are writing protocols for phase 2 and phase 3 NIH-funded studies requiring investigational new drug (IND) or investigational device exemption (IDE) applications, but it could also be helpful to other investigators conducting studies of medical products that are not regulated by FDA.

The final protocol template by TransCelerate is for industry-sponsored clinical trials intended for licensure.


                            Word Version of Final Template

                            Common Protocol Template Core Template – Basic Word Edition
                            Common Protocol Template – Technology-Enabled Edition


REFERENCE: FDA, NIH & Industry Advance Templates for Clinical Trial Protocols | RAPS

Monday, May 01, 2017

Betting on Death: Moral Dilemma

I recently re-read the book “What Money Can't Buy: The Moral Limits of Markets” by Michael J. Sandel. The example used in the book about the viatical industry and the moral dilemma associated with it makes me think about the similar dilemma we face in event-driven clinical trials where the event is an unfortunate outcome (for example, morbidity and mortality).
A viatical settlement (from the Latin "viaticum") is the sale of a policy owner's existing life insurance policy to a third party for more than its cash surrender value, but less than its net death benefit. Such a sale provides the policy owner with a lump sum. The third party becomes the new owner of the policy, pays the monthly premiums, and receives the full benefit of the policy when the insured dies.
"Viatical settlement" typically is the term used for a settlement involving an insured who is terminally or chronically ill.
The viatical industry started in the 1980s and 1990s, prompted by the AIDS epidemic. It consisted of a market in the life insurance policies of people with AIDS and others who had been diagnosed with a terminal illness. Here is how it worked: Suppose someone with a $100,000 life insurance policy is told by his doctor that he has only a year to live. And suppose he needs money now for medical care, or perhaps simply to live well in the short time he has remaining. An investor offers to buy the policy from the ailing person at a discount, say, $50,000, and takes over payment of the annual premiums. When the original policyholder dies, the investor collects the $100,000.
It seems like a good deal all around. The dying policyholder gains access to the cash he needs, and the investor turns a handsome profit – provided the person dies on schedule.
With viaticals, the financial risk creates a moral complication not present in most other investments: the investor must hope that the person whose life insurance he buys dies sooner rather than later. The longer the person hangs on, the lower the rate of return.
The anti-HIV drugs that extended the lives of tens of thousands of people with AIDS scrambled the calculations of the viatical industry.
The viatical industry can extend to people with other terminal diseases such as cancer. The concept is the same, however, and the moral issue is the same: betting on people dying sooner rather than later.

In clinical trials with an event-driven design where the event is a bad outcome (such as death, cancer recurrence, pulmonary exacerbation, or transplant rejection), we may face the same dilemma. While the intention of the new treatment is to prevent the bad event from happening, as the trial sponsor, we also hope that these bad events occur more often so that we can finish the study early and have the study results available sooner.

Suppose there is a cancer clinical trial where the primary efficacy endpoint is time to death, and suppose we design a randomized, double-blind study to compare two treatment groups: an experimental treatment group and a control group. We will calculate how many death events are needed to have at least 80% statistical power to show the treatment difference. Then, based on the accrual rate and dropout rate, we can further calculate the number of subjects needed to accrue the desired number of death events. During the study, we can check the aggregate death rate to see if the actual results are in line with the assumptions. If the death rate is below our assumptions, we should be happy, since the lower death rate could indicate that the experimental treatment works; however, as the trial sponsor, we would not be happy, since the lower death rate means a longer trial to accrue the required number of death events.
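The event-driven logic described above can be sketched with Schoenfeld's approximation for the number of events a logrank test requires (assuming 1:1 allocation and a two-sided test); the hazard ratios below are arbitrary illustrations, not assumptions from any actual trial.

```python
from math import ceil, log
from statistics import NormalDist

def required_events(hr, alpha=0.05, power=0.80):
    """Schoenfeld approximation: events for a two-sided logrank test, 1:1 allocation."""
    z = NormalDist().inv_cdf
    return ceil(4 * (z(1 - alpha / 2) + z(power)) ** 2 / log(hr) ** 2)

print(required_events(0.7))  # 247 events for a hazard ratio of 0.7
print(required_events(0.8))  # 631 events for a hazard ratio of 0.8
```

Whatever the accrual and death rates turn out to be, the final analysis waits for this fixed number of events, which is why a lower-than-assumed death rate lengthens the trial.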

As the trial sponsor, we may want to employ enrichment strategies to select a population that is more likely to die, so that the death events can be accrued quickly. As mentioned in FDA’s guidance “Enrichment Strategies for Clinical Trials to Support Approval of Human Drugs and Biological Products”, this type of enrichment strategy is called prognostic enrichment. Here is what is said in FDA’s guidance:
IV. PROGNOSTIC ENRICHMENT STRATEGIES—IDENTIFYING HIGH-RISK PATIENTS
A wide variety of prognostic indicators have been used to identify patients with a greater likelihood of having the event (or a large change in a continuous measure) of interest in a trial. These indications include clinical and laboratory measures, medical history, and genomic or proteomic measures. Selecting such patients allows a treatment effect to be more readily discerned. For example, trials of prevention strategies (reducing the rate of death or other serious event) in cardiovascular (CV) disease are generally more successful if the patients enrolled have a high event rate, which will increase the power of a study to detect any given level of risk reduction. Similarly, identification of patients at high risk of a particular tumor, or at high risk of recurrence or metastatic disease can increase the power of a study to detect an effect of a cancer treatment. Prognostic enrichment strategies are also applicable, or potentially applicable, to the study of drugs intended to delay progression of a variety of diseases, such as Alzheimer’s disease, Parkinson’s disease, rheumatoid arthritis, multiple sclerosis, and other conditions, where patients with more rapid progression could be selected; it is possible, of course, that such patients might be less responsive to treatment (i.e., that rapid progression would be a negative predictor of response), and that would have to be considered.  
For any given desired power in an event-based study, the appropriate sample size will depend on effect size and the event rate in the placebo group. Prognostic enrichment does not increase the relative risk reduction (e.g., percent of responders or percent improvement in a symptom), but will increase the absolute effect size, generally allowing for a smaller sample size. For example, reduction of mortality from 10% to 5% in a high-risk population is the same relative effect as a reduction from 1% to 0.5% in a lower risk population, but a smaller sample size would be needed to show a 5% vs. 0.5% change in absolute risk. It is common to choose patients at high risk for events for the initial outcome study of a drug and, if successful, move on to larger studies in lower risk patients.
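The guidance's 10%-vs-5% example can be checked with the usual normal-approximation two-proportion sample-size formula. This is a back-of-the-envelope sketch; the two-sided alpha = 0.05 and 80% power are my assumptions, not the guidance's.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group n for a two-sided two-proportion z-test (unpooled variance)."""
    z = NormalDist().inv_cdf
    num = (z(1 - alpha / 2) + z(power)) ** 2 * (p1 * (1 - p1) + p2 * (1 - p2))
    return ceil(num / (p1 - p2) ** 2)

# same 50% relative risk reduction, very different absolute effects
print(n_per_group(0.10, 0.050))  # high-risk population (10% -> 5%): 432 per group
print(n_per_group(0.01, 0.005))  # low-risk population (1% -> 0.5%): 4671 per group
```

The relative effect is identical in both rows; only the control-group event rate differs, and the required sample size changes by roughly a factor of ten, which is the guidance's point.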

While this enrichment strategy is good for the trial sponsor and makes the clinical trial smaller, it also leaves a bad taste, because we are betting that the selected study population will have a high death rate and that the patients will die sooner.

Friday, April 28, 2017

CURRENT ISSUES REGARDING DATA AND SAFETY MONITORING COMMITTEES IN CLINICAL TRIALS

Last week, I attended the 10th Annual Conference on Statistical Issues in Clinical Trials held on the campus of the University of Pennsylvania. This year, the topic was "Current Issues Regarding Data and Safety Monitoring Committees in Clinical Trials".

The conference was supposed to discuss the current issues and emerging challenges in the practice of data monitoring committees; however, the issues and challenges discussed are not new. They have been known for a long time and remain issues.

One thing I like about this one-day annual conference is that all presentation slides, including the panel discussions, are posted online. For this year's DMC discussions, the presentation slides are available at: www.cceb.med.upenn.edu/events/10th-annual-conference-statistical-issues-clinical-trials

In the early days, the term DSMB (data and safety monitoring board) was used. After FDA issued its guidance "The Establishment and Operation of Clinical Trial Data Monitoring Committees for Clinical Trial Sponsors", the term DMC (data monitoring committee) became popular. At the conference this year, a new term, DSMC (data and safety monitoring committee), was also used.

There were several talks about the training for DMC members and the need to train more people who can serve on data monitoring committees. I felt that the discussion about training for DMC members targeted the wrong audience. Instead of targeting statisticians, the focus should be on MDs in the medical field. Very often, we have difficulty finding MDs who can serve on a data monitoring committee, let alone MDs who have prior experience serving on one. For a large-scale clinical trial, there may be several committees: a steering committee, an event or endpoint adjudication committee, and a data monitoring committee. There is often a shortage of members for these committees.

Nowadays, study protocols are getting more and more complicated, and DMC members may not understand the complexity of the trial design. This is especially true for clinical trials using adaptive designs and Bayesian designs, which pose challenges to the DMC members.

We used to debate whether the data monitoring committee should be given semi-unblinded materials for review, where the treatment group is designated as "X" or "Y" instead of the true treatment assignment. The conference presentations stated loudly that the DMC should have sole access to interim results, with complete unblinding (not semi-unblinding) on the relative efficacy and relative safety of the interventions.

Here are some points from Dr Fleming’s presentation “DMCs: Promoting Best Practices to Address Emerging Challenges”:
Mission of the DMC
  • To Safeguard the Interests of the Study Participants
  • To Preserve Trial Integrity and Credibility to enable the clinical trial to provide timely and reliable insights to the broader clinical community
 Proposed Best Practices and Operating Principles
  • Achieving adequate training/experience in DMC process
  • Indemnification
  • Addressing confidentiality issues
  • Implementing procedures to enhance DMC independence
                DMC meeting format
                Creating an effective DMC Charter
                DMC recommendations through consensus, not by voting
                DMC contracting process
  • Defining the role of the Statistical Data Analysis Center
  • Better integration of regulatory authorities in DMC process

REFERENCES: