Thursday, February 22, 2018

New FDA Guidance Documents for Drug Development in Neurological Conditions Aim to Ease the Drug Approval Pathways


This month, FDA issued five guidance documents for drug development in five different neurological conditions/diseases (Alzheimer’s disease, DMD, ALS, migraine, and pediatric epilepsy). These newly issued guidance documents are intended to ease the drug approval requirements and offer clarity on the drug development pathway.

We think this is a general trend at FDA, and we expect that similar guidance documents will be issued for other conditions/diseases with the aim of easing the requirements for drug development – eventually speeding up the drug development process and making innovative drugs available to patients sooner. As stated in the FDA Commissioner's announcement of these guidance documents:
“Today I’m pleased to issue five guidance documents that benefited from the streamlined approach of this pilot as part of a broader, programmatic focus on advancing treatments for neurological disorders that aren’t adequately addressed by available therapies. These guidance documents provide details on how researchers can best approach drug development for certain neurological conditions – Duchenne muscular dystrophy (DMD) and closely related conditions, migraine, epilepsy, AD and ALS. These guidance documents provide our current thinking and sound regulatory and scientific advice for product developers so that safe and effective treatments can ultimately be made available to patients. These documents are each a culmination of thoughtful scientific collaboration within the agency and incorporate important input from patients, researchers and advocates. We hope that providing up-to-date, clear information about our scientific expectations, such as clinical trial design and ways to measure effectiveness, will save companies time and resources and ultimately, bring effective new medicines to patients more efficiently.”
Below is a summary of the key points from these five guidance documents, organized by indication:
Alzheimer's disease
  • No longer requiring co-primary efficacy endpoints to show benefit on both cognitive and functional (or global) measures
  • Staging AD into four different stages and accepting different endpoints for different stages
  • Allowing biomarker effects to be the primary endpoint in patients with Alzheimer pathology but no current symptoms

Duchenne muscular dystrophy (DMD) and related conditions
  • Emphasizing the difficulties in designing trials of drugs for these conditions
  • Leaving the choice of efficacy endpoints largely up to individual study sponsors, to be discussed with FDA staff on a case-by-case basis
  • What the DMD guidance did not do is open a path for approval based solely on biomarker effects such as dystrophin levels in muscle, although effects on objective measures such as respiratory and cardiac muscle function can be used to support approval

Amyotrophic lateral sclerosis (ALS)
  • Offering more clarity on the drug development pathway for ALS
  • Efficacy must be demonstrated at "clinically meaningful" levels for symptoms, function, or survival
Migraine
  • Sponsors would no longer be required to conduct trials addressing four different classes of symptoms: pain, nausea, photophobia, and phonophobia.
  • Trials will only need two primary endpoints: pain reduction and effects on individual patients' "most bothersome symptom."

Pediatric epilepsy

  • For drugs intended for children age 4 and older with partial onset seizures, the FDA will no longer require that efficacy trials be conducted in children. The agency will now consider efficacy data from adult patients to be sufficient for pediatric approval.



Monday, February 12, 2018

Weighted Bonferroni Method (or partition of alpha) in Clinical Trials with Multiple Endpoints


In a previous post, the terms ‘multiple endpoints’ and ‘co-primary endpoints’ were discussed. If a study contains two co-primary efficacy endpoints, the study is claimed to be successful only if both endpoints reach statistical significance at alpha=0.05 (no adjustment for multiplicity is necessary). If a study contains multiple (for example, two) primary efficacy endpoints, the study is claimed to be successful if either endpoint is statistically significant. In the latter situation, however, an adjustment for multiplicity is necessary to maintain the overall alpha at 0.05. In other words, the hypothesis test for each individual endpoint is performed at a significance level of less than 0.05.
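As a rough illustration of the two decision rules, here is a minimal Python sketch with hypothetical p-values: co-primary endpoints require every endpoint to be significant at the full alpha, whereas multiple primary endpoints require only one endpoint to be significant at a reduced, multiplicity-adjusted level (0.025 is used here for two endpoints; the adjustment methods are discussed below).

```python
# Hypothetical p-values for two endpoints (illustration only)
alpha = 0.05
p_values = [0.030, 0.040]

# Co-primary endpoints: the study succeeds only if ALL endpoints are significant
# at the full alpha level (no multiplicity adjustment needed).
co_primary_success = all(p < alpha for p in p_values)

# Multiple primary endpoints: the study succeeds if ANY endpoint is significant,
# but each endpoint is tested at a reduced, multiplicity-adjusted level.
adjusted_alpha = 0.025  # e.g., an equal split of 0.05 across two endpoints
multiple_primary_success = any(p < adjusted_alpha for p in p_values)

print(co_primary_success)        # True: both 0.030 and 0.040 are below 0.05
print(multiple_primary_success)  # False: neither is below 0.025
```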

The simplest and most straightforward approach is to apply the Bonferroni correction. The Bonferroni correction compensates for the increased number of hypothesis tests: each individual hypothesis is tested at a significance level of alpha/m, where alpha is the desired overall alpha level (usually 0.05) and m is the number of hypotheses. If there are two hypothesis tests (m=2), each individual hypothesis will be tested at alpha=0.025.

In FDA guidance 'Multiple Endpoints in Clinical Trials', the Bonferroni Method was described as the following:
The Bonferroni method is a single-step procedure that is commonly used, perhaps because of its simplicity and broad applicability. It is a conservative test and a finding that survives a Bonferroni adjustment is a credible trial outcome. The drug is considered to have shown effects for each endpoint that succeeds on this test. The Holm and Hochberg methods are more powerful than the Bonferroni method for primary endpoints and are therefore preferable in many cases. However, for reasons detailed in sections IV.C.2-3, sponsors may still wish to use the Bonferroni method for primary endpoints in order to maximize power for secondary endpoints or because the assumptions of the Hochberg method are not justified.

The most common form of the Bonferroni method divides the available total alpha (typically 0.05) equally among the chosen endpoints. The method then concludes that a treatment effect is significant at the alpha level for each one of the m endpoints for which the endpoint’s p-value is less than α/m. Thus, with two endpoints, the critical alpha for each endpoint is 0.025, with four endpoints it is 0.0125, and so on. Therefore, if a trial with four endpoints produces two-sided p values of 0.012, 0.026, 0.016, and 0.055 for its four primary endpoints, the Bonferroni method would compare each of these p-values to the divided alpha of 0.0125. The method would conclude that there was a significant treatment effect at level 0.05 for only the first endpoint, because only the first endpoint has a p-value of less than 0.0125 (0.012). If two of the p-values were below 0.0125, then the drug would be considered to have demonstrated effectiveness on both of the specific health effects evaluated by the two endpoints.

The Bonferroni method tends to be conservative for the study overall Type I error rate if the endpoints are positively correlated, especially when there are a large number of positively correlated endpoints. Consider a case in which all of three endpoints give nominal p-values between 0.025 and 0.05, i.e., all ‘significant’ at the 0.05 level but none significant under the Bonferroni method. Such an outcome seems intuitively to show effectiveness on all three endpoints, but each would fail the Bonferroni test. When there are more than two endpoints with, for example, correlation of 0.6 to 0.8 between them, the true family-wise Type I error rate may decrease from 0.05 to approximately 0.04 to 0.03, respectively, with negative impact on the Type II error rate. Because it is difficult to know the true correlation structure among different endpoints (not simply the observed correlations within the dataset of the particular study), it is generally not possible to statistically adjust (relax) the Type I error rate for such correlations. When a multiple-arm study design is used (e.g., with several dose-level groups), there are methods that take into account the correlation arising from comparing each treatment group to a common control group.
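The worked example in the guidance can be reproduced in a few lines; the sketch below (Python) applies the equal-split Bonferroni rule to the four two-sided p-values quoted above and reports which endpoints succeed.

```python
# Equal-split Bonferroni applied to the worked example from the guidance
alpha = 0.05
p_values = [0.012, 0.026, 0.016, 0.055]  # two-sided p-values for four primary endpoints

critical_alpha = alpha / len(p_values)   # 0.05 / 4 = 0.0125
successes = [p < critical_alpha for p in p_values]

print(critical_alpha)  # 0.0125
print(successes)       # [True, False, False, False]: only the first endpoint succeeds
```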
The guidance also discussed the weighted Bonferroni approach:
The Bonferroni test can also be performed with different weights assigned to endpoints, with the sum of the relative weights equal to 1.0 (e.g., 0.4, 0.1, 0.3, and 0.2, for four endpoints). These weights are prespecified in the design of the trial, taking into consideration the clinical importance of the endpoints, the likelihood of success, or other factors. There are two ways to perform the weighted Bonferroni test:  
  • The unequally weighted Bonferroni method is often applied by dividing the overall alpha (e.g., 0.05) into unequal portions, prospectively assigning a specific amount of alpha to each endpoint by multiplying the overall alpha by the assigned weight factor. The sum of the endpoint-specific alphas will always be the overall alpha, and each endpoint’s calculated p-value is compared to the assigned endpoint-specific alpha.
  • An alternative approach is to adjust the raw calculated p-value for each endpoint by the fractional weight assigned to it (i.e., divide each raw p-value by the endpoint’s weight factor), and then compare the adjusted p-values to the overall alpha of 0.05.
These two approaches are equivalent.
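Below is a minimal Python sketch of the two formulations, using the example weights from the guidance (0.4, 0.1, 0.3, and 0.2) and hypothetical p-values; because p < w x alpha exactly when p/w < alpha, the two rules always reach the same conclusions.

```python
# Weighted Bonferroni: two equivalent formulations
alpha = 0.05
weights  = [0.4, 0.1, 0.3, 0.2]          # prespecified weights, summing to 1.0
p_values = [0.018, 0.004, 0.020, 0.030]  # hypothetical raw p-values (illustration only)

# Approach 1: compare each raw p-value to its endpoint-specific alpha (weight * alpha)
endpoint_alphas = [w * alpha for w in weights]           # approx. [0.02, 0.005, 0.015, 0.01]
success_1 = [p < a for p, a in zip(p_values, endpoint_alphas)]

# Approach 2: divide each raw p-value by its weight, then compare to the overall alpha
adjusted_p = [p / w for p, w in zip(p_values, weights)]  # approx. [0.045, 0.04, 0.067, 0.15]
success_2 = [p < alpha for p in adjusted_p]

print(success_1)              # [True, True, False, False]
print(success_2)              # [True, True, False, False]
print(success_1 == success_2) # True: both formulations give the same conclusions
```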

The guidance mentioned that the reasons for using the weighted Bonferroni test are:
  • Clinical importance of the endpoints
  • The likelihood of success
  • Other factors
Other factors could include:
  • With two primary efficacy endpoints, the expectation for regulatory approval is greater for one endpoint than for the other
  • The sample size calculation indicates that a sample size sufficient for primary efficacy endpoint #1 is larger than needed for primary efficacy endpoint #2
With the weighted Bonferroni correction, the weights are subjective and essentially arbitrarily selected, which results in the partition of the overall alpha into unequal significance levels for the different endpoints.

There are many applications of the Bonferroni and weighted Bonferroni methods in practice. Here are some examples:
In the publication by Antonia et al. (2017), "Durvalumab after Chemoradiotherapy in Stage III Non–Small-Cell Lung Cancer", two coprimary endpoints were used in the study:
The study was to be considered positive if either of the two coprimary end points, progression-free survival or overall survival, was significantly longer with durvalumab than with placebo. Approximately 702 patients were needed for 2:1 randomization to obtain 458 progression-free survival events for the primary analysis of progression-free survival and 491 overall survival events for the primary analysis of overall survival. It was estimated that the study would have a 95% or greater power to detect a hazard ratio for disease progression or death of 0.67 and an 85% or greater power to detect a hazard ratio for death of 0.73, on the basis of a log-rank test with a two-sided significance level of 2.5% for each coprimary end point.
However, in the original study protocol, the weighted Bonferroni method was used and unequal alpha levels were assigned to OS and PFS.  
The two co-primary endpoints of this study are OS and PFS. To control the type-I error, a significance level of 4.5% will be used for the analysis of OS and a significance level of 0.5% will be used for the analysis of PFS. The study will be considered positive (a success) if either the PFS analysis results and/or the OS analysis results are statistically significant.
Here, a weight of 0.9 (resulting in an alpha of 0.9 x 0.05 = 0.045) was given to OS and a weight of 0.1 (resulting in an alpha of 0.1 x 0.05 = 0.005) was given to PFS.

In the COMPASS-2 study (Bosentan added to sildenafil therapy in patients with pulmonary arterial hypertension), the original protocol contained two primary efficacy endpoints, and the weighted Bonferroni method (even though it was not explicitly mentioned in the publication) was used for the multiplicity adjustment. A weight of 0.8 (resulting in an alpha of 0.8 x 0.05 = 0.04) was given to the time to first mortality/morbidity event and a weight of 0.2 (resulting in an alpha of 0.2 x 0.05 = 0.01) was given to the change from baseline to Week 16 in 6MWD.
The initial assumptions for the primary end-point were an annual rate of 21% on placebo with a risk reduced by 36% (hazard ratio (HR) 0.64) with bosentan and a negligible annual attrition rate. In addition, it was planned to conduct a single final analysis at 0.04 (two-sided), taking into account the existence of a co-primary end-point (change in 6MWD at 16 weeks) planned to be tested at 0.01 (two-sided). Over the course of the study, a number of amendments were introduced based on the evolution of knowledge in the field of PAHs, as well as the rate of enrolment and blinded evaluation of the overall event rate. On implementation of an amendment in 2007, the 6MWD end-point was changed from a co-primary end-point to a secondary end-point and the Type I error associated with the single remaining primary end-point was increased to 0.05 (two-sided).
According to FDA's briefing book on "Ciprofloxacin Dry Powder for Inhalation (DPI), Meeting of the Antimicrobial Drugs Advisory Committee (AMDAC)", the sponsor (Bayer) conducted two pivotal studies: RESPIRE 1 and RESPIRE 2. Each study contained two hypotheses. Interestingly, for the multiplicity adjustment, the Bonferroni method was used for the RESPIRE 1 study and the weighted Bonferroni method for the RESPIRE 2 study. We can only guess why weights of 0.02 and 0.98 (resulting in a partition of alpha into 0.001 and 0.049) were chosen in the RESPIRE 2 study; a quick check of this arithmetic follows the hypothesis lists below.
RESPIRE 1 Study:
  • Hypothesis 1: ciprofloxacin DPI for 28 days on/off treatment regimen versus pooled placebo (alpha=0.025)
  • Hypothesis 2: ciprofloxacin DPI for 14 days on/off treatment regimen versus pooled placebo (alpha=0.025)
RESPIRE 2 Study:
  • Hypothesis 1: ciprofloxacin DPI for 28 days on/off treatment regimen versus pooled placebo (alpha=0.001)
  • Hypothesis 2: ciprofloxacin DPI for 14 days on/off treatment regimen versus pooled placebo (alpha=0.049)
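A quick check of the RESPIRE 2 partition of alpha, using the weights stated in the briefing book (the hypothesis labels are shortened here for readability):

```python
# RESPIRE 2: partition of the overall alpha using the stated weights
overall_alpha = 0.05
weights = {
    "28 days on/off vs. pooled placebo": 0.02,
    "14 days on/off vs. pooled placebo": 0.98,
}

# Endpoint-specific alphas (rounded to avoid floating-point noise)
endpoint_alphas = {name: round(w * overall_alpha, 3) for name, w in weights.items()}
print(endpoint_alphas)
# {'28 days on/off vs. pooled placebo': 0.001, '14 days on/off vs. pooled placebo': 0.049}
```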

Thursday, February 01, 2018

Handling site-level protocol deviations

In a previous post, the CDISC data structure for protocol deviations was discussed. The protocol deviation data set (DV domain) is an event data set (just like the data set in which we record adverse events). The tabulation data set should contain one record per protocol deviation per subject. In other words, each protocol deviation is always tied to an individual subject. In the DV data set, each protocol deviation record should have a unique subject identifier (USUBJID).

There are situations where protocol deviations occur at the site level rather than the subject level. For example, many study protocols have specific requirements for handling the study drug (or IP - investigational product). The study drug must be stored at the required temperatures. A temperature excursion occurs when a temperature-sensitive pharmaceutical product is exposed to temperatures outside the range prescribed for storage. A temperature excursion may result in loss of the study drug's efficacy or raise a safety concern. If multiple subjects are enrolled at the problematic site, the protocol deviation associated with the temperature excursion will have an impact on all subjects at that site - this is called a site-level protocol deviation.

There is no specific discussion of documenting and handling site-level protocol deviations in the ICH and CDISC guidelines.

According to CDISC SDTM, protocol deviations should be captured in the DV domain. Under the current SDTM standard, all tabulation data sets, including DV, are designed for subject-level data (with the only exception being the trial design data sets).

Because site-level deviations are not associated with any specific subject, they cannot be directly included in the DV data set. There may be two ways to handle site-level protocol deviations:

  • Document the site-level protocol deviations separately from the subject-level protocol deviations, and then describe them in the Clinical Study Report (CSR) and, if applicable, in the Study Data Reviewer's Guide (SDRG).
  • If a site-level deviation has an impact on all or multiple subjects enrolled at that site, the specific deviation can be repeated for each affected subject, as in the sketch below.
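A minimal sketch of the second option, assuming Python/pandas, hypothetical subject IDs, and only a few simplified DV variables (USUBJID, DVSEQ, DVTERM, DVDECOD): a single site-level temperature excursion is expanded into one DV-style record per affected subject at that site.

```python
# Sketch: expanding one site-level deviation into per-subject DV-style records.
# Subject IDs and deviation wording are hypothetical; only a handful of DV
# variables are shown for illustration.
import pandas as pd

# Subjects enrolled at the site affected by the temperature excursion (hypothetical)
affected_subjects = ["STUDY01-101-001", "STUDY01-101-002", "STUDY01-101-003"]

site_level_deviation = {
    "DVTERM": "Study drug stored outside protocol-specified temperature range",
    "DVDECOD": "TEMPERATURE EXCURSION",
}

# One record per protocol deviation per subject, as required for the DV domain
dv = pd.DataFrame(
    [{"USUBJID": subj, "DVSEQ": 1, **site_level_deviation} for subj in affected_subjects]
)
print(dv)
```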
It is advisable to pre-specify instructions for handling site-level protocol deviations so that they are recorded appropriately.


Identifying and recording protocol deviations, including site-level protocol deviations, should be an ongoing process during the conduct of a clinical trial. If we wait until the end of the study, we may have difficulty determining whether a specific site-level deviation affected all subjects at that site or only some of them.