The promise of precision medicine dictates that we approach disease with a focus on the patient and the patient’s individual characteristics. This patient-centric perspective is increasingly possible due to advances in genomic research and the development of highly specific pharmacologic and biologic interventions. If this promise is to be realized, we must provide treatments whose benefit/risk profile is optimized and whose primary outcomes are meaningful to the patient. In this environment, regulators, physicians, and patients must receive data robust enough to support therapeutic decisions.
Impeding this promise is the difficulty of conducting clinical development programs that achieve the goals of precision medicine. Customized treatments are segmenting patient populations into smaller and smaller subsets, making patient identification and recruitment more difficult. Rare diseases, which occur in relatively small numbers of patients but carry high unmet medical need, likewise require finding and treating limited numbers of patients.
Data interpretation is further complicated when it is not ethical to include placebo treatment arms for comparison. New biomarkers and innovative clinical endpoints are often not fully validated before being utilized in trials exploring novel treatments. In particular, the evaluation of safety and therapeutic index is increasingly complicated by these factors.
Accelerated Approvals Key to Getting Drugs to Market Faster
To facilitate faster drug development, the FDA and other regulatory bodies have designed processes such as priority review, fast track or breakthrough therapy designations, and accelerated approvals. These are seen as crucial to allowing highly innovative therapies to reach patients in shorter time intervals. Accelerated approvals are often granted with the expectation that confirmatory trials will be conducted post-approval.
Rapid development plans, adaptive trial designs, and reduced patient treatment years have blurred the traditional separation of phases I, II, and III, even in therapeutic areas outside of oncology. In many cases, patients rather than healthy volunteers are treated in phase I, and phase II/III trials are combined and/or reduced in scope, with limited or no placebo data available for comparison.
During a streamlined development process, early exploratory studies are often designed to be adaptive in nature. This may involve treating small numbers of patients until a positive response is observed and then expanding only that cohort. In these trials, the adaptive nature of the protocol allows exploration of different patient inclusion/exclusion criteria, treatment doses, or even diseases, all within one study. There is tremendous benefit to this type of study design because, in a relatively limited number of patients and at reduced cost and time, many boundaries can be placed on expectations for the treatment.
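The expansion logic described above can be sketched in code. This is a minimal illustration only, not any specific regulatory or protocol-defined design; the cohort names, counts, and the two-responder threshold below are all hypothetical assumptions.

```python
def cohorts_to_expand(stage_results, min_responders=2):
    """Return the cohorts whose early signal justifies expansion.

    stage_results maps cohort name -> (responders, treated).
    A cohort is expanded only if its initial small group shows at
    least `min_responders` responders; all other cohorts are closed.
    The threshold is an illustrative assumption, not a standard.
    """
    return [name for name, (responders, _treated) in stage_results.items()
            if responders >= min_responders]

# Hypothetical first-stage data: six patients per exploratory cohort.
stage1 = {
    "dose_10mg":  (0, 6),
    "dose_30mg":  (3, 6),   # positive signal -> only this cohort expands
    "marker_pos": (1, 6),
}

expanded = cohorts_to_expand(stage1)
print(expanded)  # -> ['dose_30mg']
```

Real adaptive protocols pre-specify such rules formally (often with Bayesian decision criteria), but the core mechanic is the same: a simple, pre-declared trigger decides which cohorts continue.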
However, there are problems with this type of design. Study hypotheses in these protocols become more complicated to test adequately, there may be a large number of pre-specified analyses, and patient experience in any single group is limited. These potential issues are magnified when new genomic characteristics are used for study entry, when the primary endpoint is an exploratory biomarker, or when there are no prior studies for comparison.
In response to reduced clinical programs prior to approval, sponsors and regulators rely on historical controls when placebo data are not available, and on real-world safety and efficacy data generated in patients outside of standard clinical trials. These solutions, however, fall short of the gold standard of clinical evaluation: a program with at least two relatively large placebo-controlled, double-blind pivotal trials. Complicating this issue is the potential for delay in conducting confirmatory trials for treatments that have received accelerated approval. Additionally, trials that fail to meet their primary endpoints are not routinely published, and this gap makes complete evaluation of a new therapy difficult.
Phases I-III No Longer Operational
As drug developers, it is our responsibility to deliver robust data sets to regulators, physicians, and patients so that they can correctly utilize innovative agents. We must control the time and cost of development programs so that patient access is ensured as rapidly as possible. We must also provide confidence that our data are robust, able to be replicated, and will withstand scrutiny once the treatment is available commercially.
Given the conflicts inherent in attempting to rapidly provide new treatments to patients, and the requirement for robust data, how should we design clinical programs? What strategies can be considered in early drug development that will provide more robust data sets that can be considered for an accelerated approval strategy? No single approach will ensure that every development program achieves its objectives.
However, we need to acknowledge that the previous paradigm of phases I, II, and III is no longer operational in many development programs. Under these circumstances, the traditional sequence, in which a program explores dose range and patient inclusion/exclusion criteria in a smaller phase II study or studies and then replicates that finding in two large phase III trials, no longer applies.
One solution is to approach the development program from a perspective of “learning and confirming” rather than traditional phases I, II, and III. How would this change the initial design of clinical programs? First of all, this perspective would force consideration of how data are replicated within the clinical trial sequence, even in programs where the traditional conduct of two parallel pivotal studies is not envisioned.
Thus, in early studies where adaptive designs are utilized, and/or where very small numbers of patients are exposed to a treatment, subsequent protocols should be reviewed with an eye towards maintaining the conditions that led to the successful benefit/risk profile in the prior study. Too often we learn about a treatment in an early-stage study but then seek to continue learning by loosening patient entry criteria, adding different endpoints, or otherwise altering key components of the earlier trial. While this continued learning is necessary, the simpler goal of ensuring that the earlier result can be replicated or confirmed is lost.
Replication Vital to Generating Robust and Trustworthy Data
What can be done differently? First, at some point in the development program, a direct and simple protocol designed to replicate an earlier finding should be considered. Alternatively, in a larger subsequent adaptive study, an evaluation should be made of whether any arm in that study replicates a previous result. If the answer is no, such an arm should be added.
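The "add a replication arm if none exists" check described above can be expressed as a simple planning rule. The sketch below is illustrative only; the arm attributes (dose, entry criteria) and labels are hypothetical assumptions, not a standard protocol schema.

```python
def ensure_replication_arm(planned_arms, prior_positive_arm):
    """Return the planned arm list, appending a replication arm if no
    planned arm reproduces the dose and entry criteria of the earlier
    positive result. Arm dictionaries here use hypothetical keys."""
    for arm in planned_arms:
        if (arm["dose"] == prior_positive_arm["dose"]
                and arm["criteria"] == prior_positive_arm["criteria"]):
            return planned_arms  # conditions already replicated
    replication = dict(prior_positive_arm, label="replication")
    return planned_arms + [replication]

# Hypothetical earlier positive result and follow-on study plan.
prior = {"dose": "30mg", "criteria": "marker-positive",
         "label": "stage1_expansion"}
planned = [
    {"dose": "60mg", "criteria": "marker-positive", "label": "high_dose"},
    {"dose": "30mg", "criteria": "all-comers", "label": "broadened"},
]

arms = ensure_replication_arm(planned, prior)
# Neither planned arm preserves both the earlier dose and entry
# criteria, so a third arm matching those conditions is appended.
```

The point of the rule is exactly the one in the text: a follow-on study may loosen criteria or change doses to keep learning, but at least one arm should hold the earlier winning conditions fixed so the result can be confirmed.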
Also inherent in this strategy is the commitment to conduct large confirmatory trials at some point, and to publish all data on any new treatment, so that evaluation of the treatment is made with all available information. Even in programs where replication has been considered and achieved, the acquisition of real-world data is critical to ensure the results are robust and able to be replicated.
It is an especially rewarding experience to develop new treatments for major unmet medical needs. Everyone involved in the process strives to generate new and exciting data. As we engage in this process however it is critical to ensure that the data we are generating are robust and trustworthy. One of the essential ways of ensuring this is to replicate them.