It has been well reported that the Covid-19 pandemic had a direct impact on the execution of clinical trials, with patients unable to visit clinical sites and procedures affected by staff shortages. Questions have now arisen about how best to analyse data left missing by disrupted assessments while retaining statistical power and avoiding bias.
Clinical trials using time-to-event endpoints are particularly prone to misalignment between the scientific question a trial intends to answer and its statistical analysis. Progression-free survival (PFS), one of the most frequently used endpoints in oncology trials, is a particular concern, state the authors of a recently published research paper.
Although PFS can be measured using clinical signs, symptoms, or tumour marker levels, radiographic assessments are the preferred method. In a global shutdown caused by a pandemic, with prolonged gaps between assessments, PFS is “vulnerable to data missingness,” the authors state.
Approaches to estimate PFS
To address the inevitable loss of data and the resulting bias in PFS estimates in the context of a pandemic, the authors of the paper explore a framework based on estimands, a concept recently introduced in the addendum to the International Council for Harmonisation (ICH) E9(R1) guideline. By accounting for intercurrent events that preclude outcome measurement, this framework allows a more precise definition of the target of estimation and ensures that the statistical analysis is better aligned with the scientific question. Using a simulated six-month shutdown as an intercurrent event, the authors investigate two approaches to handling the missing data: a treatment policy strategy and a hypothetical strategy.
The treatment policy strategy disregards the intercurrent event and uses disease progression as actually observed, delayed assessments and other pandemic-related disruptions included. The hypothetical strategy instead censors patients whose progression or death occurred during the shutdown at their last assessment before it.
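The contrast between the two strategies can be sketched on simulated data. The snippet below is a minimal illustration, not the paper's actual simulation: it assumes exponentially distributed progression times, nine-weekly tumour assessments, and a 26-week shutdown during which visits are cancelled, with all parameter values chosen for illustration only.

```python
import random
from collections import Counter

def km_median(times, events):
    """Kaplan-Meier median: first time the survival curve falls to 0.5 or below."""
    d = Counter(t for t, e in zip(times, events) if e)      # events per time point
    c = Counter(t for t, e in zip(times, events) if not e)  # censorings per time point
    n, s = len(times), 1.0
    for t in sorted(set(times)):
        if d[t]:
            s *= (n - d[t]) / n
            if s <= 0.5:
                return t
        n -= d[t] + c[t]
    return float("inf")  # median never reached

def simulate(n=2000, mean_pfs=40.0, visit_gap=9, shut_start=30, shut_len=26,
             horizon=153, seed=7):
    rng = random.Random(seed)
    shut_end = shut_start + shut_len
    # scheduled assessment times (weeks), with visits inside the shutdown cancelled
    visits = [v for v in range(visit_gap, horizon + 1, visit_gap)
              if not (shut_start <= v < shut_end)]
    tp, hy = [], []  # (time, event-indicator) pairs under each strategy
    for _ in range(n):
        true_t = rng.expovariate(1.0 / mean_pfs)               # true progression time
        seen = next((v for v in visits if v >= true_t), None)  # first visit detecting it
        if seen is None:
            # still progression-free at the last visit: censored under both strategies
            tp.append((visits[-1], 0)); hy.append((visits[-1], 0))
        elif shut_start <= true_t < shut_end:
            # treatment policy: count the delayed, post-shutdown detection as an event
            tp.append((seen, 1))
            # hypothetical: censor at the last assessment before the shutdown
            last = max(v for v in visits if v < shut_start)
            hy.append((last, 0))
        else:
            tp.append((seen, 1)); hy.append((seen, 1))
    return tp, hy

tp, hy = simulate()
for name, data in (("treatment policy", tp), ("hypothetical", hy)):
    t, e = zip(*data)
    print(f"{name}: median PFS {km_median(t, e)} weeks, {sum(e)} events")
```

Because every progression censored under the hypothetical strategy counts as a (delayed) event under the treatment policy, the hypothetical analysis has fewer events and a survival curve that sits at or above the treatment policy curve, mirroring the overestimation and power loss described below.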
Censoring means that patients are not entered into the final analysis because of factors such as loss of follow-up or toxicity, explains Rachel Woodford, immuno-oncology fellow at the Melanoma Institute Australia.
While censoring patients allows researchers to complete an analysis in the presence of missing data, it also creates bias, Woodford adds. Disregarding patients who come off the study drug because of toxicity could lead to the approval, or further development, of a drug that may be unduly toxic to other patients, she says.
The authors found that the treatment policy approach slightly overestimated the median PFS, as expected given that assessments resumed only after the shutdown, but statistical power was unaffected. In contrast, the hypothetical strategy overestimated median PFS to a greater extent, and its smaller number of events reduced statistical power. Arguing against censoring, the authors conclude that the treatment policy strategy should remain the primary method of analysis, a conclusion Woodford calls logical.
Endpoints affected by the pandemic
The sensitivity of time-to-event endpoints to disruption means that clinical trials conducted during the pandemic may have produced misleading data. According to GlobalData’s Clinical Trials Database, 863 industry-sponsored clinical trials with PFS as a primary endpoint were initiated between 1 June 2019 and 31 December 2021. AstraZeneca leads the list of the top ten sponsors measuring PFS through the pandemic, with 47 clinical trials. GlobalData is the parent company of Clinical Trials Arena.
PFS is not the only endpoint affected by the Covid-19 pandemic. Woodford explains that any time-to-event endpoint relying on assessments that patients could not attend is prone to error, such as time to second progression (PFS-2) or time to second subsequent therapy (TSST).
Overall survival (OS) is easier to verify than PFS. However, in the context of the pandemic, patients could have died of SARS-CoV-2 infection rather than their disease, potentially affecting data reliability as well, she says.
“One would hope that the presentations and publications of the data that follow would mention if there were significant effects caused by the pandemic and missed visits,” Woodford notes.