Ensuring clinical trial data is of high quality is critical; failure to do so can undermine endpoint reliability. Bill Byrom, vice president, product positioning and intelligence at Signant Health, discusses the benefits of using electronic data capture methods to collect clinical outcome assessment (COA) data over traditional paper-based methods.

Q: What are the challenges in using traditional paper-based methods to collect clinical outcome assessments?

Bill Byrom: There are two key challenges in using paper methods: the first relates to data quality and the second to data integrity. First, in terms of data quality, when patients complete patient-reported outcome measures (PROMs) using a paper diary at home, for example to collect daily symptom ratings, this is often associated with data quality issues – such as missing data, ambiguous data, conflicting data, and the addition of superfluous data. These all create issues for the data manager, and querying these data is often impractical. We even observe similar data quality concerns when data are collected on paper in a supervised setting – such as during a site visit. For example, one study using the SF-36 quality-of-life questionnaire reported that 44% of patients either missed an item or marked one ambiguously.1

When assessing data integrity, it is crucial that the data is entered at the times required by the protocol. Data loses integrity when collected outside an appropriate recall period, as patients are unlikely to be able to accurately remember their health status too far back in time. An important study using a diary containing a light-sensitive chip was able to compare the entries in a paper diary to the actual opening and closing of the diary booklet – and showed that while apparent completion compliance of paper entries was over 90%, only 11% were recorded during the scheduled time intervals.2 This is evidence of “the parking lot effect” – where patients fill in their diary in the parking lot, just before their clinic visit.

In 2009, the Food and Drug Administration (FDA) published patient-reported outcomes guidance noting that where a sponsor uses unsupervised patient diaries, it will review the steps the sponsor has taken to ensure entries were made at the right times during the trial.

Q: What are some of the common obstacles during eCOA migration?

BB: When migrating from paper to electronic, care needs to be taken to ensure that the measurement properties of the original measure are unchanged. A lot of validation work goes into the development of a PROM, and so it’s important that new formats of the measure are comparable and that the original validation work equally applies. Format changes might inadvertently change the measurement properties of an instrument, and as a result, may affect the validity of the data. When formatting a paper questionnaire for a smartphone screen, it is common to make changes such as breaking blocks of questions into a single question per page, and transposing verbal response scales from a horizontal list across the page to a vertical list down the screen.

In 2009, the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) published a best practices taskforce report, with recommendations on what evidence is required to support paper-to-electronic migration. Since then, many studies have taken place and we have learnt that if eCOA best practices are followed, the measurement properties of common scales and measures are unaffected. A second taskforce report, published in 2023, revised those recommendations based on this growing body of evidence supporting paper and electronic COA measurement comparability. There are a number of good sources describing eCOA best practices, including those of the Critical Path Institute’s eCOA Consortium.3 However, perhaps the most comprehensive description of best practice standards can be found in the textbook I have co-authored, the second edition of which was published earlier this year.4

Q: What are the benefits of using electronic methods to capture data?

BB: Both the quality and the integrity of the data are improved with electronic formats. In terms of data quality, built-in logic checks and branching can eliminate some of the challenges around conflicting and missing data. Further, ePRO solutions often include reminders and alarms to help patients remember to complete their measures on time via their smartphone app. This drives complete datasets. Electronic solutions also prevent patients from inadvertently missing questions. Apps can either prevent progress until a response is made, or request patients indicate that the question has been intentionally skipped – eliminating ambiguity regarding reasons for missing values.
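The answer-or-confirm-skip rule described above can be sketched as a small validation routine. This is a minimal illustration, not any vendor's actual implementation; the function and field names are hypothetical.

```python
def resolve_item(response, intentionally_skipped=False):
    """Resolve a questionnaire item: a stored answer, a confirmed skip,
    or an error that blocks progress until one of the two is provided."""
    if response is not None:
        return {"status": "answered", "value": response}
    if intentionally_skipped:
        # A recorded reason for missingness, not a silent gap
        return {"status": "skipped"}
    raise ValueError("item unresolved: answer or confirm skip to proceed")
```

Because every item resolves to either an answer or an explicit skip, the dataset carries no ambiguously missing values for the data manager to query.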

When assessing data integrity, it’s about ensuring the timeliness of entry. Each response carries a time and date stamp, so when the FDA assesses the contemporaneousness of the data it can verify that entries were made within an appropriate time interval. A completion window can also ensure a patient completes a diary entry at the right time. For example, patients may be able to enter data only between 6pm and 10pm; outside that time frame, the window closes.
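A completion window of this kind can be sketched in a few lines. The 6pm–10pm window comes from the example above; the function names are illustrative assumptions, not a real product API.

```python
from datetime import datetime, time

WINDOW_OPEN = time(18, 0)   # 6pm
WINDOW_CLOSE = time(22, 0)  # 10pm

def entry_allowed(timestamp):
    """True if a diary entry falls inside the completion window."""
    return WINDOW_OPEN <= timestamp.time() <= WINDOW_CLOSE

def record_entry(responses, now):
    """Accept and time-stamp an entry only within the window."""
    if not entry_allowed(now):
        return None  # window closed: the app would lock the diary
    return {"responses": responses, "entered_at": now.isoformat()}
```

The stored `entered_at` stamp is what allows a reviewer to confirm, after the fact, that each entry was contemporaneous.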

Q: How can electronic data capture solutions and trial data monitoring strategies optimise endpoint reliability?

BB: When looking at clinician ratings, such as the Hamilton depression rating scale, investigators typically conduct a 30-minute interview with a patient. During that interview, the investigator must tease out the severity of the different aspects of depression against 17 items within that rating scale, by using probing questions. A key consideration is standardisation: ensuring every rater carries out the assessment in the same way and uses the same scoring approach. A data capture solution in this context can not only enable the data to be captured, but can prompt to ensure the assessment is carried out in a standardised way and with correct scoring. This serves to reduce the variability between raters and improve the chance of detecting a treatment-related difference, if one exists.

In this scenario, there are a few ways to use solutions to monitor endpoint quality throughout a trial. One way this can be done is by recording a patient interview. With an audio or video interview, a central rater can effectively QC the investigator ratings, and this can be used to provide feedback to ensure ongoing rating quality.

Blinded data analytics is another option: experienced data scientists examine the data as it is collected, applying statistical approaches to assess its consistency and variability and to identify sites or raters that may need an intervention (such as feedback or training) to ensure ongoing high quality across the study.

Similarly, when looking at patient-reported outcomes data, the same statistical approaches can be used to assess the consistency and reliability of the data, and even (in rare cases) to detect fraud.

Q: How can electronic data capture methods help detect fraud in clinical research?

BB: In various ways. For example, when looking at digit preference, there might be a pattern in responses showing a preference for certain digits – threes and sevens might appear more often than twos and eights. There might also be unusual patterns in data variability, or in completion behaviour amongst individuals or groups of patients, that may warrant further investigation.
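A basic digit-preference screen can be sketched as a chi-square test of whether the terminal digits of reported values are roughly uniform. This is an illustrative assumption about how such a check might be built, not a description of any specific product; 16.92 is the standard chi-square 5% critical value for 9 degrees of freedom.

```python
from collections import Counter

def digit_preference_chi2(values):
    """Chi-square statistic for uniformity of terminal digits 0-9."""
    digits = [abs(int(v)) % 10 for v in values]
    counts = Counter(digits)
    expected = len(digits) / 10
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))

def shows_digit_preference(values, critical=16.92):
    """True if the digit distribution departs from uniform at the 5% level."""
    return digit_preference_chi2(values) > critical
```

As with site-level variability checks, a positive result is a prompt for investigation rather than evidence of fraud in itself.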

Q: How can Signant Health’s solutions ensure reliability and consistency of data collected?

BB: There are a number of ways. First, by using good science and implementation best practices – ensuring that measurement strategies are in place to meet study objectives and that the implementation follows good practice, so that data are suitable for regulatory submission.

Second, by providing solutions with built-in logic to drive data quality – such as edit checks, branching logic, skipping rules and completion windows. 

Third, by encouraging PROM completion through in-app reminders and alarms, and email escalations to sites to enable proactive monitoring and encouragement of patient completion compliance. This leads to good completion rates, supporting reliable inference.

And finally, for clinician ratings, by providing rater training and qualification services to ensure ratings and scorings are conducted in a standardised manner across all sites and raters; and by using blinded data analytics to monitor rating quality and mitigate issues throughout the study.

References:

1 Ryan JM, Corry JR, Attewell R, et al. A comparison of an electronic version of the SF-36 General Health Questionnaire to the standard paper version. Quality of Life Research 2002; 11: 19-26.

2 Stone AA, Shiffman S, Schwartz JE, et al. Patient non-compliance with paper diaries. British Medical Journal 2002; 324: 1193-1194.

3 Critical Path Institute eCOA Consortium. Best Practices for Electronic Implementation of Patient-Reported Outcome Response Scale Options. Available from: https://c-path.org//wp-content/uploads/2014/05/BestPracticesForElectronicImplementationOfPROResponseScaleOptions.pdf; Critical Path Institute eCOA Consortium. Best Practices for Migrating Existing Patient-Reported Outcome Instruments to a New Data Collection Mode. Available from: https://c-path.org//wp-content/uploads/2014/05/BestPracticesForMigratingExistingPROInstrumentstoaNewDataCollectionMode.pdf

4 Byrom B, Muehlhausen W. Electronic Patient-Reported Outcome Measures: An Implementation Handbook for Clinical Research, Second Edition. ISBN: 979-838-792-2077. Available from: https://amzn.eu/d/eTC5nQ6


Discover more about clinical data collection methods by downloading the whitepaper below.