The Reality of Data Management in Medical Device Trials

12th August 2015 (Last Updated July 16th, 2018 10:11)

Adrian Orr, Clinical Research Director, Haemonetics, explores the challenges of conducting clinical trials in the blood management space


Over the past decade, connectivity and integration of data have been refined to such an extent that virtually any digital transaction can be performed remotely on a mobile device. This certainly applies to the financial, commercial and entertainment industries, but in medical device clinical research - not so much. The technology certainly exists, both within medical devices and within investigational sites, to generate data electronically, download it to an electronic data capture (EDC) platform, clean the dataset and then analyze the data for reporting. Many technologies exist to assist with each element of a clinical trial, and there is growing interest in reducing trial costs by automating data collection, monitoring remotely and uploading efficiently for statistical analysis. Yet while the clinical trial industry seems enthusiastic about digitally connecting your data, the reality is somewhat different. When the point of execution arrives at trial initiation, it is rare that data flows seamlessly from the site of generation to the point of reporting. As technology moves ahead to connect clinical trial data seamlessly, the question may be: is it worth it?

Haemonetics conducts clinical trials in the blood management space, typically in apheresis or whole blood donations. These trials are ideal for automation as subjects are regular blood donors enrolled at well-established research sites that perform highly standardized procedures and laboratory assays. The concept for data generation is usually the same in each trial:

  • Enrollment
  • Demographics
  • Concomitant medications
  • Blood donation procedure
  • Adverse event reporting
  • Blood processing - filtering, separating into RBCs, Plasma and / or Platelets
  • Blood component assays - routine tests over the storage period such as complete blood count (CBC), pH, hemoglobin, ATP, glucose, lactate.

So why is extracting, collecting and processing data from these standardized tests, conducted in routine fashion, so difficult? At each stage there are options to generate data electronically, connect it and upload it to EDC, but the nuances of each trial make this difficult to achieve technically, and within a reasonable budget and timeframe.

Automating and connecting donor blood collection is a recent development, with products such as Donor Doc and Donor Doc Phlebotomy, but even if a site is using an electronic system, it is unlikely to match the exact data points needed for the clinical trial. So source documentation for demographics and concomitant medications needs to be created in the medical records, and paper remains the preferred option.

The blood donation itself is straightforward, with basic data generated by blood collection mixers. Apheresis procedures, however, generate significant data, since blood is separated by centrifugation, the required blood components are collected and the remainder is returned to the donor. In either case, trial-specific data is again needed, and the formats of data exported by even our own devices are not configured for clinical research. Importing directly into EDC would therefore be difficult without filtering and processing huge volumes of data to parse out what is relevant to the trial.
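The filtering step described above can be sketched in a few lines. This is only an illustration: the column names and export format below are hypothetical, not the actual output of any Haemonetics device.

```python
import csv
import io

# Hypothetical apheresis device export: many operational columns,
# only a few of which are needed for the trial CRF.
RAW_EXPORT = """donor_id,run_time_min,pump_speed,flow_alarm,plasma_vol_ml,platelet_yield_e11,ac_ratio
D001,52,60,0,612,3.4,10
D002,48,58,1,598,3.1,10
"""

# Illustrative subset of fields the trial actually collects.
TRIAL_FIELDS = ["donor_id", "run_time_min", "plasma_vol_ml", "platelet_yield_e11"]

def extract_trial_records(raw_csv, fields):
    """Keep only the columns the trial CRF needs, dropping the rest."""
    reader = csv.DictReader(io.StringIO(raw_csv))
    return [{f: row[f] for f in fields} for row in reader]

records = extract_trial_records(RAW_EXPORT, TRIAL_FIELDS)
print(records[0])
# {'donor_id': 'D001', 'run_time_min': '52', 'plasma_vol_ml': '612', 'platelet_yield_e11': '3.4'}
```

In practice, each device model and firmware version may export a different layout, which is why this filtering rarely comes for free in a trial budget.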

Adverse events are minimal compared to traditional trials (vasovagal reactions, hematomas, citrate reactions), but electronic reporting of adverse events in the blood donation industry is not typical, so all reporting is manual and differs slightly between sites.

Blood processing tends to be product-specific, and if it is not conducted as part of the apheresis procedure, the data generation will be manual. Flow cytometers and blood gas analyzers come with connectivity software designed to link to Laboratory Information Systems (LIS). Technically, a trial EDC platform could be connected to a site's LIS and data downloaded, but the result is usually an excessive data dump that Data Managers need to sort through. Only a few laboratory-based assays are conducted on equipment that can be connected to a network; many remain manual or are performed on standalone devices where connectivity would be laborious.
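A sketch of what sorting through such an LIS dump involves, under the assumption (hypothetical sample IDs and assay codes) that the export arrives as flat rows covering every test the laboratory ran, not just the trial's schedule:

```python
# Hypothetical LIS export rows: (sample_id, assay_code, value, units).
# The dump includes assays that are not part of the trial's testing schedule.
LIS_DUMP = [
    ("S01", "CBC-HGB", 13.2, "g/dL"),
    ("S01", "NA", 140.0, "mmol/L"),   # not in the trial schedule
    ("S01", "PH", 7.1, ""),
    ("S02", "GLU", 412.0, "mg/dL"),
    ("S02", "TSH", 2.1, "mIU/L"),     # not in the trial schedule
]

# Illustrative set of assays the protocol actually calls for.
TRIAL_ASSAYS = {"CBC-HGB", "PH", "GLU", "LAC", "ATP"}

def filter_trial_assays(rows, wanted):
    """Discard LIS rows for assays outside the trial's testing schedule."""
    return [row for row in rows if row[1] in wanted]

trial_rows = filter_trial_assays(LIS_DUMP, TRIAL_ASSAYS)
```

Even with such filtering in place, a Data Manager still has to confirm that the retained rows map to the correct storage-period timepoints, which is where the manual effort described above tends to concentrate.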

The reality remains that data in medical device trials, particularly in the blood management space, is largely derived from paper sources that need to be manually entered into EDC. In some respects this may not be a bad thing. The automation of data flow can be highly convenient, and the ability to see data remotely, monitor it and upload it to a statistician can be very appealing. Increasingly there is a "hands-off" approach to data management, relying on complex edit checks to filter out erroneous data, remote monitoring to save the cost of travel, and site monitors to report back data issues. In medical device trials this is a dangerous approach, as the quality of data can only be appreciated by assessing how each CRF element was generated. The manual processes that still exist provide telling evidence of how well the trial was conducted. Only seeing the electronic version of the data, and assuming that the Data Manager's edit checks have caught all possible issues, leaves the sponsor vulnerable: during physical audits, regulatory bodies would likely discover the trails of erroneous data.

The same applies to the analysis of the data. Using SAS, based on a Statistical Analysis Plan, to generate the multiple iterations of tables, listings and figures, and assuming that the biostatistician's QC methods will have removed any erroneous calculations, leaves a similar risk of over-reliance on programmed analysis. Careful review of the units used, outliers, the subject disposition listing and the calculations applied should all be done manually and separately from the biostatistician's SAS programming. In our experience, many errors occur simply because a site provides different units and this is not identified in the analysis. Understanding which data can be used for each subject quickly becomes complex when subjects withdraw halfway through the trial, or when subsets of testing become non-evaluable due to deviations while other data remains evaluable. Cross-checking that biostatisticians and data managers are compiling the correct datasets is difficult to do in any automated fashion.
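The units problem mentioned above is concrete enough to illustrate. The sketch below is a minimal, hypothetical edit check for hemoglobin values, assuming the trial's expected unit is g/dL and that a site occasionally reports in g/L; the conversion table and flagging behavior are illustrative, not part of any described system.

```python
# Expected unit for hemoglobin in this hypothetical trial.
EXPECTED_UNIT = "g/dL"

# Known conversion factors into the expected unit (1 g/L = 0.1 g/dL).
CONVERSIONS = {("g/L", "g/dL"): 0.1}

def normalize(value, unit):
    """Return (value in the expected unit, flagged-for-review bool).

    Raises if the unit cannot be converted, forcing a manual query
    rather than silently passing a mismatched value into analysis.
    """
    if unit == EXPECTED_UNIT:
        return value, False
    factor = CONVERSIONS.get((unit, EXPECTED_UNIT))
    if factor is None:
        raise ValueError(f"Cannot convert {unit} to {EXPECTED_UNIT}")
    return value * factor, True  # converted, but flagged for manual review

val, flagged = normalize(132.0, "g/L")  # site reported in g/L
# val is approximately 13.2 g/dL and the record is flagged
```

The point of the flag is exactly the article's point: the conversion can be automated, but deciding whether 132 was truly g/L or a typo still takes a human looking at the source.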

So while the elements of data generation in clinical trials are becoming digital and connected, the robustness of the regulatory submission will always need confirmation through a hands-on, manual approach, to ensure that data is not simply churned out from the site to the regulatory body. The responsibility always remains with the sponsor to ensure the integrity of data generated by investigational sites. This is best done by spending time with the data and getting to understand what it is telling you. Connecting the various data flows will help, but ultimately the management of clinical trial data is an intellectual challenge that humans are still best suited to.


Adrian Orr
Clinical Research Director