Compare the role of the data manager today with what it entailed in, say, the 1970s, and one thing is irrefutable: the role has changed markedly over the years. Whereas before you had trial master files (TMFs), now you have electronic TMFs. Patient-reported outcomes (PROs) have likewise gone digital as ePROs.
As technologies advance, so too does the industry – however slowly it may seem to embrace change! But how will the industry, and more specifically its data managers, adapt to the technological advances that lie in wait?
In this Industry Viewpoint, CTA Editor Henry Kerali sits down with Dr Raphaela Schnurbus, Clinical Solutions Director at OPIS. Here, she explains how data managers, at sponsors and CROs alike, must stay on their toes as the industry readies itself for the next technological paradigm.
Henry Kerali: What are some of the unique challenges data managers face in clinical trials?
Dr Raphaela Schnurbus: The days of thinking about data management as “boring data entry and cleaning” are long gone. Today, clinical trials demand highly sophisticated study databases, and the design of a suitable case report form (CRF) is fundamental. Risk-based monitoring, and processes that ensure data integrity, data encryption and real-time data availability, are now standard practice.
Technologies such as voice data entry, eCOA and real-world evidence (RWE), together with new regulations such as ICH GCP E6 (R2) and the GDPR, put the spotlight on how pharma companies and CROs respect guidelines and how those guidelines are implemented. GCP inspectors still look for the gap between regulation and implementation.
What we need now are data scientists in place of traditional data managers – people with programming skills and a good deal of statistical skill as well. Data-handling teams of statisticians, data managers, web specialists and system developers are crucial, and data management needs redefining in this bigger context.
HK: You mentioned the General Data Protection Regulation (GDPR) – what impact could GDPR play in data collection processes?
RS: The meaning of data protection regulation is self-explanatory. The question is how to minimize security risks, prevent attacks, anonymize data and control user access, to name just the principal challenges.
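To make the anonymization point concrete, here is a minimal, purely illustrative sketch of one common approach: pseudonymizing patient identifiers with a keyed hash so records can still be linked across datasets without exposing identities. The key name and record fields are assumptions for illustration, not anything described by Dr Schnurbus; real trials would use a governed pseudonymization service with key management.

```python
import hashlib
import hmac

# Assumption: the secret key is held by the data controller, never stored
# alongside the data. Anyone without the key cannot reverse the token.
SECRET_KEY = b"keep-this-key-out-of-the-dataset"

def pseudonymize(patient_id: str) -> str:
    """Return a stable, irreversible token for a patient identifier."""
    return hmac.new(SECRET_KEY, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

record = {"patient_id": "IT-0042", "systolic_bp": 128}
safe_record = {**record, "patient_id": pseudonymize(record["patient_id"])}

# The same input always yields the same token, so datasets can still be joined.
assert safe_record["patient_id"] == pseudonymize("IT-0042")
```

Because the token is deterministic, linkage across systems survives; because the key stays separate, re-identification is controlled by whoever controls the key.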
The whole process of data collection and data handling has now become much more regulated. It’s now important to have appropriate documents proving how data was obtained and processed. With GDPR, companies have to be more accountable for their handling of people’s personal data. We’ll have to re-think and update our definitions of data transfer processes between companies and systems, as well as the accessibility of data to trial stakeholders. We have to prepare for eventual objections to data processing.
Moreover, companies might want to take time to assess the impact on their budgets because it is going to bring costs and generate a need for extra resources to be compliant.
HK: Could you see potential issues arising with GDPR that could have a bearing on how clinical trials are conducted? Or do you think we will see minimal impact?
RS: We are definitely going to see an impact on clinical trial conduct, and especially on clinical trial data. Three potential issues are:
- Anonymizing data is not a new concept in our industry, but clearly defined processes for doing so probably are.
- Our industry relies on consent. A lot of technical explanation about why people’s data are being collected and processed, and how long and where data are going to be kept, will have to be documented somewhere, and informed consent forms (ICFs) might become even longer.
- The GDPR potentially gives individuals more power to access the data held and collected about them. Patients will definitely want to know what has happened, as sharing trial results with them will become standard. This might in fact be the biggest issue of all – it will certainly mean rethinking trial design and trial conduct.
HK: How do you see machine learning and artificial intelligence changing the role of data managers?
RS: The best answer I ever got was from an interview with Hilary Mason, the founder of Fast Forward Labs, a machine intelligence research firm.
“Ten years ago, it was all about big data — about whether we could build the infrastructure to get all the data into one place and to query it. Then came analytics, which is essentially counting things to answer questions and create business value or product value. New software made doing this affordable and accessible. And that led to the rise of data science, which is about counting things cleverly, predicting things, and building models on data.
“Machine learning is a set of tools inside data science that let you count things cleverly and incorporate feedback loops. We began using the models to get more data from the world, and then fed the data back into those models so that they improved over time. Now, today, we talk about AI, which basically means relying on machine learning, and specifically deep learning.
“Summing up: you can’t do AI without machine learning. You also can’t do machine learning without analytics, and you can’t do analytics without data infrastructure.”
Data scientists create that data infrastructure, and if we want to benefit from AI, we’ll need clever people to decide what goes in and in what form it needs to come out.
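The feedback loop Mason describes – a model acts, the world returns an outcome, and the outcome is fed back to improve the model – can be sketched in a few lines. This is an illustrative toy (a one-parameter online mean estimator), not anything specific to OPIS or to clinical trial systems:

```python
# Toy feedback loop: predict, observe the real outcome, update the model.
class OnlineMean:
    """Predicts the running mean of all outcomes observed so far."""

    def __init__(self) -> None:
        self.n = 0
        self.mean = 0.0

    def predict(self) -> float:
        return self.mean

    def update(self, outcome: float) -> None:
        # Feed the observed outcome back in; the model improves over time.
        self.n += 1
        self.mean += (outcome - self.mean) / self.n

model = OnlineMean()
for observed in [10.0, 12.0, 11.0, 13.0]:
    _ = model.predict()    # act on the current model
    model.update(observed) # feedback from the world

print(round(model.predict(), 2))  # → 11.5
```

Real machine-learning systems follow the same shape with far richer models: predictions generate new data, and that data retrains the model.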
HK: How can the industry best leverage these technological advances?
RS: There are numerous ways the industry can leverage these advances, and it’s happening everywhere. BUT for us to really benefit, it is important not to fall into a “believe everything,” “magical quick fix” trap. People and patience will continue to make the difference. That means changing our mindset and mentality, while rethinking clinical research and drug development!
HK: Lastly, what advice and key considerations would you give to data managers as we enter this new age of GDPR and machine learning?
RS: Keep learning! Prepare for a lot of change management and never lose sight of our main goals: Safeguarding the patient and ensuring the reliability of study results.