In Industry 4.0, technology is continually evolving, improving the efficiency, productivity and safety of operations for manufacturing industries and even our home lives. Keeping up to date with technology gives companies an edge over the competition, but more importantly it reduces risk and improves traceability. The COVID-19 pandemic has been a catalyst in accelerating change, especially in healthcare-related industries: staying safe means more people need to work from home and more patients need remote support.

One of the most revolutionary changes in the way clinical trials have been handled in recent decades has been the shift from pen-and-paper processes to electronic data streams. However, predictive analytics, AI and machine learning have opened a new world of possibilities for data collation and analysis. With clinical trials growing on a global scale, what challenges are life science organisations facing today?

Working with the Tufts Center for the Study of Drug Development, eClinical aimed to quantify the shifts it was seeing across its client base. The study found a significant increase in the number of data sources that organisations draw from, such as biomarkers, physiological sensors, patient engagement and specialty labs. All of this data needs to be analysed and stored, and organisations, most of which rely on outsourcing models, face the challenge of finding partners that can integrate these data sources and provide near real-time access to them.

Implementing a data pipeline strategy may be the solution. Data pipelines are used increasingly across all industries. They allow companies to consolidate data from multiple sources to make it available for analysis and visualisation. A key characteristic of a data pipeline is the automation of this process.
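To illustrate the idea, the short Python sketch below shows a minimal pipeline of this kind: records are extracted from two hypothetical feeds, standardised into one common schema and loaded into a central store. The source names, field names and formats are assumptions made purely for illustration, not a description of any specific vendor’s system.

```python
# Illustrative sketch of a data pipeline: extract from several (hypothetical)
# sources, standardise the records into one schema, load into a central store.
import csv
import io
import json
from datetime import datetime, timezone

def extract_lab_csv(raw_csv: str) -> list[dict]:
    """Parse a specialty-lab export delivered as CSV."""
    return list(csv.DictReader(io.StringIO(raw_csv)))

def extract_sensor_json(raw_json: str) -> list[dict]:
    """Parse readings from a wearable-sensor feed delivered as JSON."""
    return json.loads(raw_json)

def standardise(record: dict, source: str) -> dict:
    """Map source-specific fields onto one common schema."""
    return {
        "subject_id": record.get("subject_id") or record.get("patient"),
        "measurement": record.get("test") or record.get("metric"),
        "value": float(record.get("result") or record.get("value")),
        "source": source,
        "loaded_at": datetime.now(timezone.utc).isoformat(),
    }

def run_pipeline(central_store: list[dict]) -> None:
    """One automated pass: extract, standardise, load."""
    lab_raw = "subject_id,test,result\nS001,ALT,32\nS002,ALT,41"
    sensor_raw = '[{"patient": "S001", "metric": "heart_rate", "value": 72}]'
    for rec in extract_lab_csv(lab_raw):
        central_store.append(standardise(rec, "specialty_lab"))
    for rec in extract_sensor_json(sensor_raw):
        central_store.append(standardise(rec, "physiological_sensor"))

store: list[dict] = []
run_pipeline(store)
print(f"{len(store)} standardised records available for analysis")
```

In a real deployment the extraction steps would be scheduled connections to live systems rather than inline strings; the point of the sketch is the repeatable extract-standardise-load pattern rather than any particular integration.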

According to eClinical’s Chief Marketing Officer Sheila Rocchio: “There’s a much broader and richer ecosystem of data coming into research, and an increase in the variety, volume and velocity of all these data. Many of these new streams of digital data were not available even ten years ago. If you think of the Internet of Things model, there’s data flowing from many different places, and the purpose of a data pipeline is to build an infrastructure where you can consume these streams of data to garner new insights, without the need for manual work.

“A data pipeline sets up the connection to pull in information and to standardise it in a form that’s consumable. It can then share the information with end users, who are making decisions in near real time and collaborating around the same insights. Creating and implementing a clinical data pipeline strategy automates that process. It needs setting up just once as part of your clinical architecture, and you reap the benefits of it across numerous places in the clinical development process.

“In clinical research, a key benefit of having a data pipeline is that you can handle all your data streams, including operational data, using technology infrastructure and a repeatable process. With mechanisms in place, you can bring on a new data source automatically and check that the data meets expectations and standards. You can pull this new data stream into your central data hub and store it in a place you can access again in the future. This data is then available for comparison across studies, which has previously been a challenge in research, as self-service analytics capabilities are often limited to one study.”
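As a rough illustration of what such automated checks might look like, the sketch below validates records from a hypothetical new data source against a small set of expectations before loading them into a central hub. The expectations, field names and thresholds are assumptions for illustration only, not eClinical’s actual checks.

```python
# Illustrative sketch of onboarding a new data source: incoming records are
# checked against expected standards before being pulled into the central hub.
EXPECTATIONS = {
    "required_fields": {"subject_id", "visit", "value"},
    "value_range": (0.0, 500.0),  # assumed plausible range for this feed
}

def meets_expectations(record: dict) -> bool:
    """Return True if the record has the required fields and a plausible value."""
    if not EXPECTATIONS["required_fields"].issubset(record):
        return False
    low, high = EXPECTATIONS["value_range"]
    try:
        return low <= float(record["value"]) <= high
    except (TypeError, ValueError):
        return False

def onboard_source(records: list[dict], hub: list[dict]) -> tuple[int, int]:
    """Load conforming records into the hub; count those flagged for review."""
    accepted = rejected = 0
    for record in records:
        if meets_expectations(record):
            hub.append(record)
            accepted += 1
        else:
            rejected += 1
    return accepted, rejected

hub: list[dict] = []
new_feed = [
    {"subject_id": "S001", "visit": "V2", "value": 98.6},
    {"subject_id": "S002", "visit": "V2"},  # missing value -> flagged for review
]
print(onboard_source(new_feed, hub))  # (1, 1)
```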

The Tufts-eClinical Data Strategy & Transformation Study found that 5% of companies were still relying on manual efforts to integrate data, using software such as Excel or SAS to combine different sources and silos. This can lead to numerous reconciliations, steps and challenges for stakeholders trying to access the most up-to-date data sources, delaying analysis. For life science companies relying on up to eight different data vendors in a single trial, this means delays in data access, visibility and oversight, and it reduces the control clients have over their data and their ability to manage trial outcomes.

eClinical works with organisations to help them combat these challenges and develop organisational data strategies that better align with today’s more chaotic data environment. “Companies need to determine their data flow, both for clinical trials and to benefit the organisation,” explains Rocchio.

“They ask when the right time is to combine different data streams, and whether it makes sense to keep any types of data on a different path. What’s the right time to provide visibility and analytics? And, based on the different individuals involved in the clinical development process, who should be given access, and when?”

The Tufts study also found that organisations that integrated data earlier experienced far fewer delays in cycle times for the ‘Last Patient Last Visit to Database Lock’ metric. These organisations experienced 40% faster cycle times on average, and were much more likely to have developed predictive analytics, something every life sciences organisation is keen to figure out. Predictive analytics can better support trial operations and identify the actions needed to optimise enrolment and data quality, while proactively mitigating risks.

elluminate is eClinical’s end-to-end clinical trial data platform that helps deliver a data pipeline to clinical development organisations. It incorporates automated ingestion of all types of data, with numerous APIs to other systems including EDC, eCOA, IVRS, eTMF and labs.

Data flows through the elluminate platform and is published to numerous stakeholders across various functions via a data review workbench. This includes data management, medical monitors, clinical operations and biostatistics, enabling faster review, collaboration and insights.

The data mapping product within elluminate supports data transformation and curation, unifying and automating data from a variety of sources, and in a number of formats, for submission and analytics. Trial data is stored centrally for integrated patient profiles and analytics, allowing for comprehensive data visualisation and reporting.
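As a generic illustration of declarative data mapping, the sketch below translates records from two hypothetical source formats into a single target layout. The mapping rules and target field names are assumptions made for illustration, not elluminate’s actual mapping configuration.

```python
# Generic illustration of data mapping: declarative rules translate fields from
# different source formats into one target layout for analytics and submission.
MAPPING_RULES = {
    "central_lab": {"USUBJID": "subject_id", "LBTEST": "test_name", "LBORRES": "result"},
    "wearable": {"USUBJID": "patient", "LBTEST": "metric", "LBORRES": "reading"},
}

def apply_mapping(record: dict, source: str) -> dict:
    """Transform one source record into the common target layout."""
    rules = MAPPING_RULES[source]
    return {target: record.get(source_field) for target, source_field in rules.items()}

lab_record = {"subject_id": "S001", "test_name": "Glucose", "result": 5.4}
wearable_record = {"patient": "S001", "metric": "steps", "reading": 8200}
print(apply_mapping(lab_record, "central_lab"))
print(apply_mapping(wearable_record, "wearable"))
```

Keeping the rules as data rather than code is what makes this kind of mapping repeatable: adding a new source format becomes a configuration change rather than a new integration project.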

Rocchio adds: “Companies should first build data pipelines and centralise their data on one reliable platform, which delivers significant value and cycle-time benefits. Once all their data is available, they have a tremendous opportunity to leverage AI and machine learning algorithms, and to deliver greater insights to research teams, ultimately enhancing patient experiences and delivering faster times to market.”