Artificial intelligence (AI) is becoming an increasingly important facet of the clinical trials landscape; however, trust in its ability remains a barrier to widespread adoption.
As its potential expands, the sector must work through several challenges to fully harness the technology and build trust in its outputs.
At Arena International’s Outsourcing Clinical Trials DACH 2025 conference, held in Zurich, Switzerland, on 12–13 November, attendees heard how the hurdles associated with AI can be overcome to drive greater efficiency.
These conversations come at a time when 80% of trials fail to meet their initial enrolment targets.
Promoting AI to late adopters
During the keynote session, Piotr Maslak, senior director and head of emerging technologies at AstraZeneca, noted a key challenge for AI is ensuring the technology’s widespread implementation.
“When running pilots, we are catering to a limited group of early adopters, who are forgiving as they are excited about the technology,” Maslak said.
However, among those later to adopt the technology, lower adoption rates are common, Maslak added. “Instead of looking at the data, we’re typing in a prompt into AI and getting an output, which some people do not trust.”
According to Maslak, this phenomenon highlights the necessity for change management. To achieve this, he believes that companies must look into enterprise training to make staff more comfortable with AI tools, while understanding their real use cases.
“When it comes to AI technology, it’s not a lift and shift; it’s a change of paradigm. As an industry, we need to change the way we work, our habits and how we interact,” Maslak said.
Siloing AI capabilities based on context
When it comes to implementing large language models (LLMs) into clinical trial processes, Maslak noted that the technology should be applied to specific use cases.
“LLMs are not 100% suited for mathematical operations, so working on data using generative AI (GenAI) might be tricky and requires a lot of considerations in technical solutions to ensure that you can position it to succeed,” Maslak said.
To achieve this, AstraZeneca employs a human-centric approach. “We look at AI like we look at people; in a team structure, there is never just one person that is a jack of all trades that can answer all your questions,” Maslak noted.
As a result, the big pharma company has built a team of AI analyst agents, each of which helps answer specific questions. To allow communication between these agents, AstraZeneca is experimenting with agent-to-agent connections, which first went live in January 2025.
However, Maslak mentioned that these technologies – much like humans – struggle with large data sources, as this scales the context they must operate in. To mitigate issues associated with AI’s lack of context, Maslak noted that GenAI’s application must be split into segments.
“In a silo, it’s much simpler, as an R&D or clinical operations agent will know the context from the controlled vocabulary, and it will know what to reference.”
This means that companies can route connections so that an agent does not access the whole database, and instead looks for the answer in a specific context. “LLMs look at data like humans do, so you need to have natural language instructions related to what is in the data,” Maslak concluded.
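The routing idea described above can be sketched as follows. All names and data here are illustrative assumptions, not AstraZeneca’s system: each agent is bound to one domain’s controlled vocabulary and documents, and a question is routed to the agent whose vocabulary matches best, so no agent ever queries the whole database.

```python
# Illustrative sketch of context-siloed agent routing; not a real framework.
from dataclasses import dataclass, field

@dataclass
class SiloAgent:
    """An agent bound to one domain's controlled vocabulary and documents."""
    domain: str
    vocabulary: set[str]
    documents: dict[str, str] = field(default_factory=dict)

    def lookup(self, key: str) -> str:
        # The agent only sees its own silo, never the whole database.
        return self.documents.get(key, f"[{self.domain}] no match for '{key}'")

def route(question_terms: set[str], agents: list[SiloAgent]) -> SiloAgent:
    """Send the question to the agent whose vocabulary overlaps it most."""
    return max(agents, key=lambda a: len(a.vocabulary & question_terms))

# Hypothetical silos for two of the domains mentioned above.
clin_ops = SiloAgent("clinical-operations", {"enrolment", "site", "visit"},
                     {"enrolment": "Enrolment target: 200 patients"})
rnd = SiloAgent("R&D", {"assay", "compound", "pipeline"},
                {"compound": "Lead compound record (placeholder)"})

agent = route({"enrolment", "site"}, [clin_ops, rnd])
print(agent.lookup("enrolment"))  # resolved inside the clinical-operations silo
```

Because each agent’s vocabulary doubles as its routing key, the same controlled terms that give the agent its context also decide which questions reach it.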
