Artificial Intelligence (AI) is increasingly framed as a transformative tool that can address inefficiencies across therapeutic research and development (R&D). However, scaling its use beyond disparate pilots, while addressing potential patient concerns, remains a key challenge.
With an abundance of AI-based solutions entering the market, industry experts at the Outsourcing in Clinical Trials Europe 2026 Conference shared insights into how these technologies are being embedded in existing workflows, from accelerating drug discovery to optimising day-to-day operations.
Currently, AI-integrated workflows are centred on a human-in-the-loop approach, whereby a human reviews each step of the process, said Piotr Maślak, senior director and head of emerging technologies at AstraZeneca. The goal is to progress to a human-on-the-loop strategy, in which a human supervises the workflow rather than assessing every output, he added. The credibility of AI-based tools is central to this evolution towards a more automated process, he said.
The “real” applications of AI in clinical trials include patient screening, criteria intelligence, data mining, molecular matching, and document intelligence, said Maślak. Investable strategies for future benefit include agentic orchestration and the use of digital twins as synthetic control arms, he added. Other applications, such as autonomous trial design and execution, remain more speculative, he said.
Throughout the conference, which took place in Barcelona, Spain, 6–7 May, the effective and ethical implementation of AI emerged as a key theme.
Strategies for effectively integrating AI into the workplace
AI offers an opportunity to streamline and accelerate operational processes in clinical research, which are often hindered by inefficiencies, said Maślak. “We are drowning in science and starving for execution,” he added. Nonetheless, small incremental gains in everyday activities can have a substantial impact, he said.
Currently, the best use cases for AI integration are workflows with high burden and repeatability, said Maślak. Companies should “start small” with simple, low-risk processes where benefits can be gained quickly, he added. He warned against the so-called “pilot trap,” whereby companies launch numerous disconnected pilots, such as AI chatbots, without an effective operating model.
Such pilots often fail due to weak talent plans, poor change management, or loose governance, he explained. To illustrate this, he cited a McKinsey survey of life sciences companies that reported using generative AI, of which only 32% had taken steps to scale the technology, and just 5% viewed it as a key differentiator driving value.
Speakers in a separate panel discussion at the conference emphasised the importance of training employees to use AI-based tools. “We have solutions, but on the other hand, we have to learn how to use them,” said Kamil Sitarz, COO and management board member of the oncology biotech Ryvu Therapeutics.
Despite its importance, such training may be challenging from a resource perspective, especially for smaller companies, said Sitarz. Another McKinsey report suggests that training people to use a new technology costs five times as much as the tool itself, said Maślak.
Workflows best suited to early AI implementation
The strongest evidence for AI use lies in participant recruitment, said Maślak. Matching systems such as TrialGPT have reduced screening time by approximately 42%, while OncoLLM has cut review time to 3–12 minutes and increased accruals by up to 39%, he added.
At AstraZeneca, Maślak and his team are integrating AI into simple, repetitive procurement tasks in vendor selection to streamline processes and reduce burden. This includes using AI to analyse structured data, such as cost-based information, which is more easily comparable and mostly used in lower-risk contracts, he said. AI can also identify missing information in a proposal against a set list of criteria, accelerating requests for additional information and reducing back-and-forth communication, he added.
Meanwhile, as a small biotech, Lund, Sweden-based Alligator Bioscience is likely to purchase AI solutions from a vendor rather than build its own, said Karin Nordbladh, the company’s director of clinical operations for immuno-oncology. Alligator is still in the exploratory phase of integrating AI into daily work, with more technically skilled employees pioneering its use internally and sharing learnings with others, said Nordbladh.
Importance of transparency and consent
According to a survey conducted by the Center for Information and Study on Clinical Research Participation (CISCRP) in 2025, 75% of 12,887 respondents reported that they would be “somewhat” or “very” comfortable with AI being used to analyse their medical data. However, 89% said that they think it is “somewhat” or “very” important that the use of AI in clinical research be disclosed.
These findings are particularly pertinent because trust in pharma remains low and lags behind that in other organisations involved in clinical research, said Behtash Bahador, senior director of community engagement and partnerships at CISCRP. Only 18% of respondents said they had a lot of trust in pharmaceutical companies conducting clinical research, compared with 43% for government research organisations. Given these findings, “the last thing we need is losing even more trust by not disclosing the use of AI,” said Bahador.
Patient consent for participation in clinical studies should include a detailed and accessible explanation of exactly how AI is used, noted Blanka Hezelova, associate director at GSK. Some patients may not want their data to be analysed by AI, she added. Furthermore, AI can introduce algorithmic bias, meaning not all participants experience AI-supported tools in the same way. Patients who perceive themselves to be at higher risk of bias, such as those from minority groups, may therefore be less willing to consent to research involving AI, which could reinforce existing inequalities, she explained.
In AstraZeneca’s AI-assisted vendor selection programme, a comprehensive agreement details which data will be processed by AI and how, said Maślak. This transparency eases potential vendor concerns about data security, he added.
