The concept of risk-based monitoring (RBM) is currently a widely discussed topic in the clinical trials arena. Three major organizations (the TransCelerate RBM workstream, the FDA, and the EMA) are promoting the concept, and each defines it separately in its own documents. Although there are some differences between the concepts these organizations promote, the major changes compared to traditional monitoring are fairly common:

  • Overall study risk assessment
  • Focus on Critical Processes and Critical Data, with special attention to study-specific areas
  • Early and ongoing risk assessment
  • Use of Risk Indicators and Thresholds
  • Optimal "monitoring-mix" – central, remote, on-site
  • Central Data Monitoring – early and continuous central oversight of data
  • Central statistical monitoring
  • Adjustment of monitoring activities based on the issues and risks identified throughout the study
  • Source data review

The expected benefits are also recognized widely: reduced cost of monitoring and maintained (or increased) data quality.

In this short article, I will present the key trends which I believe we will see in this area in the near future.

Concentration on study specific areas of concern

Although study-specific risk assessment and tailoring the monitoring plan to a specific study are key theoretical considerations of RBM, I think that some pharma companies, and a majority of CROs, are implementing RBM strategies that are quite generic. I am basing this notion on many informal interactions with colleagues, as well as a market review of informatics systems for centralized monitoring (CM) done recently by my company.

The generic approach in this context means that pharma companies are building a fairly narrow library of generic risk indicators, based mainly on operational data and some limited clinical data (e.g. recruitment rate, DM queries, protocol deviations, blood pressure variability, AE frequency). These risk indicators are then used to calculate an overall risk score for each site, with the monitoring level adjusted based on those scores. The upfront identification of study-specific risks and the set-up of therapy area- or study-specific risk indicators is limited. I believe that such approaches carry the risk that they will fail to identify the true risk areas that can invalidate the study results.
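To make the generic model concrete, here is a minimal sketch of how such a site risk score might be computed. The indicator names, weights, and cut-offs are all illustrative assumptions, not any company's actual methodology:

```python
from dataclasses import dataclass

@dataclass
class Indicator:
    name: str
    value: float       # observed site-level metric
    threshold: float   # illustrative tolerance limit
    weight: float      # relative importance in the overall score

def site_risk_score(indicators: list[Indicator]) -> float:
    """Weighted fraction of indicators that breach their thresholds."""
    total = sum(i.weight for i in indicators)
    breached = sum(i.weight for i in indicators if i.value > i.threshold)
    return breached / total if total else 0.0

def monitoring_level(score: float) -> str:
    # Illustrative cut-offs; a real plan would set these per study.
    if score >= 0.5:
        return "on-site visit"
    if score >= 0.25:
        return "remote review"
    return "routine central monitoring"

site = [
    Indicator("query rate per CRF page", 0.9, 0.5, 2.0),
    Indicator("protocol deviations per subject", 0.1, 0.3, 1.0),
    Indicator("AE rate vs study mean (ratio)", 1.1, 1.5, 1.0),
]
score = site_risk_score(site)  # only the query-rate indicator breaches -> 0.5
print(score, monitoring_level(score))
```

The weakness the article points to is visible in the sketch: every study reuses the same small indicator library, so a risk unique to one protocol never enters the score.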


Let's take a case study, a relatively recent major failure that could be attributed to monitoring: rivaroxaban. This drug was denied US registration in ACS, partially because of the low quality of the data generated. Reportedly, the biggest problem was missing data on patients lost to follow-up. It is questionable whether a generic risk-based monitoring approach, lacking a deep analysis of study risks, would have been able to prevent the failure.

Also, in the experience of my team of central monitors, it is apparent that study-specific risk indicators and metrics (e.g. median time from randomization to dosing) attract the attention of key decision makers in these projects.

I think the reason many companies take the described 'generic shortcut' is largely technical. One of the main difficulties with RBM implementation is the need for a centralized monitoring system built on a platform of integrated data. Obviously, creating a data platform that integrates only the fraction of data that is quite stable across studies is much easier than integrating and mapping huge clinical databases. Also, the interactions needed to identify study-specific risks require reaching across organizational boundaries, which is always difficult in big organizations.

Despite the technical difficulties, I believe that the future of RBM implementation in our industry depends on a significant concentration of monitoring efforts on study-specific risks. The central monitors should play a key role in aligning the monitoring plans to the most significant study risks. This will require engaging key decision makers (mainly Medical Directors) in risk discussions in the early stages of study setup. One of the main aims of these discussions should be the definition of tolerance limits for the key risks. Dedicated specialists should then translate the risk discussions into monitoring strategies, including data sources, measurements, thresholds, actions, escalations, and relaxations.
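The translation step described above (risk discussion into data sources, measurements, thresholds, actions, escalations, and relaxations) can be sketched as a simple rule structure. All field values here are hypothetical examples of what such a rule might contain:

```python
# Hypothetical monitoring rule derived from an early risk discussion.
rule = {
    "risk": "patients lost to follow-up",
    "data_source": "EDC visit-tracking dataset",
    "measurement": "percent of subjects with a missed scheduled visit",
    "threshold": 5.0,  # tolerance limit agreed with the Medical Director
    "action": "central monitor contacts site to confirm retention procedures",
    "escalation": "trigger on-site visit if above threshold for 2 review cycles",
    "relaxation": "return to routine review after falling below threshold",
}

def evaluate(rule: dict, observed_value: float, cycles_above: int) -> str:
    """Pick the response a rule prescribes for the current observation."""
    if observed_value <= rule["threshold"]:
        return rule["relaxation"]
    if cycles_above >= 2:
        return rule["escalation"]
    return rule["action"]
```

Writing the rule down in this form forces the risk discussion to produce concrete, checkable tolerance limits rather than a vague statement of concern.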

High frequency, high detail, automated centralized monitoring

Centralized monitoring is a key component of all RBM strategies. It allows tracking of the known risks, identification of the new risks, outlying data, underperforming sites, and so forth. In order to address the issues identified, the findings have to be communicated to monitors or central study teams.

The CM and RBM strategies implemented throughout the industry differ significantly in the frequency of analysis, level of detailed information conveyed to monitors or study teams, and the level of automation.

On one side there is a "low frequency, low automation" model. In this model the central monitors are doing the analysis using visual inspection of risk indicator-related data visualizations. Based on these, they select the high-risk sites where monitoring should be increased. The frequency of analysis is more or less monthly with only a fraction of sites being classified as needing action.

On the other end of the scale, there is the high-frequency automated model that we explore at AZ (AstraZeneca). Here, requests for monitoring actions are based on an automated analysis of data. The action requests are very specific (e.g. instruct the site to dose closer to randomization). The central monitor in this model oversees the automated processes while looking for new trends rather than analyzing well-established ones. Of course, the risk is the generation of too many meaningless actions, which is why the model requires an ongoing review of action correctness and an effective feedback channel from monitors.
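A minimal sketch of such automated action generation, using the randomization-to-dosing example above. The site data, tolerance limit, and action wording are illustrative assumptions:

```python
import statistics

# Hypothetical per-site data: days from randomization to first dose, per subject.
site_data = {
    "site_101": [0, 1, 0, 2, 1],
    "site_202": [3, 5, 4, 6, 4],
}

TOLERANCE_DAYS = 2  # illustrative study-specific tolerance limit

def generate_actions(site_data: dict) -> list[str]:
    """Emit a specific, actionable request for each site breaching tolerance."""
    actions = []
    for site, days in site_data.items():
        med = statistics.median(days)
        if med > TOLERANCE_DAYS:
            actions.append(
                f"{site}: median randomization-to-dosing time is {med} days "
                f"(tolerance {TOLERANCE_DAYS} days); "
                f"instruct site to dose closer to randomization"
            )
    return actions

for action in generate_actions(site_data):
    print(action)
```

Because each action carries the metric, the tolerance, and the requested behavior, the monitor receiving it needs little analytical work; the design cost shifts to reviewing whether the automated rules themselves remain correct.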

Going forward, I think the industry should continue exploring the high-frequency automated model for the following reasons:

  • This model is able to handle a higher number of risk indicators, allowing the implementation of therapy area- and study-specific ones
  • The model allows a more significant reduction of "untriggered" monitoring, as sites and data are monitored centrally more frequently and more closely
  • As the monitoring action requests are detailed and specific, the level of skill monitors need to address the requests decreases

Expected benefits

To remain a sustainable business, the pharma industry must effectively bring new medicines to market. To achieve this, we need to change many aspects of drug development, including decreasing the cost of study monitoring while avoiding rivaroxaban-like failures.

The benefits from the implementation of RBM will address both aspects. By reducing on-site monitoring, centralizing data review, and prioritizing data, RBM reduces the cost of monitoring. At the same time, aligning the monitoring effort to the key study risks will decrease the risk of regulatory filings being turned down over data quality issues.

 

*Marcin Makowski is the Associate Director, Centralized Monitoring at AstraZeneca