Do you run your trials by fact?

Most clinical trials operate in a decidedly non-Quality-by-Design manner. When protocols are designed and studies are conducted, there is no systematic approach to determining how likely the end result is to be successful. It follows that there is no quantified understanding of how each component of a study contributes to timely, error-free completion in a repeatable and predictable manner. And yet there should be this level of confidence; clinical trials should never be a black box where the outcome is unknown until it occurs. Will the trial be error-free? Will it be completed on time? If I asked, "How do you know your clinical trial process will deliver what you intend it to deliver?", the exact wrong answer is "I'll know it when I see it." Hope is not a viable business strategy.

Predictive analytics are only as good as the models used to formulate them, and these models often fail to account properly for real-time effects on the study. As the physicist Niels Bohr put it, "Prediction is difficult, especially about the future." Using a predictive model, and discovering only when the trend goes off-track that the model was wrong, is no way to guarantee success. More importantly, it is no way to understand all of the factors that go into the successful completion of a verifiable study data package.

Adding review steps isn't designing quality in

Within trials, there are often manual steps to ensure that data are populated appropriately. These layers of inspection are, by their very nature, lagging indicators of where frequent defects occur in the process. Just as a spider's web is found in corners and near windows and doors, where other pests are most likely to enter, inspection steps are positioned to net and screen out mistakes and omissions before they can lead to a more significant issue. Looking at thousands of processes across a variety of businesses has shown me that inspection steps are put in place as a preventive measure wherever errors occur most commonly. However…

Inspection is not prevention

Though these serialized manual review steps are often described as being in place to 'ensure' safety and quality, those are misapplied rationales for manual reviews. In most cases, both within healthcare and in industries outside pharma and healthcare, when a problem has a high magnitude of consequence and cannot be tolerated, processes are designed to prevent the defect in the first place. That means taking manual inspections out of the process and relying instead on preventive measures at the source (i.e., quality at the source) that make missing or incorrect entries impossible. In fact, the credibility of clinical trial science is only as good as the integrity of the underpinning data. The Shared Health and Research Electronic Library (SHARE) is a form of standards-based automation; systems like this prevent errors in data content by the design of the system, so the data that enter the protocol are more accurate and reliable. I have conducted substantial research on the psychology and performance of manual reviews (inspections), and on the problems that result from diffusing responsibility away from properly designing a process by adding more and more reviews. The real solution is to improve the design.
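To make quality at the source concrete, here is a minimal sketch in Python of the principle rather than any particular EDC product: validation rules attached to the point of entry, so that a missing or implausible value can never reach the study database. The field names and plausibility ranges are hypothetical, chosen only for illustration.

```python
# Minimal sketch of "quality at the source": validate each record at the
# moment of entry, so missing or out-of-range values cannot propagate
# into the study database. Field names and ranges are hypothetical.

REQUIRED_FIELDS = {"subject_id", "visit", "systolic_bp"}
VALID_RANGES = {"systolic_bp": (60, 260)}  # plausible mmHg bounds

def validate_record(record: dict) -> list[str]:
    """Return a list of defects; an empty list means the record is accepted."""
    defects = []
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            defects.append(f"missing required field: {field}")
    for field, (low, high) in VALID_RANGES.items():
        value = record.get(field)
        if value is not None and not (low <= value <= high):
            defects.append(f"{field}={value} outside plausible range [{low}, {high}]")
    return defects

def enter_record(database: list, record: dict) -> bool:
    """Accept the record only if it passes validation at the source."""
    defects = validate_record(record)
    if defects:
        # The system blocks the entry and says why, immediately, instead
        # of a reviewer discovering the defect weeks later.
        print("Rejected:", "; ".join(defects))
        return False
    database.append(record)
    return True

db: list = []
enter_record(db, {"subject_id": "S-001", "visit": 1, "systolic_bp": 128})  # accepted
enter_record(db, {"subject_id": "S-002", "visit": 1, "systolic_bp": 820})  # rejected: typo caught at entry
enter_record(db, {"subject_id": "S-003", "visit": 2})                      # rejected: missing value
```

The design choice is that the defect is surfaced to the person entering the data, at the moment of entry, when correction is cheapest; no downstream reviewer is needed to catch it.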

Designing IN quality

Here's an example: within trials, there are different degrees and types of discrepancies and protocol deviations, which are ways of collecting information about how the process is not performing properly. But most companies don't get much further than collecting a litany of data around what has gone wrong. They haven't systematically examined the types of defects as indicators of quality deficiencies, nor implemented substantial measures to actually change how the trial process operates. When changes are instituted to a clinical trial process, they are made in an attempt to shrink timelines or to correct deficiencies from the last iteration of the study. Both of these attempts represent a one-factor-at-a-time (OFAT) approach, which is the least effective way to design a process. OFAT limits the ability to appraise several factors operating simultaneously in a multifactorial process: by adjusting only one factor at a time, synergies and interaction effects are never represented. This is exactly what the data have shown for years in the design and correction of clinical trial processes: these processes are not designed or improved systematically, and even root cause analysis efforts to improve their outcomes, whether quality, speed of delivery, defect reduction, or error prevention, are conducted point-by-point.
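To see why OFAT misleads, consider a small hypothetical sketch in Python. The cycle-time model below is invented for illustration; the point is the interaction term, which an OFAT search from a baseline never sees but a simple two-level full factorial design exposes.

```python
from itertools import product

# Hypothetical cycle-time model for a query-resolution step with two
# factors (illustrative only): x1 = automated edit checks (0/1),
# x2 = centralized monitoring (0/1). The interaction term means the
# two changes only pay off when applied TOGETHER.
def cycle_time(x1: int, x2: int) -> float:
    return 20.0 + 2.0 * x1 + 2.0 * x2 - 9.0 * x1 * x2  # days (lower is better)

# One-factor-at-a-time from the baseline (0, 0): each factor, changed
# alone, makes things slightly WORSE (+2 days), so OFAT concludes the
# baseline is best and stops at 20 days.
ofat_best = min(cycle_time(0, 0), cycle_time(1, 0), cycle_time(0, 1))

# A 2^2 full factorial runs all four combinations and exposes the
# interaction: both factors together give 20 + 2 + 2 - 9 = 15 days.
factorial_best = min(cycle_time(x1, x2) for x1, x2 in product([0, 1], repeat=2))

print(f"OFAT best:      {ofat_best} days")       # 20.0
print(f"Factorial best: {factorial_best} days")  # 15.0
```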

Some of the most effective clinical trial improvement paradigms implemented in the industry are ones I consciously designed using Quality by Design approaches. In fact, Quality by Design can be applied recursively, both 'about' and 'within' a clinical trial. 'About' the trial concerns the trial and study designs themselves: are they instituted in a way that supports the repeatability of the process? 'Within' the trial is an approach that seeks to optimize the outcomes of the studies based on what is known about the components contributing to error variance. Minimizing that variance means the data available for analysis more accurately reflect the actual study data; efficacy signals are therefore easier to appraise and respond to (favorably), and null results can be acted on far more expeditiously to cancel further study, which generally saves tens of millions of dollars. I have written in the past about ways in which QbD and risk assessment approaches can be adopted in clinical trials, depending on the particular area of focus or problem to be resolved (1).
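The value of designing down error variance can be put in numbers with a standard power approximation. The sketch below uses hypothetical figures (a 3-unit true treatment effect, 100 subjects per arm, a two-sample z-test); the only thing that changes between the two runs is the residual noise in the data.

```python
from math import sqrt
from statistics import NormalDist

Z = NormalDist()  # standard normal distribution

def power_two_sample(delta: float, sigma: float, n_per_arm: int,
                     alpha: float = 0.05) -> float:
    """Approximate power of a two-sample z-test to detect a true mean
    difference `delta` with common standard deviation `sigma`."""
    se = sigma * sqrt(2.0 / n_per_arm)       # standard error of the difference
    z_crit = Z.inv_cdf(1.0 - alpha / 2.0)    # two-sided critical value
    return 1.0 - Z.cdf(z_crit - delta / se)  # probability of detecting the effect

# Hypothetical numbers: a true effect of 3 units, 100 subjects per arm.
for sigma in (12.0, 8.0):  # noisy process vs. QbD-reduced noise
    print(f"sigma = {sigma:>4}: power = {power_two_sample(3.0, sigma, 100):.2f}")
# The noisy process (sigma = 12) yields power of about 0.42; cutting the
# noise to sigma = 8 raises power to about 0.76 for the SAME effect and
# sample size. The efficacy signal becomes far easier to appraise.
```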


Readers should consider which particular challenges they are experiencing with their current paradigms, and what those challenges are currently costing the business.

 

Author Bio

Ben Locwin, PhD, MBA, MS, MBB, is an author and speaker in the healthcare industry and consults with companies in pharma, biotech, emergent care, and academia. He has provided expertise for re-evaluating and improving the outcomes of clinical trials worldwide.

 

References
1. Locwin, B. (2014). QbD and risk assessment move into the clinical space. Contract Pharma. http://www.contractpharma.com/issues/2014-07-01/view_columns/qbd-and-risk-assessment-move-into-the-clinical-space/
2. European Medicines Agency. (2013). European Medicines Agency and US Food and Drug Administration release first conclusions of parallel assessment of quality-by-design applications. http://www.ema.europa.eu/ema/index.jsp?curl=pages/news_and_events/news/2013/08/news_detail_001876.jsp&mid=WC0b01ac058004d5c1