Recently the New York Times reported that the FDA is investigating whether the recall of the blood testing device used in the clinical trials of the wildly successful anticlotting drug Xarelto casts doubt on the validity of the data used as the basis for approval. On the market since 2011, Xarelto prescriptions have risen to roughly 4 million per year in the crowded atrial fibrillation space that includes Eliquis and Pradaxa, among other popular blood thinners. The direct-to-consumer appeal for all these products focuses on patient "liberty": freedom from the hassles of regular blood testing and the diet restrictions associated with the use of warfarin.

In 2014, three years after Xarelto’s approval, the diagnostic device INRatio®, an FDA-cleared device sold by Alere and used in the clinical trial to monitor warfarin usage, was recalled because the device understated patients’ risk of bleeding. The implication of the recall is that study subjects in the warfarin group may have received doses that were too high, leading to bleeding episodes and potentially producing a biased clinical trial result for Xarelto. The study’s sponsors, Johnson & Johnson (J&J) and Bayer, have produced an analysis of the trial warfarin monitoring data for the New England Journal of Medicine (NEJM), claiming that the results were valid regardless of whether the device functioned properly during the trial.

In another twist to this story, the New York Times also reported that a document surfaced suggesting that J&J and Bayer withheld important reference lab data from the NEJM, which may or may not corroborate the primary trial readings, but that is another story for another time.

How this all plays out will be interesting to watch, not least because the study director for Xarelto’s pivotal study was Dr. Robert M. Califf, recently appointed the FDA’s Commissioner of Food and Drugs. But it also brings to mind how very dependent we in the clinical research world are upon the investigators, vendors, suppliers, laboratories and other parties who conduct our clinical trials and generate the data upon which so many decisions – and sometimes lives – rely. How do we assure ourselves that all these components and providers can do what they say and produce data with sufficient quality to justify our conclusions?

The process of vendor qualification takes many different forms, often correlating strongly with the size and robustness of the clinical trial operations team and its counterparts in the quality department. Larger pharma companies with vast resources presumably have quality systems, analysts, and auditors in place who can pre-qualify preferred vendors and keep those internal certifications up to date. Smaller organizations – and let’s face it, that is the space many of us occupy – have fewer options, and it may not be a complete surprise to learn that sometimes the qualification steps are postponed "till the next study" or "when we get to phase 3".

If we can all accept however that "later" is not an adequate vendor qualification strategy, how should we organize our limited resources to ensure that our vendors – and our data – can pass muster?

Vendor qualification is simply the process by which we evaluate whether a service or materials provider can produce the services (or goods) to the standards we require. Jonathan M. Lewis and Nancy Cafmeyer of Advanced Biomedical Consulting (ABC) use the analogy of a sports team filling out its roster. The team does not sign up athletes based only on their paper statistics – points scored or blocked, run times, standing jump height, etc. – but also on how the team doctor evaluates the player’s current physical status, how the player gets along with teammates, and how much money the team has to spend on given positions.

Lewis and Cafmeyer employ a very practical methodology they refer to as "Q.U.E.S.T.": question, understand, evaluate, site audit, and track. First we find out what we need the vendor to supply – services, data, or materials – then we learn how various vendors can meet those requirements. The quality of our request for proposal, or RFP, becomes very important here. If we do not ask the right questions of the vendors during these early interactions, we could be surprised by the outcomes later on. Another element of this understanding phase is a questionnaire that collects the qualifications and experience of both the facility and its personnel. In our sports analogy we saw that our team budget is an important criterion, because there is no point in seeking out providers whose services we cannot afford.

Next comes the evaluation phase, which can take a variety of forms: reference checks and document review, such as batch records, responses to audit findings, or SOPs. This is followed by the site audit, performed by an auditor qualified in the type of facility under review; the auditor may need to be an independent third party, for example when the audit target is an IRB or another entity whose records contain other sponsors’ confidential information. Site audits are not always required: depending on the services, records review and other remote activities may be sufficient.

Finally, with all these data in hand, the clinical team can choose qualified vendors and then track their performance continually throughout the project, requalifying them as conditions or needs change and addressing issues as they surface to keep them from growing into larger problems that could undermine the validity of the study.

David Webber describes a lifecycle management approach beginning with the development of an outsourcing strategy, defined by the sponsor’s own strengths and gaps, resulting in a list of qualifications the vendors must have. That leads to a period of identification, qualification (by various means), and negotiation. Once the vendor is qualified and the contract signed, the relationship is sustained by an adequate oversight process that encourages continuous evaluation and improvement, followed by a lessons-learned exercise at the close of the project. Many organizations skip this last step, often simply because it involves looking back and we are always looking forward. But I suggest this is an important exercise for both parties, even if the relationship does not continue, as it can deepen the sponsor’s understanding of the limits and possibilities of its vendors and help the vendor continually improve its processes and product.

Although neither Lewis and Cafmeyer nor Webber specifically mention it, I want to stress the importance of the relationship between sponsor and vendor, or more specifically between the individuals representing those entities. The relationships forged between these people can be a great resource for problem solving and continuous improvement. Of course relationships should not be the sole decision criterion – we need objective questioning and evaluation – but there is a reason why we humans often seek to work with former colleagues again: we know what we are getting and we trust the other person. That mutual trust amongst professionals is an intangible quality that defies checklists, but nonetheless has the power to help produce high quality clinical trial data.

Clinical trials are a costly and time-consuming endeavor, with high stakes for shareholders, employees and ultimately patients waiting for medicines. Often we have limited opportunities to demonstrate that our medicinal products are safe and effective, and worthy of their place in the pharmacy. As sponsors, we must take all reasonable precautions to ensure that the vendors we choose to help us produce those products are the best that they can be.