In support of OpenTrials

OpenTrials could be the best thing to happen for accessible, usable results since FDAAA 2007 section 801 required the expansion of clinicaltrials.gov to include results.

It’s now just over one year since Ben Goldacre’s enthusiastic rollout of OpenTrials[1]. OpenTrials launched its beta[2] in October 2016. Targeted searching reveals placeholders for much information, such as protocols, clinical study reports and results. Where these items have already been located, the links through to them work neatly; for instance, a trial record links through to the EudraCT results page. What appeals to me, though, is the potential for linking to usable and detailed clinical trial results from this accessible platform. However, the way in which OpenTrials will enable access to results beyond those available through clinicaltrials.gov[3] and EudraCT[4] has not yet been defined.

Researchers need numbers, not just PDFs

In designing clinical trials, you need an understanding of the efficacy of the current standard of care and the emergent efficacy of novel treatments. You get that understanding through reading the papers. You also need to compare the results, typically using a forest plot[5]. That’s why getting to the numbers and descriptive information in tables is so vital. And that’s why so many of us spend time refining our population descriptions and transcribing results from PDFs to Excel.
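
To make that concrete, here is a minimal forest-plot sketch in Python with matplotlib. The trial names, effect estimates and confidence intervals are invented purely for illustration; in practice they would be the numbers you transcribe from each paper.

```python
# A minimal forest-plot sketch: effect estimates (e.g. hazard ratios)
# with 95% confidence intervals for a handful of trials.
# All numbers below are invented for illustration only.
import matplotlib.pyplot as plt

trials = ["Trial A", "Trial B", "Trial C", "Trial D"]
estimates = [0.78, 0.92, 0.65, 1.05]   # point estimates
ci_low = [0.62, 0.75, 0.48, 0.85]      # lower 95% CI bounds
ci_high = [0.98, 1.13, 0.88, 1.30]     # upper 95% CI bounds

# Convert the intervals into error-bar half-widths for matplotlib.
lower_err = [e - lo for e, lo in zip(estimates, ci_low)]
upper_err = [hi - e for e, hi in zip(estimates, ci_high)]

fig, ax = plt.subplots(figsize=(6, 3))
y = range(len(trials))
ax.errorbar(estimates, y, xerr=[lower_err, upper_err],
            fmt="o", color="black", capsize=3)
ax.axvline(1.0, linestyle="--", color="grey")  # line of no effect
ax.set_yticks(list(y))
ax.set_yticklabels(trials)
ax.set_xlabel("Hazard ratio (95% CI)")
ax.invert_yaxis()                              # first trial at the top
plt.tight_layout()
plt.show()
```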

Thanks to legislation in a number of countries, clinical trial results are more accessible than ever. But what’s available often falls short of what we need, particularly for precision medicine approaches. Here, the results of interest may sit deeper in a paper than the top-line results in the trial record. And though results can be scraped from, for example, clinicaltrials.gov or accessed through an API, these routes don’t always yield the data you need for your work. Even the recent FDAAA final rule does not provide for the publication of results that you or I could easily pop into an analysis without some data wrangling.
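
As a sketch of what that wrangling looks like, here is a minimal example that pulls a few study records from the ClinicalTrials.gov API and flattens a couple of fields into rows. The endpoint and JSON field names are assumptions based on the v2 API and may need adjusting; even then, you still have to reshape the response before it resembles something you can analyse.

```python
# Sketch: fetch a few study records from ClinicalTrials.gov and flatten
# them into simple rows. The endpoint and field names are assumptions
# based on the v2 API and may need adjusting.
import csv
import requests

resp = requests.get(
    "https://clinicaltrials.gov/api/v2/studies",
    params={"query.term": "breast cancer", "pageSize": 5},
    timeout=30,
)
resp.raise_for_status()

rows = []
for study in resp.json().get("studies", []):
    protocol = study.get("protocolSection", {})
    ident = protocol.get("identificationModule", {})
    status = protocol.get("statusModule", {})
    rows.append({
        "nct_id": ident.get("nctId", ""),
        "title": ident.get("briefTitle", ""),
        "overall_status": status.get("overallStatus", ""),
    })

# Even this small flattening step is data wrangling before any analysis.
with open("studies.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["nct_id", "title", "overall_status"])
    writer.writeheader()
    writer.writerows(rows)
```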

What could help is an open curation platform for clinical trial results. At the moment, researchers curate their results individually, often for good reason: each project is unique. But the source publications are often the same. With the right controls, doing this work in an open environment with common standards has real potential, and providing such an environment should encourage more analysis and less repetitious transcription of results.
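
Purely as a thought experiment, here is one shape a shared record for a curated result might take, sketched as a Python dataclass. The field names and values are my own assumptions, not anything OpenTrials has defined.

```python
# A hypothetical minimal schema for a single curated trial result.
# Field names and example values are illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class CuratedResult:
    trial_id: str                 # e.g. an NCT or EudraCT number
    source: str                   # DOI or URL of the publication or registry record
    population: str               # controlled description, e.g. "ER+/PR+/HER2-"
    arm: str                      # treatment arm as reported
    outcome: str                  # e.g. "overall survival"
    effect_measure: str           # e.g. "hazard ratio"
    estimate: float
    ci_low: Optional[float] = None
    ci_high: Optional[float] = None
    notes: str = ""               # free text for caveats the curator spotted

# One record, transcribed once, reusable by anyone working to the same schema.
example = CuratedResult(
    trial_id="NCT00000000",       # placeholder identifier
    source="https://doi.org/10.xxxx/example",  # placeholder DOI
    population="ER+/PR+/HER2-",
    arm="novel treatment",
    outcome="progression-free survival",
    effect_measure="hazard ratio",
    estimate=0.72, ci_low=0.55, ci_high=0.94,
)
```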

Why we need OpenTrials

While the OpenTrials agenda is broad (“…all publicly accessible data and documents, on all trials conducted, on all medicines and other treatments, globally”), it does present a place to concentrate efforts on increasing the accessibility of usable results. It draws together the various sources of clinical trial information and in time could offer ways to supplement what’s already there.

Curating results from publications

To curate and share results from publications you can use a service like COVIDENCE, though most of us turn to ‘the cockroach of IT’: Microsoft Excel. It works. But even people in the same group will sometimes curate the same results into different Excel spreadsheets, and this happens across industry and academia. Time consuming, tedious, and redundant.

Even where your culture and infrastructure support storage and reuse, the systems are often built for specific purposes and can have limited interoperability. Metadata standards and quality differ; data standards differ. The result is small, temporary collections of data that serve their purpose and then either disappear or add to the noise when you search for a definitive source. Developing and maintaining these systems takes effort. With an open approach we could spend that effort on analysis and curation instead.

There are other options

There are several options to help researchers get the information they need in a form they can use. Information organizations like Clarivate Analytics (formerly Thomson Reuters IP & Life Sciences), Informa, and Elsevier will support this kind of curation. Service providers like EPAM, Accenture, and Cognizant can help. Specialists like Dr Evidence provide domain expertise and value-added services too. Depending on your needs, these could be just what you want.

If you DIY, you feel the frustration

Even if you do have access to professional help, imagine you have just a few trials of interest. You can find yourself on the horns of a dilemma: do you write a specification and get the work done by someone else while you put your effort into another task, or do you knuckle down and curate the results yourself? If you take the latter approach, you can feel a bit fed up knowing that someone else has likely curated at least some of the same results. Similarly, you might feel disappointed that once you’ve derived value from your work it will just sit in a spreadsheet until the bits decay.

What I’m looking forward to from OpenTrials

I’m looking forward to one route to information on clinical trials, and to the standardization of that information. Consistency of vocabulary for descriptions helps searching, presentation, and interpretation. The variety possible even in descriptions of clinical trial phases can be impressive: Phase 2, Phase II, Phase 2a, Phase 2b, Phase 2a-b… A linked data approach means OpenTrials may at times be constrained by ontological controls, but I think this is a good thing. Those constraints should help build common standards for representing study and population information.

Vocabulary controls are valuable even in small teams, where invention and typographical errors can quickly build divergent and difficult-to-clean data. These kinds of controls are helpful in commercial clinical trial information systems, and they’re essential in the in-depth curation of clinical trial results in refined subgroups. Take the variety of descriptions for one population: is it ER+, PR+, her2+ or ER+, PR+, erbB2+ or perhaps ER+/PR+/her2+? Definitive, clear standards help reduce confusion and wasted duplication, though discussion and agreement on standards can be taxing even for the most committed people.
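
To illustrate the kind of control I mean, here is a small sketch that normalises free-text phase labels and biomarker descriptions to a single controlled form. The mappings and controlled values are invented for the example, not an OpenTrials vocabulary.

```python
# Sketch: map free-text variants onto controlled terms before they can
# diverge. The controlled values here are illustrative assumptions only.
import re

PHASE_MAP = {
    "phase 2": "Phase 2", "phase ii": "Phase 2",
    "phase 2a": "Phase 2a", "phase 2b": "Phase 2b",
    "phase 2a-b": "Phase 2a/2b", "phase 2a/2b": "Phase 2a/2b",
}

MARKER_MAP = {"her2": "HER2", "erbb2": "HER2", "er": "ER", "pr": "PR"}

def normalise_phase(text: str) -> str:
    """Return the controlled phase label, or flag the value for review."""
    key = " ".join(text.strip().lower().split())
    return PHASE_MAP.get(key, f"UNRECOGNISED({text})")

def normalise_population(text: str) -> str:
    """Rewrite e.g. 'ER+, PR+, erbB2+' or 'ER+/PR+/her2+' as 'ER+/PR+/HER2+'."""
    parts = re.split(r"[,/]", text)
    out = []
    for part in parts:
        m = re.match(r"\s*([A-Za-z0-9]+)\s*([+-])\s*$", part)
        if not m:
            return f"UNRECOGNISED({text})"
        name, sign = m.groups()
        out.append(MARKER_MAP.get(name.lower(), name.upper()) + sign)
    return "/".join(out)

print(normalise_phase("Phase II"))                 # Phase 2
print(normalise_population("ER+, PR+, erbB2+"))    # ER+/PR+/HER2+
```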

It will be encouraging to see the potentially unifying influence that OpenTrials.net brings to the current mixture of options for handling clinical trial information. And I’m looking forward to the opportunities for diverse groups to apply and extend its accessible and valuable resources.

 

[3] https://clinicaltrials.gov/

[4] https://eudract.ema.europa.eu/