Building on the CDISC eSource Data Interchange Document, the eClinical Forum (eCF, http://eclinicalforum.org/) has been working since 2006 to aggregate the functional and quality criteria applicable to electronic source systems, drawing on the relevant regulatory and international standards references. After careful review, the Forum compiled and disseminated these User Requirements in the form of:
- Functional Profiles (approved by ANSI, HL7 and the EuroRec Institute in Europe)
- Detailed and acclaimed Practical Considerations white papers
In addition to the eCF's historical members (pharmaceutical sponsors and CROs), this work benefited from direct and decisive input from major Electronic Health Record system (EHR-S) vendors, academia, regulatory agencies and standards development organisations (CDISC, HL7, HITSP, ISO TC 215, CEN TC 251).
The cross-functional, multidisciplinary team and its associated deliverables are known as the Electronic Health Records for Clinical Research (EHRCR) project (http://www.eclinicalforum.org/ehrcrproject/en-us/home.aspx). This work became a key reference for any researcher or project manager wishing to understand the challenges of using electronic systems to store research source data; it directly influenced regulatory thinking on managing data-acquisition risks and is extensively referenced as such in the Society for Clinical Data Management eSource White Paper, released in early June 2014.
It all sprang from one foundational requirement:
Good Clinical Practice requires that clinical trial data can be verified against its source
2010 guidance from the EMA and FDA holds sponsors responsible for the GCP compliance of any system used to originate clinical research source data. That includes hospital systems, even though those systems are not governed by clinical research regulations and applicable legislation. This places a new type of burden on both sponsors and investigators: under this interpretation, they are responsible for evaluating every EHR system containing source data for a clinical trial, whether or not the system was conceived for that purpose. At the same time, the minimum legal requirements for the quality, validation and security of healthcare systems are far from uniform across the ICH geographies; regional or district requirements can even differ within the same state or country.
The FDA in particular clearly states that systems not under the sole control of the Sponsor or the Investigator are not subject to 21 CFR Part 11 requirements. This apparent "contradiction" is not meant to deter use of electronic systems; rather, it underscores that the purpose of evaluating electronic source systems is not to impose the strict requirements of GxP, which in many cases go beyond the original system requirements and are therefore currently unfeasible at large scale. In the US, those systems are instead governed by a different set of laws (under Title 45 of the Code of Federal Regulations – Part 170 in particular). All major regulatory agencies have confirmed that Risk Assessments bearing on the protection of patients' rights and safety must focus on corroborating the reliability of the source clinical data and the reproducibility of their "chain of custody". Sponsors, alongside the regulators themselves, therefore need to prioritize those sites presenting the most worrisome concurrence of system and process shortcomings, in order to assign the right level of monitoring, auditing and, why not, inspections. This paradigm shift should render obsolete the need for exhaustive "forensic" visual verifications of paper notes.
Those who have been vocal in preparing and disseminating the Agencies' current thinking on Risk-Based Monitoring have also defined the eSource expectations.
The challenge, of course, is how to create consistent and comparable risk indicators for systems that are so disparate and multiform and, what's more, neither consistently regulated nor subject to the same technical standards across ICH countries.
Funnily enough, what has generated anguish among technical experts on semantic interoperability between electronic systems for the past 25+ years provided a solution when it comes to risk characterization:
While the technical interoperability standard references are vast and not always consistent between the US and Europe (thus rendering direct data interchange between systems currently impossible to implement at large scale), mapping the EHR-system functional/quality criteria to clinical research requirements was both feasible and, in fact, already compiled and made publicly available by the EHRCR team.
Looking at the User Requirements created for what was called "Tier-0" interoperability, the EHRCR team realized that they were simple, granular and consistent enough to be transcribed into a short questionnaire that sites and system vendors could reasonably be expected to answer today.
What's more, these questions capture the common denominator between the regulations and standards that apply to electronic healthcare and clinical research systems, without expecting or imposing first-hand verification by the monitor, auditor or inspector against every single criterion. Study nurses, CRAs and GCP auditors are usually not informatics engineers anyway; but precisely because the same questions are consistently asked, the "scoring" obtained by aggregating the information across a specific protocol or program site population allows for an objective comparison and prioritization of underperformers or otherwise suspicious sites.
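To make the idea of aggregated, comparable scoring concrete, here is a minimal sketch of ranking sites by weighted questionnaire answers. The question identifiers, weights and scoring scheme are purely illustrative assumptions, not part of any actual eSRA specification.

```python
# Hypothetical sketch: turning consistent yes/no questionnaire answers into
# comparable per-site risk scores. Criteria and weights are illustrative only.
WEIGHTS = {"audit_trail": 3, "access_controls": 2, "backup": 2, "validation": 1}

def site_risk_score(answers):
    """Sum the weights of all criteria a site fails to meet."""
    return sum(w for q, w in WEIGHTS.items() if not answers.get(q, False))

sites = {
    "Site A": {"audit_trail": True, "access_controls": True, "backup": True, "validation": True},
    "Site B": {"audit_trail": False, "access_controls": True, "backup": False, "validation": True},
    "Site C": {"audit_trail": False, "access_controls": False, "backup": True, "validation": False},
}

# Rank sites from highest to lowest risk to prioritise monitoring effort.
ranked = sorted(sites, key=lambda s: site_risk_score(sites[s]), reverse=True)
print(ranked)  # ['Site C', 'Site B', 'Site A']
```

Because every site answers the same questions, the resulting scores are directly comparable across a protocol or program, which is the whole point of a common assessment.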
It then became self-evident that the advantages of sharing this information among research Sponsors and CROs (in line with applicable data protection regulations, of course) far outweigh any apprehension related to competitive intelligence: an understaffed, unqualified or fraudulent site often impacts the development programs of more than one sponsor. Sharing the same data with the regulatory agencies in the interest of transparency, and even considering joint audits/inspections, has been hailed as a step in the right direction.
However, a common, comparable eSource quality assessment would not merely be a "policing" tool for Sponsors and Regulators. Beyond the obvious benefit of not having to fill in multiple such initiation assessments for every new study, clinical program or sponsor, sites themselves will be able to more easily obtain (through vendor pre-fills and healthcare certification schemes) the answers to difficult technical questions, as well as gain insight into how they perform compared with other research centres.
Ultimately, regulatory compliance when it comes to secondary uses of clinical data will definitely be a significant competitive advantage for EHR-system vendors; in particular because this type of objective quality benchmarking comes without any aggressive commercial connotations.
What next? An online assessment and a database for QA
The Investigator eSource Readiness Assessment Tool (eSRA) is currently being digitized and will be freely available through a website, hopefully within 2014 or early 2015. By design it aims to identify the parties involved in the quality risk assessment of systems and processes associated with source data collection at the site, and it meets the objective of reproducibility of the chain of custody. By default it can become essential to Site Initiation and provide the cornerstone for an overall Site Risk Assessment. The Tool attracted positive attention and remarks from the FDA eTeam and the EMA GCP Inspectors Working Group, to whom it was presented last year. Both groups of agency representatives acknowledged its value but also stressed their expectation that the eCF actively work towards operationalising its finalisation, availability and use, in a manner that minimises any additional administrative overhead for Investigators and Site Personnel.
In addition to the functional criteria, organized as granular statements, the eSRA Tool will be accompanied by role-specific handbooks (Sponsor/CRO/Monitor, Site, EHR vendor) including:
- Usage Instructions
- Comprehensive Regulatory and Technical References (regardless of whether they are binding or guidance-only)
- Glossary of terms
- Definitions of concepts associated with eSource Appropriateness
The questions associated with the system characteristics and version, as well as with implementation and customization made at the site's request, allow for fully comparable risk evaluations grouped into the following five compliance groups:
- Identify records which will be a source for Clinical Research protocol(s)
- Establish the existence of an electronic audit trail
- Describe Access Controls (data privacy, security, documentation of consents and authorisations)
- Copy/Backup, disaster recovery, data integrity
- System development & maintenance – triggers for reassessment.
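The five groups above can be pictured as a simple data structure holding granular criteria. The example below is an illustrative sketch only; the actual question wording and grouping are defined by the eCF, and the sample questions here are hypothetical.

```python
# Illustrative sketch: the five compliance groups as a mapping from group to
# granular criteria. Group keys and question texts are hypothetical examples.
COMPLIANCE_GROUPS = {
    "source_identification": [
        "Are the records serving as source for the protocol(s) identified?",
    ],
    "audit_trail": [
        "Does the system maintain a time-stamped electronic audit trail?",
    ],
    "access_controls": [
        "Are user accounts individual and role-based?",
        "Are consents and authorisations documented?",
    ],
    "copy_backup_integrity": [
        "Are backup, disaster-recovery and data-integrity procedures in place?",
    ],
    "development_maintenance": [
        "Do upgrades or customisations trigger a reassessment?",
    ],
}

# A completed assessment maps each question to an answer, keeping responses
# directly comparable across sites and system versions.
total_questions = sum(len(qs) for qs in COMPLIANCE_GROUPS.values())
print(total_questions)  # 6
```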
The tool also provides "tips" (suggestions and context) for acceptable workarounds in case of system or process shortcomings. It therefore constitutes a convenient baseline for Corrective and/or Preventive Action (CAPA) wherever appropriate.
The timeliness of adjustments to tools and associated site processes will not only help determine eSource quality; it will contribute to the emergence of a collective learning curve for sites and those who monitor them, with major benefits starting with recruitment and initiation and extending to the retention of higher-quality sites.
The creation of consistent, corroborated and comparable data will also yield a non-negligible by-product: a dataset, or registry, of eSource quality. As this dataset gains in size and representativeness, it is highly likely to become the first stop for all quality "analysts", be they pharmaceutical or academic research sponsors, government/regulatory investigators, drug payers or patient advocacy groups.
In all cases, the eCF is committed to actively seeking regulatory and industry perspectives while maintaining and subsequently releasing each version of the common assessment tool. Most importantly, this effort discreetly but steadily presages a realistic and non-invasive transition to an integrated eHealth/eClinical environment: one which we can start to trust.