Evidence suggests that cluster randomised trials are often poorly designed and analysed. A striking recent example is a Spanish trial testing vitamin D as a treatment for Covid-19.

The trial’s positive results in early 2021 initially generated some excitement about vitamin D as a potential remedy against Covid-19 – including from UK Conservative MP David Davis – but the study was later withdrawn from publication after a backlash from scientists and researchers on Twitter.

Many holes were found in the reporting of the study but perhaps the biggest was in the randomisation.

Cluster randomisation

This particular trial went down the cluster randomisation route, a randomisation technique widely used but one that can have a number of serious pitfalls if not conducted correctly. Patients in clinical trials are usually randomised at the individual level, but cluster randomised trials (CRTs) use a different unit of randomisation than the individual participants themselves. Clusters might be social groups, communities, administrative areas or – in the case of the Spanish trial – hospital wards.
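The distinction can be illustrated with a minimal sketch. This is not the Spanish trial's actual procedure, just a hypothetical illustration of allocating whole wards rather than individual patients, mirroring the 5:3 ward split the article describes:

```python
import random

# Hypothetical sketch: in a cluster randomised trial, the unit of
# randomisation is the cluster (here, a hospital ward), not the patient.
wards = ["ward_A", "ward_B", "ward_C", "ward_D",
         "ward_E", "ward_F", "ward_G", "ward_H"]

random.seed(42)  # fixed seed so the illustration is reproducible
shuffled = random.sample(wards, len(wards))

# Mirror the trial's 5:3 split of wards between arms.
intervention_wards = shuffled[:5]
control_wards = shuffled[5:]

def allocation(ward):
    """Every patient admitted to a ward inherits that ward's allocation."""
    return "vitamin D" if ward in intervention_wards else "control"
```

The key point is that `allocation` depends only on the ward: two patients in the same ward can never end up in different arms, which is what makes the ward, not the patient, the effective unit of analysis.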

Cluster randomisation has gradually grown in prevalence since the 1980s, and has a number of advantages as a trial design. In healthcare, CRTs can be useful when assessing the impact of health policy and interventions at the health system level.

They can help reduce the possibility for ‘contamination’ where trial participants are expected to come into regular contact with each other, and can make large-scale studies – for example, around the efficacy of national screening programmes – less complex and costly to deliver. CRTs are less common when studying the physiological effects of specific interventions, partly because a range of issues (if they aren’t managed) can undermine confidence in the data, starting with sample size.


The Spanish trial reported a very large sample size of over 900. However, the researchers randomised only eight hospital wards (five of which saw patients dosed with vitamin D while three did not), meaning a much smaller effective sample size – somewhere between eight and 900.

“Your effective sample size depends on the extent to which people treated within particular wards have a different outcome from people treated in other wards, so it’s very hard to know exactly what the effective sample size was,” says Sandra Eldridge, professor of biostatistics at Queen Mary University of London and director of the Pragmatic Clinical Trials Unit at Barts and The London School of Medicine and Dentistry. “It certainly wasn’t as large as what they say in the study, meaning that all the results they gave are much too precise.

“Basically, it was an incorrect analysis of a cluster-randomised trial.”
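The inflation of precision Eldridge describes can be quantified with the standard design effect for cluster randomisation, DEFF = 1 + (m − 1) × ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient (the extent to which outcomes within a ward resemble each other). A minimal sketch, using the trial's rough shape (around 930 patients across eight wards) with illustrative, assumed ICC values:

```python
def effective_sample_size(n_total, n_clusters, icc):
    """Effective sample size under the standard design effect,
    DEFF = 1 + (m - 1) * ICC, where m is the average cluster size."""
    m = n_total / n_clusters   # average cluster size
    deff = 1 + (m - 1) * icc   # design effect: variance inflation factor
    return n_total / deff

# Roughly the Spanish trial's shape: ~930 patients across 8 wards.
# These ICC values are assumptions for illustration only.
for icc in (0.0, 0.05, 0.2):
    print(f"ICC={icc}: effective n ≈ {effective_sample_size(930, 8, icc):.0f}")
```

At ICC = 0 (outcomes within a ward no more alike than between wards) the effective sample size is the full 930; as the ICC rises it shrinks sharply, and at ICC = 1 it collapses to just eight, the number of wards – which is why the true effective sample size lies "somewhere between eight and 900" and cannot be pinned down without knowing the ICC.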

Risk of recruitment and identification bias in CRTs

Professor Eldridge was one of the experts to debunk the findings of the Spanish trial and has spent much of her career working to publicise the potential pitfalls of flawed CRTs.

Aside from having a deceptively small sample size, another issue that can arise from conducting CRTs is what Eldridge calls identification and recruitment bias.

“This means if you recruit the individuals into the trial after you have randomised the ward, you can get bias because if people know whether certain wards are intervention wards, and certain wards are control wards, then they can influence which wards the participants go into. So you’re not really comparing like with like, and you can’t really get around that problem.”

As Eldridge has noted, the Spanish vitamin D study may have assigned patients to their various ward clusters using staff who knew which wards would be giving the supplemental vitamin D treatment to patients, and which wouldn’t. It’s a pressing reminder of the importance of double blinding in clinical studies, where neither patient nor caregiver is aware of which treatment they’re getting, to mitigate the risk of unconscious bias.

Another consideration in a hospital setting is the varying performance of the wards and clinical staff that make up the trial clusters, and how this can affect trial outcomes.

“We all know that individuals can be very influential in many areas of life,” says Eldridge. “If in a ward you have somebody in charge who is very creative or very innovative, they might do some things just off their own back that does actually make a difference to patient outcome. So some wards may do that more than others, and then you get this variation in outcome between wards.”

Publicising the complexities of CRTs

With CRTs representing a growing and often useful trial design option, Eldridge says that more and more people are using them without the background knowledge needed to recognise the issues.

“In the UK, amongst certain groups, particularly primary healthcare researchers, the possible flaws in CRTs are well known, because the trials have been common in these communities for a long time,” says Eldridge. “In other countries or in other specialities and hospitals rather than in primary care like GP practices I think the issues are less well known.”

Eldridge has done a lot of work to get these issues written about in mainstream medical journals, rather than only in statistical journals, so that they reach other communities. “There are many papers in the BMJ and probably in the Lancet too – high quality, peer-reviewed journals – but if you don’t read those journals you won’t know about the issues,” says Eldridge.

“The only way that we are going to get around that is just to keep publicising these sorts of issues. Naively some of us thought that if we wrote about all this stuff earlier in the century, then people would catch on. But I suspect, seeing this trial and others, that’s not the case and we need to do more work about getting the message out there.”

Eldridge says that the knowledge she and her colleagues in the primary health community have about the problems with CRTs comes from attending conferences around these issues.

“That is the sort of thing that needs to happen in all sorts of different areas,” she says. But amid a global pandemic, with the flurry of clinical research activity to find new treatments and vaccines for Covid-19, there’s a higher risk that hurried trial sponsors might not identify these risks, or cut corners in trial design to produce faster results.

“In the area of Covid, where everybody wants to do research at the moment, in some cases you will likely get some of these flaws,” Eldridge says. “Because the people doing these trials have to do it quickly and they’ve not been to many conferences where they might have been exposed to these issues, or read any papers about them in their own particular clinical area.”

Despite their complexities, CRTs have valuable applications in many areas of clinical research. It is clear that the common issues with these trials, like those seen in the Spanish study, stem from a lack of understanding of how best to conduct them, and from barriers to information about the mistakes that have been made in the past.

If CRTs are to continue their growing role in clinical trials, more needs to be done to publicise the complications that can come with cluster randomisation to avoid repeated cases of flawed trial data, and the subsequent misleading results, which may do more harm than good.