Course, Meeting and Conference on Cluster Clinical Trials
6th-7th October 2015
Royal Statistical Society Medical Section
Errol St, London EC1Y 8LX
6th October 2015
10.00-13.00 Course: Introduction to the design and analysis of Cluster Randomised Trials.
MJ Campbell and SJ Walters.
This course is intended as an introduction to the design of cluster trials, including how large they should be, and to their analysis. It can be attended as a standalone course or as an introduction to the afternoon meeting on cluster trials.
Cost: Concessionary Fellow £40, RSS Fellow £45, Non-fellows £75 (Includes lunch afterwards)
6th October 2015
14.00-17.00 Medical Section Meeting: Cluster trials (see end of email for abstracts)
Design and sample size for Stepped Wedge Trials
Andrew Copas and Rumana Omar, University College London
Establishing an upper bound on the optimal cluster size in cluster randomised trials with a limited number of clusters
Karla Hemming, University of Birmingham
The Implications of Differential Clustering for the Analysis of Cluster Randomised Trials
Chris Roberts University of Manchester
What use are pilot studies for cluster randomised trials and how big should they be?
Sandra Eldridge, Queen Mary College, University of London
Cost: RSS Fellows and concessions free, Non-fellows £25
7th October 2015
9.30 to 12.30 Conference: Current developments in cluster randomised trials and stepped wedge designs
This meeting will consist of a series of short talks and discussion. If you would like to give one of the talks, please send an abstract of up to 300 words [including title, authors, and contact details and affiliation for the presenting author] to [log in to unmask] by 11th September 2015.
Cost: All £10, payable on arrival
Further details available from Mike Campbell m.j.campbell@sheffield.ac.uk
Booking: www.rss.org.uk
Abstracts for 14.00-17.00 6th October 2015
Medical Section Meeting: Cluster trials
Design and sample size for Stepped Wedge Trials (Andrew Copas and Rumana Omar, University College London)
The Stepped Wedge Trial (SWT), a variant of the cluster randomised trial (CRT), presents additional complications with regard to design and analysis which should be addressed. There is limited guidance on the design of SWTs. The current methodological literature focuses mainly on trials with cross-sectional data collection at discrete times, yet many SWTs do not follow this design. From a review of recently published SWTs we identified three main designs: those with a closed cohort, an open cohort, and a continuous-recruitment, short-exposure design. In the first two designs, many individuals experience both the control and intervention conditions. In the final design, individuals are recruited in continuous time as they become eligible, experience either the control or the intervention condition but not both, and then provide an outcome measurement at follow-up. We also reviewed analytical methods for sample size calculation for SWTs and how sample size has been calculated for SWTs in practice.
SWT designs and methods of sample size calculation should be reported more clearly. Researchers should consider the use of stratified and/or restricted randomisation. SWTs should generally not commit resources to collecting outcome data from individuals exposed long after the end of the rollout period. Though substantial carry-over effects are uncommon in SWTs, researchers should consider their possibility before conducting a trial with closed or open cohorts. Simulation-based methods for sample size calculation offer more flexibility by relaxing assumptions and by incorporating the features of the SWT design more appropriately.
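The abstract's closing point about simulation-based sample size calculation can be sketched in a few lines. Everything below is an illustrative assumption, not taken from the talk: a cross-sectional SWT with 6 clusters crossing over to the intervention in pairs over 4 periods, analysed on cluster-period means with cluster and period fixed effects, and arbitrary parameter values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative stepped wedge: 6 clusters, 4 periods, clusters cross to
# the intervention in pairs after periods 1, 2 and 3. All parameter
# values below are assumptions for this sketch.
C, T, m = 6, 4, 20        # clusters, periods, subjects per cluster-period
theta = 0.5               # true intervention effect (outcome SD units)
tau, sigma = 0.25, 1.0    # between-cluster SD, individual-level SD

step = np.repeat([1, 2, 3], 2)                               # crossover period
X = (np.arange(T)[None, :] >= step[:, None]).astype(float)   # C x T treatment

# Fixed-effect design matrix: treatment + cluster dummies + period dummies
# (first period as reference, to keep the matrix full rank).
D = np.column_stack([
    X.ravel(),
    np.kron(np.eye(C), np.ones((T, 1))),
    np.kron(np.ones((C, 1)), np.eye(T))[:, 1:],
])

def significant():
    """Simulate one trial; return True if the effect is detected at 5%."""
    b = rng.normal(0.0, tau, C)                       # random cluster effects
    period = np.linspace(0.0, 0.2, T)                 # secular trend
    e = rng.normal(0.0, sigma / np.sqrt(m), (C, T))   # noise in cell means
    y = (b[:, None] + period[None, :] + theta * X + e).ravel()
    beta, rss, *_ = np.linalg.lstsq(D, y, rcond=None)
    s2 = rss[0] / (C * T - D.shape[1])                # residual variance
    se = np.sqrt(s2 * np.linalg.inv(D.T @ D)[0, 0])   # SE of treatment effect
    return abs(beta[0] / se) > 1.96

power = np.mean([significant() for _ in range(500)])
print(f"estimated power: {power:.2f}")
```

Because cluster effects are absorbed by the fixed effects, the estimator here uses only within-cluster contrasts; a real SWT sample size calculation would also need to reflect the chosen analysis model (e.g. random cluster effects) and the cohort structure discussed above.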
Establishing an upper bound on the optimal cluster size in cluster randomised trials with a limited number of clusters (Karla Hemming, University of Birmingham)
Cluster randomised trials (CRTs) are frequently used in health service evaluation. It is well known that increasing cluster sizes leads to diminishing returns in terms of increases in power. That is, at some point, increasing the cluster size will result in very limited increases in study power, to the extent that the recruitment and enrolment of additional participants will be futile at best, and possibly unethical.
We demonstrate how, in some studies, large cluster sizes have resulted in trials in which many participants do not make a valuable contribution to the power of the study. We make the case that current recommendations for optimal cluster sizes are only applicable when the number of clusters to be included is flexible. When the number of clusters is fixed, the sample size question of interest is the number of participants needed in each cluster. As yet there is no recommendation for the optimal maximum cluster size when the number of clusters is fixed.
Our starting point for determining the optimal maximum cluster size is the requirement that each participant makes a worthwhile contribution to the power. We therefore explore the incremental power provided when each cluster is increased in size by one. Using the limiting value of the precision achievable in a trial with a fixed number of clusters, we discuss alternative potential optimal maximum cluster sizes.
When designing a CRT, for ethical and funding reasons, it is important to ensure that clusters are not so large that many of the participants do not contribute towards the power of the study. Trials are currently being funded and implemented in which not all of the participants are making a worthwhile contribution. There is therefore a need for practical recommendations for what the optimal maximum cluster size should be in a trial in which the number of clusters is fixed.
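The diminishing-returns argument can be made concrete with the standard design effect 1 + (m - 1)ρ: with k clusters of size m and intra-cluster correlation ρ, the effective sample size k·m / (1 + (m - 1)ρ) is bounded above by k/ρ however large m grows. The values of k and ρ below are arbitrary, chosen only to illustrate the plateau; they are not from the talk.

```python
# Effective sample size under the standard design effect 1 + (m - 1) * rho.
# k (number of clusters) and rho (ICC) are illustrative values only.
k, rho = 10, 0.05

def effective_n(m):
    """Effective sample size of k clusters each of size m."""
    return k * m / (1 + (m - 1) * rho)

limit = k / rho  # upper bound on effective_n as m -> infinity (here 200)

for m in (10, 50, 100, 500):
    gain = effective_n(m + 1) - effective_n(m)  # value of one more subject
    print(f"m={m:4d}  effective n={effective_n(m):6.1f}  gain={gain:.3f}")
```

With these values, clusters of size 100 already deliver about 168 of the at-most-200 effective participants, and each additional subject contributes ever less, which is the sense in which recruitment beyond some cluster size becomes futile.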
The Implications of Differential Clustering for the Analysis of Cluster Randomised Trials (Chris Roberts, University of Manchester)
Statistical analyses of cluster randomised trials generally assume that the intra-cluster correlation coefficient (ICC) is the same in all arms. This assumption is justified by randomisation when the clustering effect is due to the baseline characteristics of the subjects in each cluster. If, instead, the magnitude of the clustering effect is caused by the intervention, the ICC may differ between trial arms. The robustness of methods of analysis of cluster randomised trials to differential clustering will be considered.
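To see why a common-ICC assumption can matter, consider the variance of a difference in arm means when each arm has its own ICC, using the standard variance of a mean of k clusters of size m, σ²(1 + (m - 1)ρ)/(km). All numbers below are invented for illustration. With equal numbers of equally sized clusters per arm, an averaged ICC happens to give the correct total variance (it is linear in ρ), so the sketch uses unequal arms, where pooling understates the true standard error.

```python
import math

def arm_var(sigma2, rho, k, m):
    """Variance of the mean of k clusters of size m with ICC rho."""
    return sigma2 * (1 + (m - 1) * rho) / (k * m)

# Illustrative values only: unequal arms, intervention induces clustering.
sigma2, m = 1.0, 30
k_ctrl, rho_ctrl = 20, 0.01   # control arm: weak clustering
k_trt, rho_trt = 10, 0.10     # intervention arm: strong clustering

se_true = math.sqrt(arm_var(sigma2, rho_ctrl, k_ctrl, m)
                    + arm_var(sigma2, rho_trt, k_trt, m))

rho_pooled = (rho_ctrl + rho_trt) / 2             # naive common ICC
se_pooled = math.sqrt(arm_var(sigma2, rho_pooled, k_ctrl, m)
                      + arm_var(sigma2, rho_pooled, k_trt, m))

print(f"true SE of difference:  {se_true:.4f}")   # about 0.123
print(f"common-ICC SE estimate: {se_pooled:.4f}") # about 0.114
```

Here the common-ICC analysis is anticonservative, which is one route by which differential clustering can distort inference; the talk considers the robustness of standard analysis methods to exactly this kind of departure.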
What use are pilot studies for cluster randomised trials and how big should they be? (Sandra Eldridge, Queen Mary College, University of London)
It is relatively common for cluster randomised trials to be preceded by pilot studies. These studies range in size and can have a variety of objectives. This presentation will discuss some of the key objectives and how large a pilot study needs to be to address them adequately, including recent work exploring the usefulness of pilot studies for cluster randomised trials in assessing key parameters for the future trial's sample size calculation and in estimating rates such as recruitment rates.