Last summer I applied for an NIHR Research Methods fellowship. Earlier this week the results came out and they liked my proposal, which is of course great news.
The idea of this project is to critically evaluate the stepped wedge design (SWD) in clinical trials. This is a relatively new design, effectively an extension of the cross-over design, in which a given intervention is rolled out across clusters that switch treatment unidirectionally at different time points. The first time point usually coincides with a baseline measurement in which all the clusters are assigned to the control arm. Subsequently, clusters begin to receive the active treatment but, unlike in a standard cross-over trial, once the intervention is given it is not removed. The time at which each cluster starts the intervention is randomly determined.
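To fix ideas, the allocation can be pictured as a clusters-by-time 0/1 matrix, with an all-control baseline column and one cluster crossing over at each subsequent step. Here is a minimal sketch in Python (purely illustrative; the function name and setup are my own, and assume the classic layout with one switch per cluster):

```python
import numpy as np

def sw_design_matrix(n_clusters, seed=None):
    """Stepped wedge allocation: rows = clusters, columns = time points;
    0 = control, 1 = intervention. Assumes an all-control baseline period
    followed by one cluster crossing over at each step."""
    rng = np.random.default_rng(seed)
    T = n_clusters + 1                    # baseline + one period per switch
    X = np.zeros((n_clusters, T), dtype=int)
    order = rng.permutation(n_clusters)   # randomise the switch times
    for rank, c in enumerate(order):
        X[c, rank + 1:] = 1               # once switched on, never removed
    return X

print(sw_design_matrix(4, seed=1))
```

Whatever the random ordering, every row is non-decreasing (the intervention is never withdrawn), the first column is all control and the last all intervention.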
On the one hand, this typically increases the duration of the study (because several time points are usually needed to reach a fixed level of statistical power); on the other hand, the SWD has shown the potential to be more efficient than standard cluster randomised (CR) designs.
But of course, much as for standard cross-over designs, the actual gains depend on the specific setting and parameter specifications (eg the number of clusters and time points; the cluster sizes; the level of correlation between measurements within the same cluster and across time). So we'll try and investigate these issues and see under which conditions the SWD works better than other strategies. As part of the proposed outputs of this research, we have indicated that we'll produce a toolbox (in R) to perform sample size calculations and guide the analysis of the actual data.
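A natural starting point for the sample size side is the variance formula of Hussey & Hughes (2007) for cross-sectional stepped wedge trials, which ties power directly to the parameters just listed. The sketch below (in Python for illustration only; the actual toolbox is planned in R, and the function name and arguments are my own) shows how power depends on the allocation matrix, the between-cluster variance and the cluster-period size:

```python
import numpy as np
from statistics import NormalDist

def hh_power(X, effect, sigma2_e, tau2, n_per_period, alpha=0.05):
    """Approximate power for a cross-sectional stepped wedge trial,
    following the Hussey & Hughes (2007) variance formula.

    X            : clusters-by-periods 0/1 allocation matrix
    effect       : assumed intervention effect on the outcome scale
    sigma2_e     : individual-level residual variance
    tau2         : between-cluster variance
    n_per_period : individuals measured per cluster per period
    """
    X = np.asarray(X)
    I, T = X.shape
    s2 = sigma2_e / n_per_period       # variance of a cluster-period mean
    U = X.sum()
    W = (X.sum(axis=0) ** 2).sum()     # squared period (column) totals
    V = (X.sum(axis=1) ** 2).sum()     # squared cluster (row) totals
    var = (I * s2 * (s2 + T * tau2)) / (
        (I * U - W) * s2 + (U ** 2 + I * T * U - T * W - I * V) * tau2
    )
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return NormalDist().cdf(effect / np.sqrt(var) - z)
```

For example, with 4 clusters over 5 periods, 20 individuals per cluster-period, residual variance 1 and between-cluster variance 0.02, power rises with the assumed effect size as expected; the same machinery can be inverted to find the cluster-period size needed for a target power.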