
Strategies to improve retention in randomised trials

This is not the most recent version


Abstract


Background

Loss to follow‐up from randomised trials can introduce bias and reduce study power, affecting the generalisability, validity and reliability of results. Many strategies are used to reduce loss to follow‐up and improve retention but few have been formally evaluated.

Objectives

To quantify the effect of strategies to improve retention on the proportion of participants retained in randomised trials and to investigate if the effect varied by trial strategy and trial setting.

Search methods

We searched the Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE, PreMEDLINE, EMBASE, PsycINFO, DARE, CINAHL, Campbell Collaboration’s Social, Psychological, Educational and Criminological Trials Register, and ERIC. We handsearched conference proceedings and publication reference lists for eligible retention trials. We also surveyed all UK Clinical Trials Units to identify further studies.

Selection criteria

We included eligible retention trials of randomised or quasi‐randomised evaluations of strategies to increase retention that were embedded in 'host' randomised trials from all disease areas and healthcare settings. We excluded studies aiming to increase treatment compliance.

Data collection and analysis

We contacted authors to supplement or confirm data that we had extracted. For retention trials, we recorded data on the method of randomisation, type of strategy evaluated, comparator, primary outcome, planned sample size, numbers randomised and numbers retained. We used risk ratios (RR) to evaluate the effectiveness of the addition of strategies to improve retention. We assessed heterogeneity between trials using the Chi2 and I2 statistics. For main trials that hosted retention trials, we extracted data on disease area, intervention, population, healthcare setting, sequence generation and allocation concealment.

Main results

We identified 38 eligible retention trials. Included trials evaluated six broad types of strategies to improve retention: incentives, communication strategies, new questionnaire formats, participant case management, and behavioural and methodological interventions. For 34 of the included trials, retention was measured as response to postal and electronic questionnaires, with or without medical test kits. For four trials, retention was the number of participants remaining in the trial. Included trials were conducted across a spectrum of disease areas, countries, and healthcare and community settings. Strategies that improved trial retention were the addition of monetary incentives compared with no incentive for return of trial‐related postal questionnaires (RR 1.18; 95% CI 1.09 to 1.28, P value < 0.0001), the addition of an offer of a monetary incentive compared with no offer for return of electronic questionnaires (RR 1.25; 95% CI 1.14 to 1.38, P value < 0.00001) and an offer of a GBP20 voucher compared with GBP10 for return of postal questionnaires and biomedical test kits (RR 1.12; 95% CI 1.04 to 1.22, P value < 0.005). The evidence that shorter questionnaires are better than longer questionnaires was unclear (RR 1.04; 95% CI 1.00 to 1.08, P value = 0.07), as was the evidence for questionnaires relevant to the disease/condition (RR 1.07; 95% CI 1.01 to 1.14). Although each was based on the results of a single trial, recorded delivery of questionnaires seemed to be more effective than telephone reminders (RR 2.08; 95% CI 1.11 to 3.87, P value = 0.02) and a 'package' of postal communication strategies with reminder letters appeared to be better than standard procedures (RR 1.43; 95% CI 1.22 to 1.67, P value < 0.0001). An open trial design also appeared more effective than a blind trial design for return of questionnaires in one fracture prevention trial (RR 1.37; 95% CI 1.16 to 1.63, P value = 0.0003).

There was no good evidence that the addition of a non‐monetary incentive, an offer of a non‐monetary incentive, 'enhanced' letters, letters delivered by priority post, additional reminders, or questionnaire question order either increased or decreased trial questionnaire response/retention. There was also no evidence that a telephone survey was either more or less effective than a monetary incentive and a questionnaire. As our analyses are based on single trials, the effect on questionnaire response of using offers of charity donations, sending reminders to trial sites and when a questionnaire is sent, may need further evaluation. Case management and behavioural strategies used for trial retention may also warrant further evaluation.

Authors' conclusions

Most of the retention trials that we identified evaluated questionnaire response. There were few evaluations of ways to improve participants returning to trial sites for trial follow‐up. Monetary incentives and offers of monetary incentives increased postal and electronic questionnaire response. Some other strategies evaluated in single trials looked promising but need further evaluation. Application of the findings of this review would depend on trial setting, population, disease area, data collection and follow‐up procedures.

Plain language summary

Methods that might help to keep people in randomised trials

Background

Most trials follow people up to collect data through personal contact after they have been recruited. Some trials get data from other sources, such as routinely collected data or disease registers. There are many ways to collect data from people in trials, and these include using letters, the internet, telephone calls, text messaging, face‐to‐face meetings or the return of medical test kits. Most trials have missing data, for example, because people are too busy to reply, are unable to attend a clinic, have moved or no longer want to participate. Sometimes data are not recorded at study sites, or are not sent to the trial co‐ordinating centre. Researchers call this 'loss to follow‐up', 'drop out' or 'attrition' and it can affect the trial's results. For example, if the people with the most or least severe symptoms do not return questionnaires or attend a follow‐up visit, this will bias the findings of the trial. Many methods are used by researchers to keep people in trials. These encourage people to send back data by questionnaire, return to a clinic or hospital for trial‐related tests, or be seen by a health or community care worker.

Study characteristics

This review identified methods that encouraged people to stay in trials. We searched scientific databases for randomised studies (where people are allocated to one of two or more possible treatments in a random manner) or quasi‐randomised studies (where allocation is not really random, e.g. based on date of birth, order in which they attended clinic) that compared methods of increasing retention in trials. We included trials of participants from any age, gender, ethnic, cultural, language and geographic groups.

Key results

The methods that appeared to work were giving or offering a small amount of money for return of a completed questionnaire, and enclosing a small amount of money with a questionnaire together with the promise of a further small amount on its return. The effect of other ways to keep people in trials is still not clear and more research is needed to see if these really do work. Such methods include shorter questionnaires; sending questionnaires by recorded delivery; using a trial design where people know which treatment they will receive; sending specially designed letters with a self‐addressed stamped reply envelope followed by a number of reminders; offering a donation to charity or entry into a prize draw; sending a reminder to the study site about participants to follow up; sending questionnaires close to the time the person was last followed up; managing people's follow‐up; conducting follow‐up by telephone; and changing the order of questionnaire questions.

Quality of evidence

The methods that we identified were tested in trials run in many different disease areas and settings and, in some cases, were tested in only one trial. Therefore, more studies are needed to help decide whether our findings could be used in other research fields.

Authors' conclusions

Implications for methodological research

Trialists may consider including well thought out and adequately powered evaluations of strategies to increase retention in randomised trials. This could include a clear definition of retention strategies and of measures of retention. Trialists conducting future methodology trials can consider incorporating evaluations of strategies to increase retention at the design stage, so that power, sample size and funding arrangements are taken into account. Retention trials were often poorly reported, without CONSORT diagrams, clear primary outcomes, sample sizes, sociodemographic composition or power calculations. Considerable time was spent contacting authors for unreported data needed for a robust meta‐analysis. Trialists might consider adhering to the CONSORT guidelines for trial reporting in their reports, which would facilitate the synthesis of results in future methodology reviews. There is less research on ways to increase return of participants to trial sites for follow‐up, and on the effectiveness of strategies to retain trial sites in cluster and individually randomised trials; research in both areas would be very beneficial to trialists. There is no current system for identifying methodological trials in progress; until one is set up, it may be useful for systematic review authors to incorporate contacting trials units into their search strategy.

Background

Description of the problem or issue

Randomised trials are the gold standard for evaluating the effectiveness and efficacy of interventions. Non‐response or loss to follow‐up within study groups in randomised trials can compromise study findings by reducing the power of a study to detect a true difference between the control and the intervention group. Differential loss to follow‐up may lead to bias through exaggerated effects in favour of one of the groups. This can affect the generalisability and internal validity of the trial and the results (Fewtrell 2008; Schulz 2002).

Missing data from loss to follow‐up can be dealt with statistically by various methods including, for example, imputing values based on valid assumptions about the missing data to give a conservative estimate of the treatment effect. However, the risk of bias still remains when trials do not collect adequate data to give accurate estimates (Hollis 1999). Schulz and colleagues suggested that less than 5% loss to follow‐up may lead to minimal bias, while 20% loss to follow‐up can threaten trial validity, although the pattern of loss to follow‐up by treatment may also be an important factor (Schulz 2002). Loss to follow‐up from randomised trials can sometimes go unreported, and using different, but plausible, assumptions about outcomes for participants lost to follow‐up can change the results of randomised trials.

A number of trials have retrospectively examined the predictors of loss to follow‐up in different disease areas (Arnow 2007; Snow 2007; Villarruel 2006). In a trial for the treatment of chronic major depression, Arnow examined the predictors of time to, and reason for, dropout of participants (Arnow 2007). Ethnic minorities and participants with comorbid anxiety were more likely to drop out. In a randomised trial of a human immunodeficiency virus (HIV) prevention intervention for Latino youths, English speakers were more likely to attend follow‐up (Villarruel 2006). Snow examined the predictors of clinic attendance and dropout at the 11‐year follow‐up of the Lung Health study (Snow 2007). Age, gender, number of cigarettes smoked per day, marital status and whether the participant's children smoked were predictors of clinic attendance. These analyses showed that attendance for follow‐up can be trial and disease specific. An awareness of these factors can help trialists decide which strategies to adopt to improve retention in their randomised trial.

Description of the methods being investigated

Strategies to improve trial retention include those designed to generate maximum data return or compliance to follow‐up procedures. These can include frequency and timing of follow‐up visits (follow‐up shortly after randomisation versus long‐term follow‐up), nature of the outcome to be measured (survey based self reported outcomes versus morbidity or mortality reporting), target of the intervention (participants versus providers versus trial sites), and type of intervention (incentives versus communication strategies versus participant case management).

How these methods might work

These retention strategies are designed to motivate participants (Leathem 2009) or the trial site to continue participating in a trial once they have been recruited and randomised. Some strategies are designed to encourage participants to identify with the trial and to promote a sense of value and belonging, for example, using trial identity cards. Other strategies are designed to keep participants engaged in the trial, for example, by sending participant newsletters. To encourage a proactive approach to trial retention, strategies can be designed to target participants directly through letters, emails and telephone calls, or to target them via the clinicians involved in participant follow‐up, for example, through regular communication with trial sites. Strategies have been specifically developed to promote retention in areas of research where it is particularly challenging, such as mental health (Furimsky 2008; Loue 2008), weight loss (Couper 2007; Goldberg 2005), rare diseases (McKinstry 2007), substance abuse (El Khorazaty 2007), research involving minority ethnic groups (Eakin 2007; Loftin 2005; Villacorta 2007), and vulnerable groups such as older people (Burns 2008) or people with HIV (Anastasi 2005).

Why it is important to do this review

As drop‐out and incomplete data cause problems in the conduct, analysis and interpretation of randomised trials, it is important to identify retention strategies that minimise this loss as far as possible.

Davis and colleagues conducted a review of community‐based trials published from 1990 to 1999 and described retention strategies and retention outcomes for this area (Davis 2002). Robinson and colleagues conducted a systematic review of strategies for retaining study participants (Robinson 2007). While both reviews identified studies providing data on retention rates from primary studies and strategies used to promote retention, these were not evaluated quantitatively in either review.

A systematic review of strategies to retain participants in population‐based cohort studies found that providing incentives was consistently associated with retention in these studies and that response generally increased with increasing incentive value (Booker 2011). Reminder letters, repeat questionnaires and reminder calls also increased response rates. Furthermore, the Edwards et al. Cochrane methodology review on methods to increase response rates to postal and electronic questionnaires found that including monetary incentives, keeping the questionnaire short and contacting people before sending the questionnaire were ways to increase response rates (Edwards 2009). That review was not restricted to research exclusively within randomised trials and covered both healthcare and non‐healthcare settings, so it is difficult to know which of these strategies would be applicable to randomised trials in health care. Reasons for drop‐out in cohort studies and surveys may differ from those in randomised trials. For example, in trials, participants may be randomised to a study group that is not their preferred choice, and factors around randomisation and the type of intervention mean that strategies increasing retention in cohort studies and surveys cannot necessarily be extrapolated to randomised trials.

The challenges of boosting recruitment to randomised trials are often described alongside retention in the literature. Some similar strategies may be used in an attempt to both increase recruitment and improve retention, such as giving incentives together with extra information. Rendell et al. assessed the evidence for the effect of disincentives and incentives on the extent to which clinicians invite eligible people to participate in randomised trials of healthcare interventions (Rendell 2007). No randomised trials of interventions were identified and the authors concluded that some aspects of the conduct of the trial might affect a clinician's willingness to invite people to participate, for example, the way the clinician is invited to take part and the availability of support staff. In another Cochrane methodology review, Treweek et al. assessed strategies to improve recruitment to research studies (Treweek 2010), but recruitment to trials presents different challenges to participant engagement and follow‐up. For example, strategies to market a trial and win over participants during the recruitment phase may be different to strategies to keep participants engaged in a trial (Francis 2007).

Many untested strategies are used by researchers to try to improve retention in randomised trials. Therefore, because loss to follow‐up can compromise the validity of a trial's findings, delay results and, in some circumstances, increase the costs of the research, a systematic review is needed to assess the effect of strategies to improve retention in randomised trials.

Objectives

To quantify the effect of strategies to improve retention in randomised trials.

To investigate if the effect varies by the type of strategy, trial setting and healthcare area.

Methods

Criteria for considering studies for this review

Types of studies

We included completed randomised trials that compared strategies to increase retention embedded in host randomised trials (hereafter referred to as retention trials). The retention trials were embedded in real trials (host trials) and not hypothetical trials. The retention trials included at least one randomised comparison of two or more strategies to improve retention, or compared one or more strategies with no strategy. In anticipation of few trials, we included retention trials if they were randomised or quasi‐randomised (e.g. had used alternation, date of birth or case record number as a method of allocating participants) (Lefebvre 2008).

Strategies to improve retention were designed for impact after participants were recruited and randomised to either the intervention or control group of the main and the retention trial. We included trials to increase response to postal and electronic questionnaires. We excluded trials to increase recruitment only. We excluded cohort studies with embedded randomised retention trials, which were the subject of a separate systematic review (Booker 2011).

Types of data

We included randomised and quasi‐randomised retention trials within the context of a host randomised trial with participants from any age, gender, ethnic, cultural, language and geographic groups. We included unpublished and published participant retention data from randomised trials addressing healthcare (including all disciplines and disease areas) and non‐healthcare (education, social sciences) topics. We also included trials set in the community that were healthcare related.

Types of methods

We considered any strategy aimed at increasing retention, directed towards the clinician, researcher or participant. We included strategies compared with each other or with usual study procedures. We also included trials with any combination of strategies to increase retention. Strategies could be participant or trial management focused and include any of the following:

  • strategies to motivate participants and clinicians (e.g. incentives or gifts);

  • strategies to improve communication with participants or trial sites (e.g. enhanced letters);

  • methodology strategies (e.g. shorter length of follow‐up or variation in follow‐up visit frequency);

  • strategies to improve social support for participant retention.

Types of outcome measures

Primary outcomes

We used retention (the proportion of participants retained) at the primary analysis point as defined in each individual retention trial as the primary outcome because it is easier to interpret than attrition/loss to follow‐up (i.e. the proportion lost or not retained). In cases where the time point for measurement of the primary outcome was not predefined, we took the first time point reported for analysis. In most cases, this was final response. If retention at a number of time points was reported and no clear time point for the primary outcome for the retention trial was stated, we took data for the nearest time point to the intervention in the retention trial analyses.

Secondary outcomes

Retention of participants at secondary analysis points.

Search methods for identification of studies

We designed a search strategy to identify published and unpublished randomised and quasi‐randomised trials that assessed strategies to improve retention in randomised trials in healthcare, education and social science settings. We searched bibliographic databases for published trials and trial registers for trials that had not been fully published, or were unpublished or ongoing. We applied no language restrictions.

Electronic searches

Each search comprised an established filter to identify randomised trials plus free‐text terms and database subject headings relating to reducing loss to follow‐up or increasing retention (Appendix 1). Electronic databases searched included:

  • the Cochrane Central Register of Controlled Trials (CENTRAL) (to May 2012);

  • PreMEDLINE (to April 2010);

  • MEDLINE (1950 to May 2012) (Appendix 2), EMBASE (1980 to May 2012) (Appendix 3) and PsycINFO (1806 to May 2012) (Appendix 4), searched using an Ovid platform;

  • Database of Abstracts of Reviews of Effects (DARE, in The Cochrane Library May 2012);

  • CINAHL (Cumulative Index to Nursing and Allied Health; 1981 to May 2012) (Appendix 5), using the EBSCOHost platform;

  • Campbell Collaboration's Social, Psychological, Educational and Criminological Trials Register (C2‐SPECTR; http://geb9101.gse.upenn.edu/; searched May 2009; website no longer accessible) (Appendix 6);

  • Education Resource Information Centre (ERIC; 1966 to May 2009) (Appendix 7), using Dialog Datastar.

Searching other resources

We handsearched the reference lists of relevant publications and reviews to identify further trial reports (Horsley 2011) (Appendix 8). We also searched the abstracts of Society for Clinical Trials (SCT) meetings from 1980 to 2012, the Current Controlled Trials metaRegister of Controlled Trials (mRCT) (www.controlled‐trials.com/mrct), the Cochrane Methodology Register (in The Cochrane Library to April 2012) and the World Health Organization (WHO) trials platform (apps.who.int/trialsearch). We conducted a survey of Clinical Trial Units in the UK to identify further eligible trials not identified through other sources. The review was also presented at the Society for Clinical Trials 31st Conference in Baltimore, USA, in May 2010 and advertised on the conference notice board with the aim of identifying potentially eligible trials from outside the UK.

Data collection and analysis

Selection of studies

One review author (VB) selected potentially eligible trials from the titles and abstracts retrieved by the searches, using a predesigned study eligibility screening form. We were over‐inclusive when screening: 0.7% (168/24,304) of the records identified were sent to a second review author (GR) for screening, representing 23% (168/735) of all potentially eligible records identified. We obtained full‐text papers and two review authors (VB, GR) reviewed potentially eligible trials for inclusion. We contacted study authors for electronic copies of papers that we could not access through library sources, and were able to obtain copies of all the potentially eligible papers that we wanted to screen. We resolved disagreements by discussion with a third review author (SS). When necessary, we sought information from the original investigators to clarify the eligibility of potentially eligible trials.

Data extraction and management

One review author (VB) extracted data from eligible retention trial and associated host trial papers and a second review author (JT) checked the entries. We reached consensus on any disparities by discussion with a third review author (SS). Data extracted for the host trial were aim, setting, disease area, comparators, primary outcome, sample size calculation, inclusion/exclusion criteria, sequence generation and allocation concealment, and numbers randomised to each group. For the embedded retention trial, we extracted data for onset in relation to the host trial, source of the sample, aim, primary outcome and type of follow‐up. The retention strategy details included type, frequency and timing of administration, method of randomisation, numbers randomised, included and retained at primary analysis, and data required for the risk of bias assessment.

Assessment of risk of bias in included studies

To assess the validity of each retention trial we judged them against the four domains of the Cochrane 'Risk of bias' tool (Higgins 2008a). To assess selection bias, we recorded how the allocation sequence was generated at study level and the methods used to conceal the allocation. We assessed performance bias by recording methods used to blind participants if considered appropriate to do so. For some interventions, participants could not be blinded to the intervention (e.g. where vouchers, cash or gifts were administered). However, in these cases, study personnel could be blinded to the allocation if administration of the intervention was carried out by someone unaware of the allocation.

As retention is the subject of our review, and retention of participants is the primary outcome, attrition from the trials does not constitute a bias and has not been included in the 'Risk of bias' tables. We assessed each included retention trial for selective outcome reporting by recording the primary outcome for the trial and the outcomes for which results were reported. A judgement was made about each trial for each risk of bias domain assessed. For completed host trials (within which retention trials were embedded), we only assessed sequence generation and allocation concealment, in order to ensure the host trial was randomised.

Measures of the effect of the methods

We calculated risk ratios (RR) and their 95% confidence intervals (CI) for retention to determine the effect of strategies on this outcome.
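As an illustration (not code from the review), an RR and its 95% CI for a retention outcome can be computed from the retained and randomised counts in each group using the standard log‐scale formula; the counts below are hypothetical:

```python
import math

def risk_ratio(retained_1, n_1, retained_2, n_2):
    """Risk ratio and 95% CI comparing retention proportions in two groups.

    Uses the standard SE of log(RR) for a binary outcome:
    SE = sqrt(1/a - 1/n1 + 1/c - 1/n2).
    """
    rr = (retained_1 / n_1) / (retained_2 / n_2)
    se_log_rr = math.sqrt(1 / retained_1 - 1 / n_1 + 1 / retained_2 - 1 / n_2)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Hypothetical example: 90/100 retained with the strategy vs 80/100 without.
rr, lower, upper = risk_ratio(90, 100, 80, 100)
```

An RR above 1 with a CI excluding 1 would indicate that the strategy improved retention.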

Unit of analysis issues

For retention trials that randomised individuals or clusters, the unit of analysis was the participant. For cluster randomised trials that ignored clustering in the analysis, we inflated the standard errors (SE) to avoid overprecise estimates of effect as follows (Higgins 2008b).

  1. We calculated the RR, 95% CI and SE based on participants in the usual way (i.e. ignoring clustering).

  2. We then inflated this standard error using the design effect to get an adjusted SE: adjusted SE = SE × √(design effect), where design effect = 1 + (M − 1) × ICC, with M = mean cluster size and ICC = the intracluster correlation coefficient.

  3. Where published ICCs were not available, we used the mean ICC from appropriate external estimates for Land 2007. This was the mean of estimates for the return of EuroQol questionnaires (ICC = 0.054) from a source recommended by the Cochrane Handbook for Systematic Reviews of Interventions (Section 16.3.4) (Higgins 2008b) and www.abdn.ac.uk/hsru/documents/iccs‐web.xls (last accessed 27 September 2013).

  4. We entered the effect estimate and the new adjusted SE into Review Manager 5 using the generic inverse variance method (RevMan 2012).
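The SE inflation in steps 1 to 3 can be sketched as follows (illustrative only; the mean cluster size, ICC and unadjusted SE are assumed inputs):

```python
import math

def design_effect(mean_cluster_size, icc):
    """Design effect for a cluster randomised trial: 1 + (M - 1) * ICC."""
    return 1 + (mean_cluster_size - 1) * icc

def adjusted_se(se, mean_cluster_size, icc):
    """Inflate a participant-level SE by the square root of the design effect."""
    return se * math.sqrt(design_effect(mean_cluster_size, icc))

# Hypothetical example: mean cluster size 20, the EuroQol ICC of 0.054
# mentioned in the review, and an unadjusted SE of 0.05.
adj = adjusted_se(0.05, 20, 0.054)
```

With these inputs the design effect is about 2.0, so the SE grows by a factor of roughly 1.4 and the confidence interval widens accordingly.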

Where the number of participants randomised was not clearly stated in the included study report, we contacted the study authors for this information.

Dealing with missing data

We contacted study authors for data for the risk of bias assessment, numbers randomised to each group and numbers retained in each group at the primary endpoint. We described outcomes with insufficient data qualitatively. For time‐to‐event outcomes, we used the time point of the host study primary outcome, taking account of censoring if necessary and if the data were available.

Assessment of heterogeneity

We measured heterogeneity of the intervention effect using the Chi2 statistic at a significance level of 0.10 and the I2 statistic (Higgins 2003), and explored it through subgroup analyses.
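For concreteness, Cochran's Q (the Chi2 heterogeneity statistic) and I2 can be computed from per‐trial effect estimates and their standard errors. This is a generic sketch of the standard formulas, not the review's own code, and the inputs are hypothetical:

```python
def heterogeneity(effects, ses):
    """Cochran's Q and I^2 for k trial effect estimates with standard errors.

    Q = sum of w_i * (e_i - pooled)^2 with inverse-variance weights w_i;
    I^2 = max(0, (Q - df) / Q) * 100, df = k - 1.
    """
    weights = [1 / se**2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled)**2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Three hypothetical log(RR) estimates with equal standard errors.
q, i2 = heterogeneity([0.1, 0.2, 0.3], [0.05, 0.05, 0.05])
```

Q is compared with a Chi2 distribution on k − 1 degrees of freedom (here at the 0.10 significance level), while I2 describes the percentage of variability due to heterogeneity rather than chance.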

Assessment of reporting biases

We would have investigated reporting bias using tests for funnel plot asymmetry if sufficient data had been available (Egger 1997; Sterne 2008).

Data synthesis

If there was no substantial heterogeneity, we pooled RRs using the fixed‐effect model. If heterogeneity was detected and could not be explained by subgroup or sensitivity analyses, we used the random‐effects model or did not pool results.
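A fixed‐effect (inverse‐variance) pooling of trial‐level RRs can be sketched as below, working on the log scale; the input RRs and SEs are hypothetical, not results from the review:

```python
import math

def pooled_rr_fixed(rrs, ses):
    """Fixed-effect pooled RR from per-trial RRs and SEs of log(RR).

    Each trial's log(RR) is weighted by the inverse of its variance;
    the pooled SE is 1 / sqrt(sum of weights).
    """
    weights = [1 / se**2 for se in ses]
    log_pooled = sum(w * math.log(rr) for w, rr in zip(weights, rrs)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    lower = math.exp(log_pooled - 1.96 * se_pooled)
    upper = math.exp(log_pooled + 1.96 * se_pooled)
    return math.exp(log_pooled), lower, upper

# Two hypothetical trials, each with RR 1.2 and SE(log RR) 0.1.
pooled, lower, upper = pooled_rr_fixed([1.2, 1.2], [0.1, 0.1])
```

Pooling two concordant trials leaves the point estimate unchanged but narrows the confidence interval, which is the intended behaviour of the fixed‐effect model.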

For factorial trials (Sharp 2006a‐h; Kenton 2007a‐d), all main effects were included as separate trial comparisons if they addressed different categories of strategies. Where the main effects addressed two or more strategies within the same category (e.g. Bowen 2000abc), we combined the relevant intervention groups and compared them with the control group. We also compared each intervention group with the control group, as separate trial comparisons, in exploratory analyses. For one 2 x 2 x 2 x 2 factorial trial (Renfroe 2002a‐d), the numbers randomised for each group were not available at the time of analysis, so comparison groups were collapsed as far as possible and then treated as separate trial comparisons in the appropriate analyses. For two three‐armed trials that compared two similar intervention groups with one control group, we combined the intervention groups and compared them with the control group for the main analyses (Bauer 2004ab; Khadjesari 2011 1abc). We also compared each intervention group as separate trial comparisons in exploratory analyses.

These approaches allowed full exploration of the data and also avoided double counting and over‐precise pooled estimates of effect in our main analyses. However, this also meant that there were occasionally a greater number of trial comparisons than trials.

Computations for the absolute benefits of effective strategies on questionnaire response and trial retention were based on absolute risk reductions derived from meta‐analysis RRs (Cochrane Handbook for Systematic Reviews of Interventions, Section 12.5.4.2: Schünemann 2008).
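For example, under this approach an absolute benefit can be derived by applying the pooled RR to an assumed control‐group retention; the 70% control retention used below is purely illustrative, not a figure from the review:

```python
def absolute_benefit_per_1000(control_retention, rr):
    """Extra participants retained per 1000, given an assumed control-group
    retention proportion and a pooled risk ratio (risk difference * 1000)."""
    return round(control_retention * (rr - 1) * 1000)

# Hypothetical: 70% retention without the strategy, pooled RR of 1.18
# (the review's estimate for monetary incentives with postal questionnaires).
extra = absolute_benefit_per_1000(0.7, 1.18)
```

Under this assumed control retention, an RR of 1.18 corresponds to roughly 126 additional responses per 1000 questionnaires sent; the absolute benefit scales with the assumed control‐group retention.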

Subgroup analysis and investigation of heterogeneity

To explore the effect of different strategies on trial retention, we planned the following subgroup analyses by the type of strategy used in included retention trials.

  • Whether the strategy was compared with usual follow‐up or other strategies.

  • Whether in healthcare or non‐healthcare settings.

  • Whether assessment of retention was immediate or longer term (e.g. if a response to a questionnaire was expected immediately or at later time points).

  • Whether the strategy was participant or management focused.

However, we identified such a diversity of retention trials and interventions that these analyses were inappropriate or not possible. Therefore, different types of strategies were analysed separately and new subgroups were defined within these before we conducted the analyses.

(a) Incentives

We subgrouped retention trials or trial comparisons evaluating the addition of an incentive strategy versus none as follows for analysis.

  1. Monetary incentives given upfront, defined as money given to the trial participant prior to data collection in cheque, cash or voucher format.

  2. Non‐monetary incentives, defined as gifts, for example, pens or certificates.

  3. Offers of monetary incentives after data collection, defined as a promise of the incentive after return of outcome data through attendance for scheduled follow‐up or receipt of follow‐up questionnaires.

  4. Offers of non‐monetary incentives defined as a promise of the non‐monetary incentive after return of outcome data through attendance for scheduled follow‐up or receipt of follow‐up questionnaires.

We subgrouped retention trials or trial comparisons comparing different values of monetary incentives into:

  1. those offering incentives;

  2. those both giving and offering an incentive for any subsequent data (e.g. sending GBP5 with a questionnaire plus an offer of a further GBP5 if the questionnaire is returned).

We analysed retention trials evaluating the addition of a monetary incentive versus either an offer of a monetary incentive or follow‐up by telephone separately.

(b) Communication

We grouped retention trials or trial comparisons of the effect of different communication strategies into letter, post and reminder strategies for analysis as follows.

  1. Enhanced versus standard cover letter.

  2. Total design method versus standard postal communication strategy.

  3. Priority versus regular post.

  4. Additional reminders versus usual reminders to trial sites.

  5. Additional reminders versus usual follow‐up to trial participants.

  6. Early versus late administration of questionnaire (i.e. sending questionnaires two to three weeks after a follow‐up visit versus one to four months after a follow‐up visit).

  7. Recorded delivery versus telephone reminder.

(c) Questionnaire structure

We subgrouped trials of questionnaire strategies into length of questionnaire, clarity of meaning, order of questions and layout as follows.

  1. Short versus long questionnaire.

  2. Long and clear questionnaire versus short and condensed questionnaire.

  3. Medical condition questions first versus generic questions first.

  4. Relevance of questionnaires: alcohol versus mental health questionnaires.

There were no subgroups for behavioural, case management and methodology retention trials.

Our analyses focused on the primary endpoint of retention. We initially pooled retention trials within subgroups using the fixed‐effect model and quantified heterogeneity. We assessed whether these subgroups had a differential impact on retention using the test for interaction. We did not pool trials if results were inconsistent or heterogeneity was excessive.
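The fixed-effect pooling and heterogeneity assessment described above can be sketched as an inverse-variance meta-analysis of log risk ratios with Cochran's Q and the I² statistic. Cochrane analyses are typically run in RevMan (often with Mantel-Haenszel weighting), so this is an illustrative approximation, not the review's exact computation, and the function name is our own:

```python
import math

def pool_fixed_effect(rrs, ses):
    """Inverse-variance fixed-effect pooling of per-trial risk ratios.

    rrs: per-trial risk ratios; ses: standard errors of the log RRs.
    Returns the pooled RR, its 95% CI, Cochran's Q and I^2 (%)."""
    logs = [math.log(rr) for rr in rrs]
    weights = [1.0 / se ** 2 for se in ses]
    pooled_log = sum(w * l for w, l in zip(weights, logs)) / sum(weights)
    # Cochran's Q: weighted squared deviations from the pooled estimate.
    q = sum(w * (l - pooled_log) ** 2 for w, l in zip(weights, logs))
    df = len(rrs) - 1
    i_sq = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * se_pooled),
          math.exp(pooled_log + 1.96 * se_pooled))
    return math.exp(pooled_log), ci, q, i_sq
```

Two trials with identical results pool to the same RR with I² of zero, whereas widely discrepant trial results inflate Q and I², which under the rule above would argue against pooling.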

Sensitivity analysis

To assess the robustness of the results we planned sensitivity analyses that excluded quasi‐randomised retention trials.

Results

Description of studies

The studies are described in the Characteristics of included studies, Characteristics of studies awaiting classification, and Characteristics of excluded studies tables.

Results of the search

We identified 24,304 abstracts, titles and other records from database searches to May 2012, handsearches of reviews, lists of references in included papers, SCT conference abstracts (to 2012), personal contact with trialists, and the survey of UK Clinical Trials Units (Figure 1). We screened 735 full‐text papers, reports and manuscripts for eligible studies. Of 68 potentially eligible studies, we found 30 to be subsequently ineligible. This left 38 retention trials for inclusion in the review. The retention trials were embedded in real trials (host trials). We identified 11 retention trials from CENTRAL, MEDLINE and CINAHL; 14 from handsearching reviews, conference abstracts, and reference lists of eligible papers; and 13 through personal communications or correspondence with clinical trials units. We evaluated six broad types of strategy to improve retention in randomised trials. Most strategies were targeted at increasing questionnaire response. The strategies used for this were incentives, communication, methodology and questionnaire design strategies. There was minimal evidence for the use of behavioural and case management strategies to improve retention.


Attrition study flow diagram.

Included studies

Of the 38 eligible retention trials, 28 were published in full, one as an abstract (Kenton 2007a‐d), and one as part of a PhD thesis (Nakash 2007). Four retention trial publications contained two trials each (Khadjesari 2011; McCambridge 2011; McColl 2003; Severi 2011). Eight retention trials are unpublished as of June 2013 (Bailey 1; Bailey 2; Edwards 2001; Land 2007; Letley 2000; MacLennan; Marson 2007; Svoboda 2001).

Host trials

Twenty‐two host trials included a single retention trial (AVID investigators 1997; Boyd 2002; Chaffin 2009; Cooke 2009; Cox 2008; Gail 1992; Dennis 2009; Hughes 1984; International Stroke Trial Group 1997; Kenyon 2001; Lamb 2007; Leigh Brown 2001; Marson 2007 (2); Omenn 2006; Porterhouse 2005; Rothert 2006; Tai 1999; Tilbrook 2011; TOMBOLA 2009a; TOMBOLA 2009b; UK BEAM 2004). Two host trials from this group were unpublished (for the retention trials by Ashby 2011 and Land 2007).

The other host trials included multiple retention trials (one unpublished for the retention trials by Bailey 1 and Bailey 2). Two retention trials (Ford 2006; Subar 2001) were embedded in the US‐based Prostate, Lung, Colorectal, Ovarian (PLCO) screening trial of Prorok 2000; two (Avenell 2004; MacLennan) in the RECORD fracture prevention trial (RECORD 2007); two (Edwards 2001; Svoboda 2001) in the CRASH trial (CRASH Trial collaborators 2004); four (Khadjesari 2011 1abc; Khadjesari 2011 2; McCambridge 2011 1; McCambridge 2011 2) in the Down your Drink Trial (Murray 2007); two (Bailey 1; Bailey 2) in a feasibility study for the Sex unzipped website (unpublished); two (Severi 2011 1; Severi 2011 2) in the Text to Stop smoking cessation trial (Free 2011); and two (McColl 2003 1; McColl 2003 2) in the COGENT trial (Eccles 2002).

Participants and settings

Included retention trials were conducted in a broad spectrum of clinical conditions and geographical settings (see Appendix 9). Eight included retention trials were embedded in trials for the treatment of alcohol and smoking dependency (Bauer 2004ab; Hughes 1989; Khadjesari 2011 1abc; Khadjesari 2011 2; McCambridge 2011 1; McCambridge 2011 2; Severi 2011 1; Severi 2011 2), and four in trials investigating treatments for injuries (Edwards 2001; Gates 2009; Nakash 2007; Svoboda 2001). Six retention trials were set in treatment trials for cancer, cardiovascular disease, epilepsy and back pain (Dorman 1997; Land 2007; Letley 2000; Man 2011; Marson 2007; Renfroe 2002a‐d), and four were embedded in screening trials for cancer, postnatal depression, and diseases of the elderly (Ford 2006; Kenton 2007a‐d; Sharp 2006a‐h; Subar 2001). Seven retention trials were embedded in prevention trials, which included two cancer prevention trials for lung and breast cancer (Bowen 2000abc; Sutherland 1996), one migraine prevention trial (Ashby 2011), and three fracture prevention trials (Avenell 2004; Cockayne 2005; MacLennan). Four retention trials were conducted in clinical management trials for orthopaedics, asthma, diabetes and angina (Leigh Brown 1997; McColl 2003 1; McColl 2003 2; Tai 1997). Six retention trials were conducted in other areas: exercise (Cox 2008), parenting (Chaffin 2009), weight management (Couper 2007), neonatal medicine (Kenyon 2005), and sexual health promotion (Bailey 1; Bailey 2).

Twenty‐five retention trials were UK based, nine were USA based and two were set in Canada. The remainder were set in the Czech Republic and Australia (see Characteristics of included studies table).

Retention trials were embedded in host trials that recruited participants from different settings. Five trials recruited participants directly from the community. Sixteen trials were conducted through secondary care facilities. One trial recruited participants through a combination of state workers' compensation programmes, occupational and physician clinics, a surveillance programme and union records. Six UK trials recruited solely through general practitioner (GP) practices and two used a combination of recruitment through GP practices and the media. Seven trials recruited participants via the Internet; six of these were UK based and one was US based. For one US‐based smoking cessation trial, it was unclear how participants were recruited (see Characteristics of included studies table).

Design of included retention trials

One trial was hosted in a cluster randomised trial and used this design to evaluate a strategy to improve retention (Land 2007). Four retention trials used different factorial designs (Bowen 2000abc; Kenton 2007a‐d; Renfroe 2002a‐d; Sharp 2006a‐h). There was also one three‐armed trial (Bauer 2004ab), and three four‐armed trials (Khadjesari 2011 1abc; McCambridge 2011 1; McCambridge 2011 2).

Five trials used quasi‐randomisation to allocate participants (Bowen 2000abc; Ford 2006; Gates 2009; McColl 2003 1; McColl 2003 2). Two used participant identification numbers (Ford 2006; Gates 2009), and two allocated the first half of a simple random sample of participants to receive one version of a questionnaire, while the remaining half was allocated to a second version (McColl 2003 1; McColl 2003 2). One retention trial used day of clinic visit to allocate participants (Bowen 2000abc).

All trials targeted individual trial participants, except one that targeted trial sites (Land 2007).

We recorded the timing of randomisation in the host trial versus the timing of randomisation in the retention trial. Four trials commenced during a randomised pilot study for the host trial (Khadjesari 2011 1abc; Letley 2000; McCambridge 2011 1; Sutherland 1996). One study started before the host trial (Chaffin 2009). Twenty‐nine trials commenced during follow‐up for the host trial (Ashby 2011; Avenell 2004; Bailey 1; Bailey 2; Bowen 2000abc; Cockayne 2005; Couper 2007; Cox 2008; Dorman 1997; Edwards 2001; Ford 2006; Gates 2009; Khadjesari 2011 2; Land 2007; Leigh Brown 1997; MacLennan; Man 2011; Marson 2007; McCambridge 2011 2; McColl 2003 1; McColl 2003 2; Nakash 2007; Renfroe 2002a‐d; Severi 2011 1; Severi 2011 2; Sharp 2006a‐h; Subar 2001; Svoboda 2001; Tai 1997). For one trial, it was unclear when the retention trial started in relation to the host trial (Kenton 2007a‐d). Three retention trials started after the host trial had finished (Bauer 2004ab; Hughes 1989; Kenyon 2005): Kenyon 2005 followed up seven‐year‐old children of mothers enrolled in the ORACLE trial (Kenyon 2001), Bauer 2004ab followed up participants in the COMMIT smoking cessation trial (Gail 1992) eight years after the original trial was completed, and Hughes 1989 followed up participants in a smoking cessation trial six months after that study finished (Hughes 1984).

Strategies to improve retention

Retention in trials and response to questionnaires were the outcomes measured for all included trials. The included trials evaluated six different types of strategies to improve response or retention. Incentives, communication strategies, variation in questionnaire design, methodology strategies, and combinations of communication and incentive strategies were evaluated for improving response to postal and electronic questionnaires. Behavioural strategies, case management and some non‐monetary incentives were used to encourage participants to return to trial sites for follow‐up visits. Each type of strategy is described separately below.

Outcome measures in the included trials

Thirty‐four retention trials measured response to questionnaires. Among these, questionnaires were administered by post in 26 trials, electronically in four, and by interview in one. For another three retention trials, response was return of biomedical kits or biomedical kits plus a questionnaire (see Characteristics of included studies table).

Four included trials measured the number of participants remaining in the trial (Bowen 2000abc; Chaffin 2009; Cox 2008; Ford 2006).

Ten included trials specified that their primary outcome was questionnaire response at a particular time point: McCambridge 2011 1 measured response at one and three months, McCambridge 2011 2 measured response at three and 12 months, and Khadjesari 2011 1abc and Khadjesari 2011 2 measured response within 40 days of the first reminder. For Severi 2011 1, the primary outcome was completed follow‐up at 30 weeks from randomisation, Severi 2011 2 used return of specimens one month after a telephone call, Avenell 2004 used retention at one year measured by questionnaire return but also reported retention at four and eight months. Cockayne 2005 and Sharp 2006a‐h had final follow‐up questionnaire response at any time as their primary outcome.

Two included trials reported questionnaire response at one time point only but without specifying that this was the primary outcome for the trial (Edwards 2001; Svoboda 2001). These trials measured response at three months from the questionnaire being sent. One trial reported trial retention at one time point only (three years) but without specifying that this was the primary outcome for the trial (Ford 2006). This was measured as completing the next cancer screening in a cancer screening trial. In each of these three trials, we used these data for analyses.

Two trials recorded questionnaire response at two time points without stating which was the primary outcome (Dorman 1997; Gates 2009). One trial recorded retention at two time points without stating which was the primary outcome (Cox 2008). We used data for response/retention after the first contact with respondents as the primary outcome for analyses. One trial reported response at three time points (4 weeks, 12 weeks and 9 months), which were all stated as the primary outcome (Nakash 2007). We used the data for week four in our main analysis.

Five trials reported data in survival curves. For these, we used the final analysis point (Ashby 2011; Bowen 2000abc; Chaffin 2009; Land 2007; Sutherland 1996). Authors confirmed these data after extraction. Fifteen trials reported the number of questionnaires returned with no time point specified (Bauer 2004ab; Couper 2007; Hughes 1989; Kenton 2007a‐d; Kenyon 2005; Leigh Brown 1997; Letley 2000; MacLennan; Man 2011; Marson 2007; McColl 2003 1; McColl 2003 2; Renfroe 2002a‐d; Subar 2001; Tai 1997).

Addition of incentive versus none

There were 14 retention trials of incentives and 19 trial comparisons (Table 1). Thirteen trials were aimed at improving questionnaire response in trials and one trial was aimed at improving return for follow‐up at the trial site (Bowen 2000abc). The different incentive strategies aimed at improving questionnaire response were vouchers, cash, a charity donation, entry to prize draws, cheques, a certificate of appreciation and offers of study results. Incentive strategies aimed at improving retention were certificates of appreciation and lapel pins. The value of incentives used in UK evaluations ranged from GBP5 to GBP20, given in cash, cheque or voucher format. The value of incentives used in US‐based studies was USD2 to USD10. For offers of entries into prize draws, the values were higher, ranging from GBP25 to GBP250 for UK prize draws and USD50 for US‐based prize draws. One trial evaluated giving a monetary incentive with a promise of a further incentive for return of trial data (Bailey 2).

Table 1. Trials evaluating incentive strategies

| Trial or trial comparison | Incentive group(s) | Control group | Outcome type |
| --- | --- | --- | --- |
| Addition of incentive vs. none | | | |
| Bauer 2004ab | a) USD10 cheque; b) USD2 cheque (arms combined for main analysis) | No incentive | DNA specimen kit return plus postal questionnaire response |
| Gates 2009 | GBP5 voucher | No incentive | Postal questionnaire response |
| Kenyon 2005 | GBP5 voucher | No incentive | Postal questionnaire response |
| Khadjesari 2011 1ac | a) Offer of GBP5 voucher; c) offer of entry into GBP250 prize draw (groups combined for main analysis) | No incentive | Internet‐based questionnaire response |
| Khadjesari 2011 2 | Offer of GBP10 Amazon.co.uk voucher | No incentive | Internet‐based questionnaire response |
| Bowen 2000abc | a) Certificate; b) pin; c) pin and certificate (groups combined for main analysis) | No incentive | Participant retention |
| Renfroe 2002a | Certificate of appreciation | No certificate of appreciation | Postal questionnaire response |
| Sharp 2006a | Pen | No pen | Postal questionnaire response |
| Sharp 2006b | Pen | No pen | Postal questionnaire response |
| Sharp 2006c | Pen | No pen | Postal questionnaire response |
| Sharp 2006d | Pen | No pen | Postal questionnaire response |
| Cockayne 2005 | Offer of study results | No offer | Postal questionnaire response |
| Hughes 1989 | Offer of free reprint of results | No offer | Postal questionnaire response |
| Khadjesari 2011 1b | Offer of GBP5 charity donation | No offer | Internet‐based questionnaire response |
| Addition of monetary incentive to both groups | | | |
| Bailey 1 (unpublished) | Offer of GBP20 shopping voucher | Offer of GBP10 shopping voucher | Postal questionnaire response |
| Bailey 2 (unpublished) | Shopping voucher: GBP10 in advance and GBP10 on data return | Shopping voucher: GBP5 in advance and GBP5 on data return | Questionnaire response and chlamydia kit return |
| Addition of monetary incentive vs. offer of incentive | | | |
| Kenton 2007a | USD2 coin | Draw for USD50 gift voucher | Postal questionnaire response |
| Kenton 2007b | USD2 coin | Draw for USD50 gift voucher | Postal questionnaire response |
| Offer of prize draw vs. no offer | | | |
| Leigh Brown 1997 | Aware of monthly prize draw of GBP25 gift voucher | No offer of draw | Postal questionnaire response |

Communication strategies

There were 14 retention trials of communication strategies to improve response to postal questionnaires or return of biomedical test kits, or both, in randomised trials. There were 20 trial comparisons (Table 2). Strategies evaluated were: enhanced letters, additional reminders to participants, priority mailing of questionnaires, time of questionnaire administration, telephone contact and reminders to trial sites of upcoming assessments. One trial used a combination of postal communication strategies known as the total design method (TDM) (Sutherland 1996). This included sending letters in a white envelope with a hospital logo and commemorative stamp, a hand‐signed letter on headed notepaper, and a self‐addressed stamped reply envelope. Follow‐up was with a postcard sent after seven days followed by two reminder letters. This was compared with a customary method for postal follow‐up. One trial evaluated the addition of an electronic SMS (short message service) text reminder on the day participants were due to receive their postal questionnaire (Man 2011).

Table 2. Trials evaluating communication strategies

| Trial or trial comparison | Communication strategy | Control arm | Outcome type |
| --- | --- | --- | --- |
| Enhanced letter vs. standard letter | | | |
| Renfroe 2002c | Cover letter signed by physician | Cover letter signed by co‐ordinator | Postal questionnaire response |
| Marson 2007 | Letter explaining the approximate length of time to complete questionnaire | Standard letter | Postal questionnaire response |
| Total design method vs. customary method | | | |
| Sutherland 1996 | Total design method for postal follow‐up | Standard method for postal follow‐up | Postal questionnaire response |
| Priority vs. regular post | | | |
| Renfroe 2002b | Express delivery | Standard delivery | Postal questionnaire response |
| Sharp 2006e | Despatch first‐class stamp | Despatch second‐class stamp | Postal questionnaire response |
| Sharp 2006f | Despatch first‐class stamp | Despatch second‐class stamp | Postal questionnaire response |
| Sharp 2006g | Second‐class return envelope | Free post return envelope | Postal questionnaire response |
| Sharp 2006h | Second‐class return envelope | Free post return envelope | Postal questionnaire response |
| Kenton 2007c | Priority mail | Standard mail | Postal questionnaire response |
| Kenton 2007d | Priority mail | Standard mail | Postal questionnaire response |
| Additional reminder vs. usual follow‐up | | | |
| Ashby 2011 | Electronic reminder | No electronic reminder | Postal questionnaire response |
| MacLennan (unpublished) | Telephone reminder | No telephone reminder | Postal questionnaire response |
| Nakash 2007 | Trial calendar given at recruitment with questionnaire due dates | No calendar | Postal questionnaire response |
| Severi 2011 1 | Text message and fridge magnet, both emphasising social benefits of study participation | Text message reminder sent 3 days after questionnaire | Postal questionnaire response |
| Severi 2011 2 | Telephone reminder from principal investigator | Standard procedures | Return of cotinine samples |
| Man 2011 | SMS text message as follow‐up questionnaire sent out | No SMS text message | Postal questionnaire response |
| Additional trial site reminder vs. usual reminder | | | |
| Land 2007 | Prospective monthly reminder of upcoming assessments to trial sites | No extra reminder to trial sites | Postal questionnaire response |
| Early vs. late administration of questionnaire | | | |
| Renfroe 2002d | Questionnaire sent 2‐3 weeks after last AVID follow‐up visit | Questionnaire sent 1‐4 months after last AVID follow‐up visit | Postal questionnaire response |
| Recorded delivery vs. telephone reminder | | | |
| Tai 1997 | Recorded delivery reminder | Telephone reminder | Postal questionnaire response |
| Addition of telephone follow‐up vs. incentive | | | |
| Couper 2007 | Telephone survey by trained interviewer | Postal questionnaire and USD5 bill | Post and questionnaire response |

AVID: Antiarrhythmics Versus Implantable Defibrillators; SMS: short message service.

Five trials evaluated a combination of communication strategies and incentives to improve retention from randomised trials (Couper 2007; Kenton 2007a‐d; Renfroe 2002a‐d; Sharp 2006a‐h). The communication strategies were: first‐ and second‐class outward post (Kenton 2007a‐d; Renfroe 2002b; Sharp 2006a‐h), stamped and business reply envelopes (Sharp 2006a‐h), letters signed by different study personnel (Renfroe 2002c), letters posted at different times (Renfroe 2002d), text messages (Man 2011; Severi 2011 1), and a telephone survey (Couper 2007).

Questionnaire format

The effect of a change in questionnaire format on response to randomised trial questionnaires was evaluated in eight trials with 10 comparisons (Table 3). Formats evaluated were questionnaire length: short versus long (Dorman 1997; Edwards 2001; McCambridge 2011 1b; McCambridge 2011 2b; Svoboda 2001), long and clear versus short and condensed (Subar 2001), and the order of questions (Letley 2000; McColl 2003 1; McColl 2003 2).

Table 3. Trials evaluating new questionnaire strategies

| Trial or trial comparison | Questionnaire strategy | Control arm | Outcome type |
| --- | --- | --- | --- |
| Short vs. long | | | |
| Dorman 1997 | Short EuroQol | Long SF‐36 questionnaire | Postal questionnaire response |
| Edwards 2001 (unpublished) | 1‐page, 7‐question functional dependence questionnaire | 3‐page, 16‐question functional dependence questionnaire | Postal questionnaire response |
| Svoboda 2001 (unpublished) | 1‐page, 7‐question functional dependence questionnaire | 3‐page, 16‐question functional dependence questionnaire | Postal questionnaire response |
| McCambridge 2011 1b | AUDIT Short + LDQ | APQ | Internet‐based questionnaire response |
| McCambridge 2011 2b | AUDIT Short + LDQ | APQ | Internet‐based questionnaire response |
| Long and clear vs. short and condensed | | | |
| Subar 2001 | DHQ (36‐page food frequency questionnaire) | PLCO (16‐page food frequency questionnaire) | Postal questionnaire response and onsite completion |
| Question order | | | |
| McColl 2003 1 | Asthma condition‐specific questions first followed by generic | Generic questions followed by condition specific | Postal questionnaire response |
| McColl 2003 2 | Angina condition‐specific questions followed by generic | Generic questions followed by condition specific | Postal questionnaire response |
| Letley 2000 (unpublished) | RDQ at front and SF‐36 at back | SF‐36 at front and RDQ at back | Postal questionnaire response |
| Relevance of questionnaire | | | |
| McCambridge 2011 1a | APQ 23 items | CORE‐OM Mental health assessment 23/34 items | Internet‐based questionnaire response |
| McCambridge 2011 2a | AUDIT Short + LDQ | CORE‐OM Mental health assessment 10 items | Internet‐based questionnaire response |

APQ: Alcohol Problems Questionnaire; AUDIT: Alcohol Use Disorders Identification Test; LDQ: Leeds Dependency Questionnaire; PLCO: Prostate, Lung, Colorectal, Ovarian; SF‐36: Short Form 36 item.

Two further included trials evaluated the effect of the relevance of a questionnaire on response (McCambridge 2011 1a; McCambridge 2011 2a). Relevance was defined as assessing alcohol problems rather than mental health in the context of an Internet‐based intervention for hazardous drinkers (McCambridge 2011 1; McCambridge 2011 2).

Behavioural strategies

There were two trials of behavioural strategies used for retention in randomised trials (Chaffin 2009; Cox 2008). Cox 2008 compared motivational workshops versus information sheets. Chaffin 2009 compared self motivation orientation versus standard information in the context of a parenting programme. In this case, the retention trial was run prior to the host trial with the intention of improving retention in the subsequent parenting programme evaluation trial. The analysis was based on the number eligible for inclusion in the primary analyses for the subsequent parenting programme because we do not know the allocation of those who dropped out between first and second randomisations. Complete time‐to‐event data were not available for Chaffin 2009, but, as only two participants were censored in the analysis, this is unlikely to have biased the results.

Case management

One retention trial evaluated the effect of intensive case management procedures on retention of African American male participants in a cancer screening trial (Ford 2006).

Methodology strategies

One included trial compared questionnaire response in an open trial, in which participants knew which treatment they received, versus a blind trial (Avenell 2004).

Studies excluded from analyses

Two eligible trials could not be included in the analysis (Leigh Brown 1997; Letley 2000). Host trial participants in the retention trial by Leigh Brown 1997 were divided into two groups: one randomised, the other determined by preference of the referring primary care practitioner. The author confirmed that participants in the retention trial were from both randomised and non‐randomised groups of the host trial and that these could not be separated.

One recently completed, unpublished trial that is not included in the review examined the effect of newsletters on retention (Mitchell). This trial will be included in the review update.

Excluded studies

See Characteristics of excluded studies table.

We excluded trials because they were part of a non‐randomised host study, were not themselves randomised retention trials, or had data item missingness rather than retention as the primary outcome. Other excluded trials aimed to increase treatment compliance or baseline questionnaire response. We contacted investigators to confirm aspects of eligibility.

Risk of bias in included studies

See Characteristics of included studies.

Allocation

All included retention trials reported that participants were randomly allocated to groups for comparison. Twenty‐four included trials described adequate sequence generation by a computerised random number generator, block randomisation or use of a table of random numbers (Avenell 2004; Bailey 1; Bailey 2; Bowen 2000abc; Chaffin 2009; Cockayne 2005; Cox 2008; Hughes 1989; Kenyon 2005; Khadjesari 2011 1abc; Khadjesari 2011 2; Land 2007; Leigh Brown 1997; Letley 2000; MacLennan; Man 2011; Marson 2007; McCambridge 2011 1; McCambridge 2011 2; Nakash 2007; Renfroe 2002a‐d; Severi 2011 1; Severi 2011 2; Sutherland 1996). There was insufficient information about sequence generation for 10 included trials; all were described as randomised in the retention trial publications (Ashby 2011; Bauer 2004ab; Couper 2007; Dorman 1997; Edwards 2001; Kenton 2007a‐d; Sharp 2006a‐h; Subar 2001; Svoboda 2001; Tai 1997). Five included trials used quasi‐randomisation to allocate participants (Bowen 2000abc; Ford 2006; Gates 2009; McColl 2003 1; McColl 2003 2).

Several methods were used to prevent foreknowledge of allocation: sequence generation by a trial statistician with implementation by a trial manager; sequence generation by an independent researcher, a central randomisation service, or a nurse using a preprogrammed computer; or allocation by sealed envelopes or sequentially numbered packs. Fifteen trials reported both adequate sequence generation and allocation concealment (Avenell 2004; Bailey 1; Bailey 2; Cockayne 2005; Cox 2008; Hughes 1989; Kenyon 2005; Khadjesari 2011 1abc; Khadjesari 2011 2; Letley 2000; MacLennan; Man 2011; McCambridge 2011 1; McCambridge 2011 2; Nakash 2007).

Blinding

Blinding of participants was generally not possible in included trials. For example, it is not possible to blind participants to the following strategies to increase trial retention or response to questionnaires: incentive or offer of incentive, behavioural (Cox 2008), or case management strategies (Ford 2006), different types of communication strategies, or questionnaire format strategies. In a number of trials, authors mentioned that participants were aware of the intervention they were getting but were unaware that this was being evaluated (Bowen 2000abc; Chaffin 2009; Kenton 2007a‐d; Kenyon 2005; Leigh Brown 1997; MacLennan; Marson 2007; McColl 2003 1; McColl 2003 2). For other trials, blinding of participants or study personnel to the outcome or intervention was not reported. For one trial, a judgement about blinding was not applicable because the study evaluated the effect of blind versus open trials on retention (Avenell 2004).

Incomplete outcome data

The primary outcome measure for this review was retention, and this was well reported. We contacted authors for clarification of any exclusions after randomisation if this was unclear from retention trial reports.

Selective reporting

Although retention trial protocols were not available for included trials, the included published and unpublished papers reported all expected outcomes for retention.

Other potential sources of bias

There were few other potential sources of bias identified from reports of included retention trials. For the behavioural trial by Cox 2008, the authors identified that the "walk and swim sessions were not separated according to the behavioural intervention. Participants were asked not to discuss written materials in the practical sessions". Therefore, potential contamination between study groups could have led to biased results.

Effect of methods

1. Incentive strategies

There were 14 trials of incentives giving 19 trial comparisons with 16,253 participants. There was considerable heterogeneity across incentive subgroups (P value < 0.00001) (Analysis 1.1), so we decided not to pool the results for incentives.

Addition of incentive

The three trials (3166 participants) that evaluated the effect of giving monetary incentives to participants showed that the addition of monetary incentives was more effective than no incentive at increasing response to postal questionnaires (RR 1.18; 95% CI 1.09 to 1.28, P value < 0.0001) (Analysis 1.1). A sensitivity analysis excluding the quasi‐randomised trial by Gates 2009 showed that the addition of a monetary incentive remained more effective than none (RR 1.31; 95% CI 1.11 to 1.55, P value = 0.002) (Analysis 2.1).
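For orientation, an effect such as "RR 1.18 (95% CI 1.09 to 1.28)" can be reproduced from the retained/randomised counts in each arm. The function and the counts below are hypothetical illustrations of the standard calculation, not data from these trials:

```python
import math

def risk_ratio_ci(ret_int, n_int, ret_ctl, n_ctl):
    """Risk ratio for response/retention (intervention vs. control)
    with a 95% CI based on the standard error of the log risk ratio."""
    rr = (ret_int / n_int) / (ret_ctl / n_ctl)
    se_log_rr = math.sqrt(1 / ret_int - 1 / n_int
                          + 1 / ret_ctl - 1 / n_ctl)
    lower = math.exp(math.log(rr) - 1.96 * se_log_rr)
    upper = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lower, upper

# Hypothetical arms: 420/500 respond with an incentive vs. 360/500 without.
rr, lo, hi = risk_ratio_ci(420, 500, 360, 500)
```

Here a CI lying entirely above 1 corresponds to the kind of clear benefit reported for monetary incentives above.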

Based on two Internet‐based trials (3613 participants), an offer of a monetary incentive promoted greater return of electronic questionnaires than no offer (RR 1.25; 95% CI 1.14 to 1.38, P value < 0.00001; heterogeneity P value = 0.14) (Analysis 1.1). However, a single trial comparison suggested that an offer of a monetary donation to charity did not increase response to electronic questionnaires (RR 1.02; 95% CI 0.78 to 1.32; P value = 0.90) (Analysis 1.1).

Based on six trials (6322 participants), there was no clear evidence that the addition of non‐monetary incentives improved questionnaire response (RR 1.00; 95% CI 0.98 to 1.02, P value = 0.91), but there was some heterogeneity (P value = 0.02) (Analysis 1.1). A sensitivity analysis excluding the quasi‐randomised trial (Bowen 2000abc) showed a similar effect (RR 1.00; 95% CI 0.93 to 1.08, P value = 0.99) (Analysis 2.1) and heterogeneity (P value = 0.01).

Two trials (1138 participants) evaluating offers of non‐monetary incentives suggested that an offer of a non‐monetary incentive is neither more nor less effective than no offer at improving questionnaire response (RR 0.99; 95% CI 0.95 to 1.03, P value = 0.60) (Analysis 1.1).

In exploratory analyses, the different incentive arms that were combined for the main analysis did not appear to show differential effects (Analysis 3.1).

Addition of monetary incentive to both study arms

Two trials (902 participants) showed that higher value incentives were better at increasing response to postal questionnaires than lower value incentives (RR 1.12; 95% CI 1.04 to 1.22, P value = 0.005), irrespective of how they were given (Analysis 5.1).

Addition of monetary incentive versus offer of a monetary incentive

Two trials (297 participants) provided no evidence that giving a monetary incentive is better than an offer of entry into a prize draw for improving response to postal questionnaires (RR 1.04; 95% CI 0.91 to 1.19, P value = 0.56) (Analysis 6.1).

Addition of an offer of entry into a prize draw versus none

We excluded one trial from the analysis (Leigh Brown 1997); its results showed a higher response in the group offered entry into a prize draw than in the group not offered entry into the draw (70.5% versus 65.8%).

2. Communication strategies

There were 14 trials of communication strategies and 20 comparisons with 9822 participants.

Addition of telephone survey versus monetary incentive plus questionnaire

One trial (700 participants) showed no clear evidence that a telephone survey was either more or less effective than a monetary incentive and a questionnaire for improving response (RR 1.08; 95% CI 0.94 to 1.24, P value = 0.27) (Analysis 4.1).

Enhanced versus standard letters

Results from two trials (2479 participants) showed that an enhanced letter was neither more nor less effective than a standard letter for increasing response to trial postal questionnaires (RR 1.01; 95% CI 0.97 to 1.05, P value = 0.70) (Analysis 7.1).

Total design method versus customary method

Although based on a single trial (226 participants), the total design method (TDM) package was more effective than a customary postal communication method at increasing questionnaire return (RR 1.43; 95% CI 1.22 to 1.67, P value < 0.0001) (Analysis 8.1).

Priority versus regular post

Based on the relevant arms of seven trials (1888 participants), there was no clear evidence that priority post was either more or less effective than regular post at increasing trial questionnaire return (RR 1.02; 95% CI 0.95 to 1.09, P value = 0.55) (Analysis 9.1).

Additional reminder versus usual follow‐up practices

Six trials (3401 participants) evaluated the effect of different additional types of reminders to participants on questionnaire response. There was no evidence that a reminder was either more or less effective than no reminder at improving trial questionnaire response (RR 1.03; 95% CI 0.99 to 1.06, P value = 0.13) (Analysis 10.1).

Additional reminder to trial site versus usual reminder

Based on one cluster randomised trial (272 participants), a monthly reminder to trial sites of upcoming assessment was neither more nor less effective than the usual follow‐up (RR 0.96; 95% CI 0.83 to 1.11, P value = 0.57) (Analysis 11.1).
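For this cluster randomised comparison, the analysis listing (Analysis 11.1) notes an ICC of 0.054, which can be used to account for clustering via the design effect DEFF = 1 + (m − 1) × ICC, where m is the average cluster size. A minimal sketch, assuming purely for illustration an average of 10 participants per trial site (the actual cluster sizes are not reported here):

```python
def design_effect(avg_cluster_size, icc):
    """Design effect for cluster randomisation: DEFF = 1 + (m - 1) * ICC."""
    return 1 + (avg_cluster_size - 1) * icc

# ICC of 0.054 as noted for Analysis 11.1; average cluster size of 10
# is an assumption made only for this illustration.
deff = design_effect(10, 0.054)
effective_n = 272 / deff  # 272 participants randomised in the trial
print(f"DEFF = {deff:.3f}, effective sample size ~ {effective_n:.0f}")
```

Dividing the nominal sample size by the design effect shows why confidence intervals from cluster trials are wider than a naive individual‐level analysis would suggest.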

Early versus late questionnaire administration

Based on one trial (664 participants), there was no clear evidence that sending questionnaires early either increased or decreased response (RR 1.10; 95% CI 0.96 to 1.26, P value = 0.19) (Analysis 12.1).

Recorded delivery versus telephone reminder

One small trial (192 participants) found that recorded delivery was more effective than a telephone reminder (RR 2.08; 95% CI 1.11 to 3.87; P value = 0.02) (Analysis 13.1).

3. New questionnaire strategies

New versus standard questionnaire

Eight trials with 10 comparisons (21,505 participants) evaluated the effect of a new questionnaire format on questionnaire response. Given some heterogeneity between the questionnaire subgroups (P value = 0.11) (Analysis 14.1) and the diversity of the interventions, it did not seem reasonable to pool the results.

Five trials (7277 participants) compared short versus long questionnaires for postal questionnaire response. There was only a suggestion that short questionnaires may be better (RR 1.04; 95% CI 1.00 to 1.08, P value = 0.07) (Analysis 14.1).

Based on one trial (900 participants; Subar 2001), there was no evidence that long and clear questionnaires were more or less effective than shorter condensed questionnaires for increasing trial questionnaire response (RR 1.01; 95% CI 0.95 to 1.07, P value = 0.86) (Analysis 14.1).

Two trials (9435 participants; McColl 2003 1; McColl 2003 2) found no evidence that placing disease/condition‐specific questions before generic questions was more or less effective than the reverse ordering at increasing trial questionnaire response (RR 1.00; 95% CI 0.97 to 1.02, P value = 0.75) (Analysis 14.1). It should be noted that these were quasi‐randomised trials (Analysis 15.1).

One trial in this category (Letley 2000) was not included in the analysis: outcome data were not available for each study arm when this review was submitted, and the overall response rate for this trial was 87%.

In the context of research on reducing alcohol consumption, there was also evidence that more relevant questionnaires (i.e. those relating to alcohol use) increased response rates (RR 1.07; 95% CI 1.01 to 1.14, P value = 0.03).

4. Behavioural/motivational strategies

Two community‐based trials (273 participants; Chaffin 2009; Cox 2008) showed no evidence that the behavioural/motivational strategies used were more or less effective than standard information for retaining trial participants (RR 1.08; 95% CI 0.93 to 1.24, P value = 0.31) (Analysis 16.1).

5. Case management

One trial (703 participants; Ford 2006) evaluated the effect of intensive case management procedures on retention. There is no evidence that intensive case management was either more or less effective than usual follow‐up in the population examined (RR 1.00; 95% CI 0.97 to 1.04, P value = 0.99) (Analysis 17.1).

6. Methodology strategies

One fracture prevention trial (538 participants; Avenell 2004) evaluated the effect of participants knowing their treatment allocation (open trial) compared with participants blind/unaware of their allocation on questionnaire response. Using a trial design where people know which treatment they will receive led to higher questionnaire response rates (RR 1.37; 95% CI 1.16 to 1.63, P value = 0.0003) (Analysis 18.1).

Reporting bias

Although we planned to investigate potential reporting bias, there were too few studies in most strategies to allow formal testing. However, we were able to obtain considerable data from unpublished trials and those published with limited information, reducing the risk of such biases.

Absolute benefits of strategies to improve retention

The absolute benefits of effective strategies on questionnaire response are illustrated in Table 4. The baseline response rates were broadly typical of the response rates seen in trials. The number of questionnaires returned was based on the assumed control arm risk.

Table 4. Gain in number of questionnaires returned per 1000 questionnaires sent

| Strategy to improve retention | RR | 1/RR | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Addition of monetary incentive versus none | 1.18 | 0.847 | 107 | 92 | 76 | 61 | 46 | 31 | 15 |
| Addition of offer of monetary incentive/prize draw versus none | 1.25 | 0.800 | 140 | 120 | 100 | 80 | 60 | 40 | 20 |
| Addition of higher value monetary incentive versus addition of lower amount | 1.12 | 0.890 | 77 | 66 | 55 | 44 | 33 | 22 | 11 |

Column percentages are examples of the proportion of questionnaires returned in the control arm. RR: risk ratio.

Based on a 40% baseline response rate for postal questionnaires, the addition of a monetary incentive was estimated to increase response by 92 questionnaires per 1000 sent (95% CI 50 to 131).

With the addition of an offer of a monetary incentive in an Internet‐based trial, based on a baseline response rate of 30%, trialists could expect an increase of 140 questionnaires per 1000 (95% CI 86 to 193).

For trials hoping to increase the return of postal questionnaires with chlamydia test kits, the number of kits returned was estimated to increase by 33 per 1000 sent when GBP20 was offered as an incentive, rather than GBP10 (95% CI 11 to 54).
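The tabulated gains appear consistent with applying the risk ratio's relative effect to the questionnaires not returned in the control arm. The following sketch of that arithmetic is an inference about how Table 4 was constructed, not a method stated by the authors:

```python
def gain_per_1000(control_response_rate, rr):
    """Extra questionnaires returned per 1000 sent, applying the risk
    ratio's relative effect to the questionnaires NOT returned in the
    control arm: gain = (1000 - returned) * (1 - 1/RR)."""
    not_returned = 1000 * (1 - control_response_rate)
    return round(not_returned * (1 - 1 / rr))

# 40% baseline response, RR 1.18 (95% CI 1.09 to 1.28) for the addition
# of a monetary incentive, as quoted above.
print(gain_per_1000(0.40, 1.18))                           # point estimate
print(gain_per_1000(0.40, 1.09), gain_per_1000(0.40, 1.28))  # CI bounds
```

Running the same function at a 30% baseline with RR 1.25 reproduces the 140 questionnaires per 1000 quoted for the offer of a monetary incentive.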

Discussion

Summary of main results

Thirty‐eight randomised retention trials were included in this review, evaluating six broad types of strategy to increase questionnaire response and retention in randomised trials. In 34 trials, the strategies for increasing response to questionnaires were incentives, communication strategies, new questionnaire formats and methodological interventions. Four trials evaluated strategies to improve retention: participant case management, behavioural and non‐monetary incentive strategies. Trials were conducted across a spectrum of disease areas, countries, healthcare and community settings.

Strategies with the clearest impact on questionnaire response were: addition of monetary incentives compared with no incentive for return of postal questionnaires; addition of an offer of a monetary incentive compared with none for return of electronic questionnaires; and an offer of GBP20 vouchers compared with GBP10 for return of postal questionnaires and biomedical test kits. The evidence was less clear about whether shorter questionnaires increased response compared with longer ones, and about whether, in the context of research on reducing alcohol consumption, more relevant questionnaires increased response.

Neither the addition of a non‐monetary incentive nor an offer of a non‐monetary incentive, compared with no incentive, increased or decreased trial questionnaire response; the same was true of 'enhanced' letters, letters delivered by priority post, and additional reminders compared with standard communication strategies. Questionnaire structure also did not seem to increase response.

Although each was based on the results of a single trial, recorded delivery (proof of posting and an electronic copy of the signature available online) of questionnaires seemed more effective than telephone reminders, and a 'package' of postal communication strategies with reminder letters appeared better than standard procedures. A trial design where participants knew which treatment they were to receive also appeared more effective than a trial design where they were unaware of the treatment they were about to receive for return of questionnaires in a fracture prevention trial. Further evaluation of these strategies may be needed. Posting questionnaires early, questionnaire order, offers of charity donations or sending reminders to trial sites did not improve response.

Many trial outcome measures were collected using questionnaires; therefore, if response rates can be increased, retention will also be improved. No strategy had a clear impact on increasing the number of participants returning to trial sites for follow‐up visits.

Overall completeness and applicability of evidence

The addition of a GBP5 voucher to usual follow‐up procedures was effective for return of postal questionnaires in trials conducted between 2005 and 2009. The more recent unpublished studies by Bailey 1 and Bailey 2 found GBP20 vouchers more effective than GBP10 vouchers for return of postal questionnaires. Splitting the monetary incentive into money given before and after receipt of data could be a more effective strategy to increase questionnaire follow‐up in population groups and trial settings where response is low (e.g. hard‐to‐reach groups such as young healthy men, teenagers or residents in areas of high economic deprivation). This could be a cost‐effective strategy, because money is saved when questionnaires are not returned. The value of the monetary incentive should not be so high as to be perceived as payment for data, but rather as appreciation of the efforts made by participants. Offering a monetary incentive, which was effective in the context of online electronic questionnaires, may increase the number of questionnaires returned by at least as much as giving one upfront, and could be less costly because only those who return data are reimbursed; however, this has only been tested in two Internet‐based trials and needs further evaluation. It would be beneficial for trialists to know which is more effective: an offer of a monetary incentive or an upfront monetary incentive. We did not find any trials that made this direct comparison.

Shorter questionnaires have wide applicability to trials, both postal and Internet‐based, and could be considered as a strategy to increase trial questionnaire response, but there is only a suggestion that they are effective.

Several strategies showed no clear effect. The addition of non‐monetary incentives in the form of pens, lapel pins and certificates of appreciation, or offers of non‐monetary incentives such as study results, did not increase response or retention. A possible explanation might be how these items are valued by participants, or how participants perceive their time is valued. Nevertheless, this result has the potential to reduce trial costs, because the associated savings could be channelled towards monetary incentives, which have been shown to be effective.

The evidence showed that priority post (first‐class post or equivalent) did not increase response. It is an expensive means of communicating with participants, and savings can be made by using regular (second‐class) post instead.

Additional reminders (sent to non‐responders or when questionnaires were posted) and enhanced letters, that is, letters signed by the principal investigator or letters explaining the anticipated time needed to complete a questionnaire, were not effective strategies to increase response. Enhanced letters and different types of additional reminders are used by trialists in current research practice. Too many reminders could be counterproductive to improving retention in randomised trials, and details of the time expected to undertake specific tasks might be informative but off‐putting for participants. Nevertheless, letters and reminders are part of the research process and play a role in participant engagement, especially if there is little face‐to‐face contact or in trials with long intervals between data collection time points.

Several strategies to increase questionnaire response showed only a suggestion of effectiveness and need further evaluation. If participants are well and engaged with a trial, questionnaire length may not affect response rates, because participants may be happy to feed back on their condition in this way; for other conditions, for example, cancers and terminal illnesses, trial participants might prefer shorter questionnaires if their symptoms are problematic. Telephone follow‐up compared with a monetary incentive sent with a questionnaire needs further evaluation, possibly with a cost‐benefit analysis, as both could be expensive in time and human resources. Although appearing very effective, the total design method for postal questionnaires could be labour intensive to implement, expensive, and may no longer be applicable to some participant groups (e.g. young people) or in trials using email, text or the Internet to collect data. Recorded delivery could be useful to ensure that trial follow‐up supplies reach their intended destination (e.g. biomedical specimen kits and questionnaires). Careful planning of the day, date and time of delivery with each participant to avoid inconvenience might be necessary, but this strategy has the potential to be burdensome for trial co‐ordinating centres and trial sites to administer. While trialists are assured that follow‐up supplies are delivered with this strategy, participants might have the added burden of an extra visit to collect supplies from a sorting office, and this could be costly.

The use of open trials to increase questionnaire response can only be applied to trials where blinding is not required and could be counterproductive if a participant or clinician has a treatment preference. Bias associated with loss to follow‐up resulting from these preferences could be avoided in blind trials.

Evaluations of strategies that encourage participants to return to trial sites for follow‐up visits and monitoring were fewer than evaluations of strategies to increase response to postal and electronic questionnaires. Without further evidence, case management and behavioural strategies cannot be recommended for encouraging participants to return.

This review identified no trials from low‐income countries. All included studies were conducted in higher‐income countries. Therefore, the strategies to increase retention identified by this review may not be generalisable to trials conducted in low‐income countries because the interventions identified might not be socially, culturally or economically appropriate for trials run in these regions. The results may also not be applicable to all social groups as we were unable to examine response/retention by social characteristics such as economic disadvantage and social class. Most of the evidence in this review relates to increasing questionnaire follow‐up in randomised trials for either the primary or secondary outcome for the host trial. The diversity between strategies and insufficient numbers in each of these categories meant that we could not do subgroup analyses by trial setting and disease area as planned.

Quality of the evidence

The extent of unpublished trials evaluating retention strategies is unknown; however, this review includes several unpublished trials and we made an effort to capture UK‐based unpublished trials through our survey and research contacts. For some comparisons, results were based on one or two trials in a particular context. The inclusion of any further published and unpublished trials in future updates would improve the precision of the results of this review.

The six types of strategies that we identified targeted retention of trial participants in randomised trials. We believe response and retention were the relevant dichotomous outcomes to be reported for this review. Many other strategies used by trialists in practice to reduce attrition/increase response or retention in trials were not identified by this review (e.g. social support strategies; child care, Loue 2008; family support, De Sousa 2008; reduction in the number of visits, Schulz 2002). Evaluations of trial management strategies are also under‐represented in the review (e.g. evaluations of site‐specific reports, El Khorazaty 2007; levels of contact by the co‐ordinating centre, Senturia 1998; training project staff).

Both published and unpublished included retention trials were fairly well conducted but could be improved. Five of the 39 trials included in the review were quasi‐randomised. The motivation for conducting many of the included retention trials was reactive rather than planned upfront (i.e. when loss to follow‐up became a problem during trial follow‐up, rather than planned prior to host trial commencement).

Most trials used appropriate methods for randomisation, or at least stated that they were randomised. For trials that did not describe their methods well or provide further information, there remains a potential risk of selection bias; however, sensitivity analyses excluding quasi‐randomised trials did not affect the results. Lack of blinding is less of a concern in this context, where motivating participants to provide data or attend clinics is often the target of the intervention and so appropriately influences the outcome. Retention was the outcome and was obtained for all but two trials, so attrition and selective outcome reporting biases are similarly unimportant. Although the retention trials were fairly well conducted, they were often poorly reported, perhaps because many were designed only after loss to follow‐up became a problem in the host trial rather than being preplanned.

Potential biases in the review process

Many words are used to describe loss to follow‐up, for example, attrition, withdrawal and questionnaire non‐response; we included these in our search strategy. We attempted to obtain unpublished trials and data by contacting authors, writing to UK clinical trials units and presenting at national and international conferences. We are confident that we have captured most studies and the spectrum of strategies that have been evaluated to date. It is conceivable, however, that less well‐reported, ongoing or unpublished trials, or trials conducted outside the UK, might have been missed. Most trials used appropriate methods for sequence generation, or at least stated that they were randomised with concealed allocation. There is a small risk that those that did not describe their methods well or provide further information did not use adequate methods for allocation and concealment, which may have biased the results; however, sensitivity analyses excluding quasi‐randomised trials did not affect the results. Blinding is hard to achieve in this context, where motivating participants to provide data or attend clinics is often the target of the interventions and so appropriately influences the outcome.

Agreements and disagreements with other studies or reviews

The strategies that improve retention are, in some cases, the same as or similar to those found to be effective for cohort and cross‐sectional study designs. However, prior to our review, it was not clear which of these strategies could be extrapolated to randomised trials. Successful retention strategies used in other study designs may be effective in trial settings and should be tested. Edwards' review on methods to increase response to postal and electronic questionnaires included 513 trials and identified many strategies to increase response to questionnaires (Edwards 2009). Included trials were embedded in surveys, cohort studies and trials, which may explain some of the heterogeneity in effects seen in Edwards' review and its reliance on the random‐effects model. Unexplained heterogeneity was not a particular problem in this review. Edwards found monetary incentives effective for increasing response to postal questionnaires (Edwards 2009). However, unlike our review, Edwards found that non‐monetary incentives were effective for postal and electronic questionnaires. Other strategies found to be effective by Edwards, in agreement with our review, included recorded delivery of questionnaires and shorter questionnaires, although in our review shorter questionnaires need further evaluation. Edwards also found that use of hand‐written addresses, stamped return envelopes as opposed to franked return envelopes and first‐class outward mailing improved response. Our review found that a 'package' including an enhanced letter incorporating several reminders was effective, but the effectiveness of first‐class/priority mail to increase response in randomised trials was unclear.

Booker's narrative review of methods to increase retention in population‐based cohort studies was based on only 11 randomised trials and no meta‐analysis (Booker 2011). The results suggested that incentives were associated with an increase in retention.

Nakash's systematic review focused on randomised trials of ways to increase response to postal questionnaires in healthcare research (Nakash 2006 (2)). Fifteen trials were included in this meta‐analysis, which found that reminder letters, telephone contact and short questionnaires increased response to postal questionnaires in the context of healthcare research. There was no evidence that incentives were effective. Again, this review was not exclusive to evaluations conducted in randomised trials.

The Edwards review was broad and focused specifically on methods to enhance response to questionnaires and included studies in non‐healthcare settings (Edwards 2009). The reviews by Nakash and Booker focused on retention in specific research areas, health care and cohort studies (Booker 2011; Nakash 2006 (2)). Unlike these reviews, our review focused specifically on a range of strategies evaluated within trials. Therefore, it specifically addressed the question of retention of study participants within randomised trials, which was beyond the scope of the other reviews. Application of these results would depend on trial setting, population, disease area, data collection and follow‐up procedures. Moreover, we identified additional strategies that may improve trial retention, for example, methodological strategies.

This review is the most comprehensive to date on strategies specifically designed to improve retention in randomised trials. We included seven unpublished trials and 18 other trials not included by Edwards (Edwards 2009).

Figures and Tables

Figure 1. Attrition study flow diagram.

Analysis 1.1. Comparison 1 Addition of incentive vs none: main analysis, Outcome 1 Retention.
Analysis 2.1. Comparison 2 Addition of incentive: sensitivity analysis: quasi‐randomised trials removed, Outcome 1 Retention.
Analysis 3.1. Comparison 3 Addition of incentive: separating research arms of non‐factorial trials (three‐/four‐arm trials), Outcome 1 Retention.
Analysis 4.1. Comparison 4 Addition of telephone follow‐up vs incentive, Outcome 1 Retention.
Analysis 5.1. Comparison 5 Addition of monetary incentive to both study arms, Outcome 1 Retention.
Analysis 6.1. Comparison 6 Addition of monetary incentive vs offer of incentive, Outcome 1 Retention.
Analysis 7.1. Comparison 7 Enhanced letter versus standard letter: main analysis, Outcome 1 Retention.
Analysis 8.1. Comparison 8 Communication strategies letter: total design method, Outcome 1 Retention.
Analysis 9.1. Comparison 9 Communication strategies post: main analysis, Outcome 1 Retention.
Analysis 10.1. Comparison 10 Communication strategies: additional reminder vs usual follow‐up: main analysis, Outcome 1 Retention.
Analysis 11.1. Comparison 11 Communication strategies additional reminder to trial site vs usual reminder (ICC 0.054), Outcome 1 Retention.
Analysis 12.1. Comparison 12 Communication strategies: questionnaire administered early vs late, Outcome 1 Retention.
Analysis 13.1. Comparison 13 Communication strategies: type of reminder: main analysis, Outcome 1 Retention.
Analysis 14.1. Comparison 14 Questionnaire strategies: new vs standard questionnaire: main analysis, Outcome 1 Retention.
Analysis 15.1. Comparison 15 Questionnaire strategies: new vs standard questionnaire: sensitivity analysis quasi‐randomised trial McColl, Outcome 1 Retention.
Analysis 16.1. Comparison 16 Behavioural strategies: main analysis, Outcome 1 Retention.
Analysis 17.1. Comparison 17 Case management, Outcome 1 Retention.
Analysis 18.1. Comparison 18 Methodology strategies, Outcome 1 Retention.

Table 1. Trials evaluating incentive strategies

| Trial or trial comparison | Incentive group(s) | Control group | Outcome type |
| --- | --- | --- | --- |
| Addition of incentive vs. none | | | |
| Bauer 2004ab | a) USD10 cheque; b) USD2 cheque (arms combined for main analysis) | No incentive | DNA specimen kit return plus postal questionnaire response |
| Gates 2009 | GBP5 voucher | No incentive | Postal questionnaire response |
| Kenyon 2005 | GBP5 voucher | No incentive | Postal questionnaire response |
| Khadjesari 2011 1ac | a) Offer of GBP5 voucher; c) offer of entry into GBP250 prize draw (groups combined for main analysis) | No incentive | Internet‐based questionnaire response |
| Khadjesari 2011 2 | Offer of GBP10 Amazon.co.uk voucher | No incentive | Internet‐based questionnaire response |
| Bowen 2000abc | a) Certificate; b) pin; c) pin and certificate (groups combined for main analysis) | No incentive | Participant retention |
| Renfroe 2002a | Certificate of appreciation | No certificate of appreciation | Postal questionnaire response |
| Sharp 2006a | Pen | No pen | Postal questionnaire response |
| Sharp 2006b | Pen | No pen | Postal questionnaire response |
| Sharp 2006c | Pen | No pen | Postal questionnaire response |
| Sharp 2006d | Pen | No pen | Postal questionnaire response |
| Cockayne 2005 | Offer of study results | No offer | Postal questionnaire response |
| Hughes 1989 | Offer of free reprint of results | No offer | Postal questionnaire response |
| Khadjesari 2011 1b | Offer of GBP5 charity donation | No offer | Internet‐based questionnaire response |
| Addition of monetary incentive to both groups | | | |
| Bailey 1 unpublished | Offer of GBP20 shopping voucher | Offer of GBP10 shopping voucher | Postal questionnaire response |
| Bailey 2 unpublished | Shopping voucher: GBP10 in advance and GBP10 on data return | Shopping voucher: GBP5 in advance and GBP5 on data return | Questionnaire response and chlamydia kit return |
| Addition of monetary incentive vs. offer of incentive | | | |
| Kenton 2007a | USD2 coin | Draw for USD50 gift voucher | Postal questionnaire response |
| Kenton 2007b | USD2 coin | Draw for USD50 gift voucher | Postal questionnaire response |
| Offer of prize draw vs. no offer | | | |
| Leigh Brown 1997 | Aware of monthly prize draw of GBP25 gift voucher | No offer of draw | Postal questionnaire response |
Table 2. Trials evaluating communication strategies

| Trial or trial comparison | Communication strategy | Control arm | Outcome type |
| --- | --- | --- | --- |
| Enhanced letter vs. standard letter | | | |
| Renfroe 2002c | Cover letter signed by physician | Cover letter signed by co-ordinator | Postal questionnaire response |
| Marson 2007 | Letter explaining the approximate length of time to complete questionnaire | Standard letter | Postal questionnaire response |
| Total design method vs. customary method | | | |
| Sutherland 1996 | Total design method for postal follow-up | Standard method for postal follow-up | Postal questionnaire response |
| Priority vs. regular post | | | |
| Renfroe 2002b | Express delivery | Standard delivery | Postal questionnaire response |
| Sharp 2006e | Despatch first-class stamp | Despatch second-class stamp | Postal questionnaire response |
| Sharp 2006f | Despatch first-class stamp | Despatch second-class stamp | Postal questionnaire response |
| Sharp 2006g | Second-class return envelope | Free post return envelope | Postal questionnaire response |
| Sharp 2006h | Second-class return envelope | Free post return envelope | Postal questionnaire response |
| Kenton 2007c | Priority mail | Standard mail | Postal questionnaire response |
| Kenton 2007d | Priority mail | Standard mail | Postal questionnaire response |
| Additional reminder vs. usual follow-up | | | |
| Ashby 2011 | Electronic reminder | No electronic reminder | Postal questionnaire response |
| MacLennan unpublished | Telephone reminder | No telephone reminder | Postal questionnaire response |
| Nakash 2007 | Trial calendar given at recruitment with questionnaire due dates | No calendar | Postal questionnaire response |
| Severi 2011 1 | Text message and fridge magnet, both emphasising social benefits of study participation | Text message reminder sent 3 days after questionnaire | Postal questionnaire response |
| Severi 2011 2 | Telephone reminder from principal investigator | Standard procedures | Return of cotinine samples |
| Man 2011 | SMS text message as follow-up questionnaire sent out | No SMS text message | Postal questionnaire response |
| Additional trial site reminder vs. usual reminder | | | |
| Land 2007 | Prospective monthly reminder of upcoming assessments to trial sites | No extra reminder to trial sites | Postal questionnaire response |
| Early vs. late administration of questionnaire | | | |
| Renfroe 2002d | Questionnaire sent 2-3 weeks after last AVID follow-up visit | Questionnaire sent 1-4 months after last AVID follow-up visit | Postal questionnaire response |
| Recorded delivery vs. telephone reminder | | | |
| Tai 1997 | Recorded delivery reminder | Telephone reminder | Postal questionnaire response |
| Additional telephone follow-up vs. incentive | | | |
| Couper 2007 | Telephone survey by trained interviewer | Postal questionnaire and USD5 bill | Post and questionnaire response |

AVID: Antiarrhythmics Versus Implantable Defibrillators; SMS: short message service.
Table 3. Trials evaluating new questionnaire strategies

| Trial or trial comparison | Questionnaire strategy | Control arm | Outcome type |
| --- | --- | --- | --- |
| Short vs. long | | | |
| Dorman 1997 | Short EuroQol | Long SF-36 questionnaire | Postal questionnaire response |
| Edwards 2001 unpublished | 1-page, 7-question functional dependence questionnaire | 3-page, 16-question functional dependence questionnaire | Postal questionnaire response |
| Svoboda 2001 unpublished | 1-page, 7-question functional dependence questionnaire | 3-page, 16-question functional dependence questionnaire | Postal questionnaire response |
| McCambridge 2011 1b | AUDIT Short + LDQ | APQ | Internet-based questionnaire response |
| McCambridge 2011 2b | AUDIT Short + LDQ | APQ | Internet-based questionnaire response |
| Long and clear vs. short and condensed | | | |
| Subar 2001 | DHQ (36-page food frequency questionnaire) | PLCO (16-page food frequency questionnaire) | Postal questionnaire response and onsite completion |
| Question order | | | |
| McColl 2003 1 | Asthma condition-specific questions first, followed by generic | Generic questions followed by condition-specific | Postal questionnaire response |
| McColl 2003 2 | Angina condition-specific questions followed by generic | Generic questions followed by condition-specific | Postal questionnaire response |
| Letley 2000 unpublished | RDQ at front and SF-36 at back | SF-36 at front and RDQ at back | Postal questionnaire response |
| Relevance of questionnaire | | | |
| McCambridge 2011 1a | APQ 23 items | CORE-OM mental health assessment 23/34 items | Internet-based questionnaire response |
| McCambridge 2011 2a | AUDIT Short + LDQ | CORE-OM mental health assessment 10 items | Internet-based questionnaire response |

APQ: Alcohol Problems Questionnaire; AUDIT: Alcohol Use Disorders Identification Test; LDQ: Leeds Dependency Questionnaire; PLCO: Prostate, Lung, Colorectal, Ovarian; SF-36: Short Form 36 item.
Table 4. Gain in number of questionnaires returned per 1000 questionnaires sent

| Strategy to improve retention | RR | 1/RR | 30% | 40% | 50% | 60% | 70% | 80% | 90% |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Addition of monetary incentive versus none | 1.18 | 0.847 | 107 | 92 | 76 | 61 | 46 | 31 | 15 |
| Addition of offer of monetary incentive/prize draw versus none | 1.25 | 0.800 | 140 | 120 | 100 | 80 | 60 | 40 | 20 |
| Addition of higher value monetary incentive versus addition of lower amount | 1.12 | 0.890 | 77 | 66 | 55 | 44 | 33 | 22 | 11 |

Percentage columns show example proportions of questionnaires returned in the control arm. RR: risk ratio.
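The gains tabulated above appear to follow from applying the fraction (1 − 1/RR) to the expected non-responders in the control arm; this is an inference from the tabulated values (the review does not state its formula), and the function name below is illustrative:

```python
# Sketch of the calculation that appears to underlie Table 4. Assumption:
# gain per 1000 sent = (1 - 1/RR) x expected control-arm non-responders,
# with 1/RR rounded to 3 d.p. as tabulated. Not a formula stated by the review.

def gain_per_1000(rr: float, control_return: float) -> int:
    """Extra questionnaires returned per 1000 sent, for a retention-strategy
    risk ratio `rr` and a control-arm return proportion `control_return`."""
    inverse_rr = round(1 / rr, 3)              # Table 4 shows 1/RR to 3 d.p.
    non_responders = 1000 * (1 - control_return)
    return round((1 - inverse_rr) * non_responders)

# Reproducing the "offer of monetary incentive/prize draw" row (RR 1.25):
for p in (0.3, 0.5, 0.9):
    print(f"{p:.0%}: {gain_per_1000(1.25, p)}")
```

With RR 1.25 this reproduces the second row exactly (140, 100, 20 at control return proportions of 30%, 50% and 90%); the first row differs slightly at some proportions because of rounding.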
Comparison 1. Addition of incentive vs none: main analysis

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 14 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Addition of monetary incentive | 3 | 3166 | Risk Ratio (M-H, Fixed, 95% CI) | 1.18 [1.09, 1.28] |
| 1.2 Addition of offer of monetary incentive/prize draw | 2 | 3613 | Risk Ratio (M-H, Fixed, 95% CI) | 1.25 [1.14, 1.38] |
| 1.3 Addition of non-monetary incentive | 6 | 6322 | Risk Ratio (M-H, Fixed, 95% CI) | 1.00 [0.98, 1.02] |
| 1.4 Addition of offer of non-monetary incentive | 2 | 1138 | Risk Ratio (M-H, Fixed, 95% CI) | 0.99 [0.95, 1.03] |
| 1.5 Addition of offer of monetary donation to charity | 1 | 815 | Risk Ratio (M-H, Fixed, 95% CI) | 1.02 [0.78, 1.32] |
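The comparisons in this section pool retention trials with the Mantel-Haenszel fixed-effect risk ratio. A minimal sketch of that pooling, using hypothetical retention counts rather than data from the review:

```python
# Minimal sketch of the Mantel-Haenszel fixed-effect pooled risk ratio used
# throughout these comparisons. The counts below are hypothetical examples,
# not data extracted from the review.

def mh_pooled_rr(strata):
    """strata: list of (retained_int, n_int, retained_ctl, n_ctl) tuples,
    one per retention trial. Returns the M-H pooled risk ratio."""
    numerator = sum(a * n_c / (n_i + n_c) for a, n_i, c, n_c in strata)
    denominator = sum(c * n_i / (n_i + n_c) for a, n_i, c, n_c in strata)
    return numerator / denominator

trials = [
    (80, 100, 60, 100),  # hypothetical trial 1: 80/100 vs 60/100 retained
    (40, 50, 30, 50),    # hypothetical trial 2: 40/50 vs 30/50 retained
]
print(round(mh_pooled_rr(trials), 3))  # both strata have RR 4/3, so pooled RR is 1.333
```

Each stratum contributes to the pooled numerator and denominator in proportion to its size, so larger trials carry more weight without any explicit weighting step.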
Comparison 2. Addition of incentive: sensitivity analysis: quasi-randomised trials removed

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 7 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Addition of monetary incentive | 2 | 1022 | Risk Ratio (M-H, Fixed, 95% CI) | 1.31 [1.11, 1.55] |
| 1.2 Addition of non-monetary incentive | 5 | 1594 | Risk Ratio (M-H, Fixed, 95% CI) | 1.00 [0.93, 1.08] |
Comparison 3. Addition of incentive: separating research arms of non-factorial trials (three-/four-arm trials)

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 14 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Addition of monetary incentive | 3 | 3066 | Risk Ratio (M-H, Fixed, 95% CI) | 1.17 [1.09, 1.27] |
| 1.2 Offer of monetary incentive | 3 | 4224 | Risk Ratio (M-H, Fixed, 95% CI) | 1.24 [1.13, 1.37] |
| 1.3 Addition of non-monetary incentive | 8 | 10793 | Risk Ratio (M-H, Fixed, 95% CI) | 1.00 [0.98, 1.01] |
Comparison 4. Addition of telephone follow-up vs incentive

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 1 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Telephone survey vs. monetary incentive and questionnaire | 1 | 700 | Risk Ratio (M-H, Fixed, 95% CI) | 1.08 [0.94, 1.24] |
Comparison 5. Addition of monetary incentive to both study arms

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 2 | 902 | Risk Ratio (M-H, Fixed, 95% CI) | 1.12 [1.04, 1.22] |
| 1.1 Addition of GBP10 plus offer of GBP10 vs. addition of GBP5 plus offer of GBP5 | 1 | 485 | Risk Ratio (M-H, Fixed, 95% CI) | 1.16 [1.04, 1.30] |
| 1.2 Addition of GBP20 voucher offer vs. addition of GBP10 voucher offer | 1 | 417 | Risk Ratio (M-H, Fixed, 95% CI) | 1.08 [0.97, 1.21] |
Comparison 6. Addition of monetary incentive vs offer of incentive

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 2 | 297 | Risk Ratio (M-H, Fixed, 95% CI) | 1.04 [0.91, 1.19] |
| 1.1 Addition of monetary incentive vs. offer of entry into prize draw | 2 | 297 | Risk Ratio (M-H, Fixed, 95% CI) | 1.04 [0.91, 1.19] |
Comparison 7. Enhanced letter versus standard letter: main analysis

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 2 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Enhanced letter vs. standard letter | 2 | 2479 | Risk Ratio (M-H, Fixed, 95% CI) | 1.01 [0.97, 1.05] |
Comparison 8. Communication strategies letter: total design method

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 1 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Total design method for postal questionnaires vs. customary method | 1 | 226 | Risk Ratio (M-H, Fixed, 95% CI) | 1.43 [1.22, 1.67] |
Comparison 9. Communication strategies post: main analysis

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 7 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Priority vs. regular post | 7 | 1888 | Risk Ratio (M-H, Fixed, 95% CI) | 1.02 [0.95, 1.09] |
Comparison 10. Communication strategies: additional reminder vs usual follow-up: main analysis

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 6 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Additional reminder vs. usual follow-up procedures | 6 | 3401 | Risk Ratio (M-H, Fixed, 95% CI) | 1.03 [0.99, 1.06] |
Comparison 11. Communication strategies additional reminder to trial site vs usual reminder (ICC 0.054)

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 1 | | Risk Ratio (Fixed, 95% CI) | Subtotals only |
| 1.1 Monthly reminder of upcoming assessments to trial site vs. usual reminders | 1 | 272 | Risk Ratio (Fixed, 95% CI) | 0.96 [0.83, 1.11] |
Comparison 12. Communication strategies: questionnaire administered early vs late

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 1 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Early vs. late administration | 1 | 664 | Risk Ratio (M-H, Fixed, 95% CI) | 1.10 [0.96, 1.26] |
Comparison 13. Communication strategies: type of reminder: main analysis

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 1 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Recorded delivery vs. telephone reminder | 1 | 192 | Risk Ratio (M-H, Fixed, 95% CI) | 2.08 [1.11, 3.87] |
Comparison 14. Questionnaire strategies: new vs standard questionnaire: main analysis

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 10 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Short vs. long questionnaire | 5 | 7277 | Risk Ratio (M-H, Fixed, 95% CI) | 1.04 [1.00, 1.08] |
| 1.2 Long and clear vs. short and condensed questionnaires | 1 | 900 | Risk Ratio (M-H, Fixed, 95% CI) | 1.01 [0.95, 1.07] |
| 1.3 Question order: condition first vs. generic first questions | 2 | 9435 | Risk Ratio (M-H, Fixed, 95% CI) | 1.00 [0.97, 1.02] |
| 1.4 Questionnaire: relevant vs. less relevant to condition | 2 | 3893 | Risk Ratio (M-H, Fixed, 95% CI) | 1.07 [1.01, 1.14] |
Comparison 15. Questionnaire strategies: new vs standard questionnaire: sensitivity analysis quasi-randomised trial McColl

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 8 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Short vs. long questionnaire | 5 | 7277 | Risk Ratio (M-H, Fixed, 95% CI) | 1.04 [1.00, 1.08] |
| 1.2 Long and clear vs. short and condensed questionnaires | 1 | 900 | Risk Ratio (M-H, Fixed, 95% CI) | 1.01 [0.95, 1.07] |
| 1.3 Questionnaire: relevant vs. less relevant to condition | 2 | 3893 | Risk Ratio (M-H, Fixed, 95% CI) | 1.07 [1.01, 1.14] |
Comparison 16. Behavioural strategies: main analysis

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 2 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Motivation vs. information | 2 | 273 | Risk Ratio (M-H, Fixed, 95% CI) | 1.08 [0.93, 1.24] |
Comparison 17. Case management

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 1 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Case management vs. usual follow-up | 1 | 703 | Risk Ratio (M-H, Fixed, 95% CI) | 1.00 [0.97, 1.04] |
Comparison 18. Methodology strategies

| Outcome or subgroup title | No. of studies | No. of participants | Statistical method | Effect size |
| --- | --- | --- | --- | --- |
| 1 Retention | 1 | | Risk Ratio (M-H, Fixed, 95% CI) | Subtotals only |
| 1.1 Open vs. blind trial design | 1 | 538 | Risk Ratio (M-H, Fixed, 95% CI) | 1.37 [1.16, 1.63] |