Randomisation
Randomisation eliminates investigator bias in allocating patients to different treatment arms.
The most effective randomisation is done centrally by a computer. Non-randomised or inadequately randomised trials tend to overestimate treatment effects [1]. Randomisation methods should be clearly described in the ‘methods’ section of a clinical trial.
Many systematic reviews and meta-analyses include only randomised trials, to avoid the bias that methodologically poor studies can introduce.
Blinding
Blinding can avoid bias due to patient or observer knowledge of treatment allocation. Blinding can be achieved through use of a placebo or sham treatment. In a single-blind study, only the patient (or only the observer) is unaware of the treatment allocation.
A double-blind study design is preferable. If the observer or investigator who is measuring treatment effect, or interpreting a diagnostic investigation, is unaware of the treatment allocation then they are unable to influence a study result through a conscious or unconscious expectation of treatment effect.
Some studies cannot be blinded, or blinding may be unethical. For example, in a clinical trial of chemotherapy versus best supportive care in a population with a previously untreatable cancer, it will be impossible to blind patients and doctors to the obvious side effects of the active treatment.
Adequate sample size
An adequate sample size decreases the likelihood that the findings are due to the play of chance. Results of smaller clinical trials are likely to deviate further from the true treatment effect [2].
A funnel plot (see Glossary) exploits this effect in meta-analyses to suggest whether small, negative trials are likely to have been withheld from publication.
Published reports should include a paragraph on the sample size calculation, describing α (the accepted risk of a false-positive result), β (the accepted risk of a false-negative result), the anticipated effect of treatment on the primary endpoint, and the proposed sample size.
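The arithmetic behind such a calculation can be sketched for the common case of comparing two proportions. This is a minimal illustration using the standard normal-approximation formula, not a substitute for a statistician's calculation; the event rates chosen below are hypothetical.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a difference between
    two proportions (two-sided alpha, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g. 0.84 for 80% power
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = abs(p_control - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical example: to detect a fall in event rate from 30% to 20%
# with alpha = 0.05 and 80% power, roughly 290 patients per arm are needed.
n = sample_size_per_arm(0.30, 0.20)
```

Note how sensitive the result is to the anticipated effect: halving the expected difference roughly quadruples the required sample size, which is why the anticipated treatment effect must be stated in the report.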
Validated outcome measures
Validated outcome measures are important to avoid bias due to incorrect interpretation of change in an endpoint.
Outcomes like survival, or rate of myocardial infarction, may seem easy to define. However, check whether survival included all-cause mortality or only that due to the disease in question. Check that clinical endpoints such as myocardial infarction have been defined in the study by a set of agreed criteria.
Patient-rated outcome measures such as quality of life questionnaires should be statistically validated, and validation referenced, unless the study has included a statistical validation as part of the design.
Correct statistical analysis
A detailed understanding of biostatistics is beyond the scope of this site. However, reporting of clinical trial results should concentrate on the endpoints defined before the study began. Post-hoc, exploratory analyses should be regarded with caution.
Remember that about 1 in 20 statistical analyses will be significant at p = 0.05 through the play of chance alone. A more stringent threshold, such as p ≤ 0.01, may be set for significance where the researchers have made multiple comparisons.
Intention to treat analysis
‘Intention to treat’ means that all patients have been analysed for efficacy endpoints in the arms to which they were allocated. This is ‘best practice’ for randomised clinical trials. However, side effects (toxicities) should be analysed according to the treatment actually received by the patient.
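The distinction between the two groupings can be shown with a small sketch. The patient records below are entirely hypothetical; patients 2 and 4 crossed over after randomisation, so the efficacy and toxicity analyses group them differently.

```python
# Hypothetical records: 'allocated' is the randomised arm,
# 'received' is the treatment the patient actually got.
patients = [
    {"id": 1, "allocated": "drug",    "received": "drug"},
    {"id": 2, "allocated": "drug",    "received": "control"},  # crossed over
    {"id": 3, "allocated": "control", "received": "control"},
    {"id": 4, "allocated": "control", "received": "drug"},     # crossed over
]

def group_by(records, key):
    """Group patient records by the given field."""
    groups = {}
    for record in records:
        groups.setdefault(record[key], []).append(record)
    return groups

# Intention to treat: efficacy endpoints analysed by allocated arm,
# so patient 2 stays in the 'drug' group despite never receiving it.
itt = group_by(patients, "allocated")

# Toxicity: analysed by treatment actually received,
# so patient 4 is counted in the 'drug' group for side effects.
as_treated = group_by(patients, "received")
```

Analysing efficacy by treatment received instead would discard the protection randomisation provides, because the patients who cross over are rarely a random subset.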
The primary endpoint
There should always be an important, relevant, and valid primary endpoint. The sample size is usually calculated to identify clinically and statistically significant changes in the primary endpoint.
A surrogate endpoint
If a surrogate endpoint is used, it should be strongly and consistently associated with the important clinical endpoint.
The most appropriate control arm
Ensure that the most appropriate control arm or gold standard test has been used.
1. Schulz KF, Chalmers I, Hayes RJ, Altman DG. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 1995; 273: 408-12.
2. Moore RA, Gavaghan D, Tramèr MR, Collins SL, McQuay HJ. Size is everything - large amounts of information are needed to overcome random effects in estimating direction and magnitude of treatment effects. Pain 1998; 78: 217-20.