Description
Consider the examples of research questions below:
Do renal transplant nurse coordinators’ responsibilities influence live donor rates?
How, if at all, are the timing and location of suicide prevention appointments linked to veterans’ suicide rates?
(“Examples of Research Questions | PhD | School of Nursing | Vanderbilt University,” n.d.)
Reference
Examples of Research Questions | PhD | School of Nursing | Vanderbilt University. (n.d.). Retrieved from Vanderbilt School of Nursing
Choose one of the two sample questions above and apply what you learned in the readings below to explain:
1. The process of developing a researchable question.
2. What type of background evidence and defined problem served as the basis for the researchable question.
3. What will happen after the question is formulated.
“Conducting Health Services Research” from the South Sudan Medical Journal
Performing a Literature Review
Health Service Research: Scope and Significance
Formulating a Researchable Question: A Critical Step for Facilitating Good Clinical Research
Boxplots
Injury, Int. J. Care Injured 41S (2010) S3–S6
Asking good clinical research questions and choosing the right study design
P. Bragge *
The NET Program: Neurotrauma Evidence Translation, National Trauma Research Institute & Monash University, Level 6, 99 Commercial Road, Melbourne, VIC 3004, Australia
Keywords: Research question development; Research design; Biomedical research

Abstract
Clinicians and researchers seek answers to clinical research questions, primarily by accessing the results
of clinical research studies. This paper moves the focus of research enquiry from getting answers to
developing good clinical research questions. Using worked examples, the steps involved in refining
questions drawn from various sources to create ‘answerable’ clinical research questions using the ‘PICO’
principle are described. Issues to consider in prioritising clinical research questions are also identified.
Theoretical and practical considerations involved in choosing the right study design for a clinical
research question are then discussed using the worked examples. These include:

- Categorisation of questions according to their central clinical issues;
- Use of preliminary literature searching to identify existing research and further refine questions;
- Identifying whether a quantitative or qualitative research paradigm is best suited to a research question;
- Hierarchies of evidence that rank study designs and how they vary according to central clinical issues;
- Other factors influencing study design selection.
© 2010 Elsevier Ltd. All rights reserved.
What this topic is about
1. Asking good questions:
a. Sources and examples of questions.
b. Which questions should be pursued?
c. What is an ‘answerable’ question?
2. Choosing the right study design:
a. What is the question about?
b. Has the question been answered?
c. What research approach is appropriate?
Common problems and challenges
1. Researchers and clinicians encounter a large number and range
of research questions;
2. Prioritising these questions can be challenging;
3. These questions are not often in a form that is ‘answerable’ from
a research perspective;
4. Inadequate consideration of the meaning, structure and
intention of research questions can have serious impacts on
the subsequent research process.
Tips for researchers
Before embarking on a research project:
1. Prioritise questions from various sources;
2. Specify, refine and structure questions so that they are
answerable using the PICO principle;
3. Determine the central clinical issue covered by the question;
4. Use literature searching to establish how the question has been
addressed and if necessary, refine the question;
5. Match the question to the appropriate research paradigm and
study design, with consideration of resource, feasibility, ethical
and topic-specific issues.
Introduction
“Judge of a man by his questions rather than by his answers” (Voltaire, French author, humanist, rationalist, & satirist; 1694–1778).4
Users of medical research, especially clinical practitioners,
focus primarily on accessing results of clinical studies to answer
clinical questions, most frequently “Does this intervention work?”
Comparatively little attention is paid to the questions themselves.
Yet failure to think carefully about the meaning, structure and
intention of research questions can have adverse effects on every
subsequent step of the research process, potentially compromising
the answers. The fundamental purpose of asking good questions is
to match these to an appropriate and feasible study design.
This paper outlines principles and strategies for asking good
clinical research questions and identifying appropriate research
paradigms and designs.
Asking good questions

Sources and examples of questions

Clinicians and researchers encounter a range of external and self-generated questions on a daily basis. As outlined in Table 1, the focus and nature of these questions varies according to the perspective of the stakeholder. Patients focus on issues of most relevance to their specific situation, such as relief of symptoms; the clinician or researcher considers broader issues, for example choosing from a range of intervention options; colleagues and funders seek justification of interventions and funding allocation, respectively.

Table 1. Sources and examples of clinical questions.

- Patients: Can I return to work after my brain injury rehabilitation? How long will this pain last?
- Own clinical/research experience: What is the best way to prevent and manage spasticity in Traumatic Brain Injury (TBI) patients? What is the impact of Spinal Cord Injury (SCI) on patients and their families?
- Colleagues: Why did you do a CT instead of an MRI for this TBI patient?
- Funders: Why should we fund physiotherapy for patients following discharge from SCI rehabilitation?
Which questions should be pursued?
Because all of the above perspectives and questions are equally
valid and important, the task of prioritising these is challenging for
clinicians and researchers. This task is influenced by a range of
factors including time and resource limitations, clinical urgency,
organisational or local research agendas and funding sources.
Straus et al.6 have identified these factors in the following five
question ‘filters’:
1. Importance of question to the patient’s biologic, psychologic or
sociologic well-being.
2. Relevance of question to you/your learners’ knowledge needs.
3. Feasibility of answering question in the time available.
4. Level of your/your learner’s/your patient’s interest in question.
5. Likelihood of question recurring in your practice.
Consideration of these can be helpful in identifying which
questions to pursue.
What is an ‘answerable’ question?
An ‘answerable’ question in research terms is one which seeks
specific knowledge, is framed to facilitate literature searching and
therefore, follows a semi-standardised structure.6,3
The many questions encountered by researchers and clinicians are often not ‘answerable’ in research terms. Generating an answerable clinical research question often involves a degree of specification, or ‘funnelling’, whereby a large topic is broken down
into a smaller, more manageable topic or topics. Many researchers’
first experience of the research process is a quest to ‘solve all the
problems of the world’, followed by consultation with supervisors
or senior researchers which results in substantial focusing of broad
initial ideas. This process is critical, as a methodological approach
used to address a question that is too broad may lack rigor.
Furthermore, lack of focus in a research question creates
inefficiencies in the research process itself. It is extremely difficult
to recalibrate and reframe a research question once the process of
seeking an answer is underway.
‘Answerable’ clinical research questions have four essential
‘PICO’ components6:
1. P: Patient and/or problem;
2. I: Intervention (or exposure, diagnostic test, prognostic factor,
etc.);
3. C: Comparison Intervention (if relevant);
4. O: Outcome.
The general format of a ‘PICO’ question is: “In [Population], what is the effect of [Intervention] on [Outcome], compared with [Comparison Intervention]?”
The ‘PICO’ question components are framed with ‘Intervention’
questions in mind, but several question types can follow a similar
format. The key principle of the ‘PICO’ approach is that important
components of the question are identified and defined or specified.
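As an illustration of this template (a minimal sketch; the paper itself contains no code, and the class and field names below are invented), the four PICO components can be captured as structured fields and rendered into the generic question format:

```python
# Illustrative sketch only, not from the paper. Represents the four PICO
# components and renders the generic template "In [Population], what is
# the effect of [Intervention] on [Outcome], compared with [Comparison]?"
from dataclasses import dataclass

@dataclass
class PicoQuestion:
    population: str    # P: patient and/or problem
    intervention: str  # I: intervention (or exposure, diagnostic test, etc.)
    comparison: str    # C: comparison intervention, if relevant
    outcome: str       # O: outcome

    def render(self) -> str:
        return (f"In {self.population}, what is the effect of "
                f"{self.intervention} on {self.outcome}, "
                f"compared with {self.comparison}?")

# One of the paper's worked examples, expressed as PICO elements:
question = PicoQuestion(
    population="patients with severe TBI",
    intervention="casting",
    comparison="pharmacological management",
    outcome="spasticity",
)
print(question.render())
# In patients with severe TBI, what is the effect of casting on
# spasticity, compared with pharmacological management?
```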
Table 2 shows how using the PICO approach facilitates ‘drilling down’ of the questions identified earlier (Table 1). In the first two examples, interventions are identified; in the second, a measurable outcome is also specified; in the third, the ‘intervention’ is identified as diagnostic rather than therapeutic, which has important implications for study design.
There are some cases in which departure from the ‘generic’ PICO
format is warranted. For example, an answerable version of the
question “Can I return to work after my brain injury rehabilitation?” may be: “What prognostic factors influence return to work in patients following TBI rehabilitation?” In other cases, the original question is sufficiently specific that no modifications are required, such as “What is the impact of SCI on patients and their families?”
Although the final question is not strictly in the ‘PICO’ format in
these examples, the key principles of identifying and specifying the
important question elements have been considered.
Table 2. Use of the ‘PICO’ principle to convert clinical questions to answerable clinical research questions.

- Original question: What is the best way to prevent and manage spasticity in TBI patients?
  Answerable clinical research question: In patients with severe TBI, what is the effect of casting on spasticity, compared with pharmacological management?
- Original question: Why should we fund physiotherapy for patients following discharge from SCI rehabilitation?
  Answerable clinical research question: In patients following SCI rehabilitation, what is the effect of community-based physiotherapy on functional status, compared with standard care?
- Original question: Why did you do a CT instead of an MRI for this TBI patient?
  Answerable clinical research question: In patients with suspected TBI, what is the diagnostic value of CT, compared with MRI?

Choosing the right study design

What is the question about?

The development of an answerable clinical research question using the ‘PICO’ principles facilitates the important process of categorising the question according to its central clinical issue,6 as illustrated in Table 3. This table shows that therapeutic interventions are only one of a number of clinical issues that can be addressed by clinical research questions. There are many other categories in addition to those identified in Table 3, including aetiology, prevention and differential diagnosis.6 This process of categorisation is an important precursor to considerations of study design.

Table 3. Categorisation of clinical research questions according to central clinical issues.6

- In patients with severe TBI, what is the effect of casting on spasticity, compared with pharmacological management?
  Category: Therapy (selecting treatments that are effective and worthwhile)
- In patients with suspected TBI, what is the diagnostic value of CT, compared with MRI?
  Category: Diagnostic tests (selecting diagnostic tests with acceptable precision, safety, expense, etc.)
- What prognostic factors influence return to work in patients following TBI rehabilitation?
  Category: Prognosis (estimating likely clinical course and anticipating complications)
- What is the impact of SCI on patients and their families?
  Category: Experience and meaning (empathy and understanding of patient situations)

Has the question been answered?

Once the key elements of the question have been specified and the broad question category identified, it is important to identify how this or similar questions have been addressed by existing published research. A systematic database search of appropriate major medical databases can:

- Identify how many primary and secondary studies address the question;
- Identify PICO elements and definitions;
- Help to determine the feasibility of answering the question using primary or secondary research;
- In doing so, focus and refine the question.

This continuation of the ‘funnelling’ of the question may result in refinement or alteration of some PICO elements. For example, a search for articles addressing the question “In patients following SCI rehabilitation, what is the effect of community-based physiotherapy on functional status, compared with standard care?” could raise the following issues:

- Existing literature may focus on specific subgroups of SCI such as quadriplegics or paraplegics, necessitating population refinement;
- Outcomes other than function may be more widely reported in the literature, prompting consideration of whether function is the most appropriate outcome;
- Particular study designs such as Randomised Controlled Trials (RCTs) may not be represented in relevant literature, raising questions of feasibility or ethical limitations to using such designs;
- There may be a large body of literature addressing this question but no systematic review, in which case a systematic review may be more useful than another primary study.

Many clinicians and researchers, particularly those not engaged in evidence-based medicine or systematic reviewing, baulk at the notion of spending their limited time performing an in-depth literature search at the question development stage. However, an investment of time at this point in the research process more than offsets the potential time and resources wasted in pursuing an inappropriate question, or one that has been comprehensively addressed already.

What research approach is appropriate?

There are two broad research paradigms: quantitative and qualitative. Most biomedical studies are quantitative; that is, numerical data is collected and analysed. However, numbers and statistics are not always the most appropriate approach to a clinical research question. Where research questions pertain to subjective phenomena such as feelings, attitudes and emotional responses, a qualitative research paradigm should be used. Qualitative research emphasises in-depth exploration and description, rather than numerical measurement, of variables.8 This results in a rich and deep understanding of the topic under study.2

Qualitative and quantitative research paradigms have distinct methodological underpinnings that influence every aspect of study conduct including sampling, data collection and data analysis. It is therefore critical to match the research paradigm to the clinical research question prior to more in-depth consideration of study design (as described below) to ensure that the eventual study results are valid and useful. Such considerations apply to both primary and secondary (literature review) research.

Table 4 summarises key differences between qualitative and quantitative research approaches using two clinical research questions described earlier. An in-depth description of these differences is beyond the scope of this paper.

Table 4. Example of quantitative and qualitative clinical research questions.

- Clinical research question: In patients following SCI rehabilitation, what is the effect of community-based physiotherapy on functional status [Functional Independence Measure; FIM score], compared with standard care?
  Data: Numerical; Approach: Quantitative; Design example: RCT; Analysis example: Statistical
- Clinical research question: What is the impact [emotional responses, attitudes] of SCI on patients and their families?
  Data: Non-numerical; Approach: Qualitative; Design example: Focus group; Analysis example: Qualitative content analysis

What study design is appropriate?

There are numerous quantitative and qualitative study designs. Because most biomedical study designs are quantitative, this section will focus on the quantitative research paradigm.
The most appropriate quantitative study design for a given clinical
research question is dependent upon the nature of the question being
asked. As discussed earlier, questions pertain to a variety of central
clinical issues such as therapy, diagnosis and prognosis (see Table 3).
Each of these issues belongs in a distinct quantitative research
category, for which a range of study designs is possible.
Study designs are often ranked from most to least robust in a ‘hierarchy of evidence’. For the central clinical issue ‘therapy’ identified in Table 3, the hierarchy of evidence for this research category ranks a systematic review of RCTs highest, followed by RCT (the highest ranked primary study), Pseudo-RCT, Non-Randomised Controlled study, and Case Series designs.1 Importantly, hierarchies of evidence differ according to research
category. For example, if the central clinical issue is ‘prognosis’
(Table 3), a Prospective Cohort Study – not an RCT – is the highest
ranked primary study design for this research category.1 There are
many published ‘hierarchies of evidence’ with varying study
design descriptions and categories, predominantly dealing with
‘therapy’ or intervention-based research. However, a good example of a hierarchy that illustrates the principle of how study design
varies according to research category is the hierarchy of the
Australian National Health and Medical Research Council
(NHMRC).1 This document also has extensive explanatory notes.
The choice of study design is influenced by a range of factors
other than ranking in a hierarchy of evidence. These include
resources (staff, infrastructure, time), feasibility and ethical
considerations. In some cases, specifics of the population,
condition or intervention under study also influence study design.
For example, the generally low number of SCI patients at any one
clinical centre has necessitated the creation of networks for
multicentre studies. Furthermore, the heterogeneity of SCI as a
condition leads to consideration of further issues such as
specificity and stratification.7
Conclusion
This paper has examined the issue of question development by
considering two key principles: asking good questions and
choosing the right study design. Identification and consideration
of these principles is a critical first step in the research process.
Giving careful thought to these issues can substantially focus
emerging questions and aid in the determination of how they may
be best addressed in research terms.
However, the nature of human enquiry, combined with the
complexity of medicine, is such that no matter how well refined
and structured a clinical research question is, and how comprehensively it has been answered by a single study:
“The outcome of any serious research can only be to make two questions grow where only one grew before” (Thorstein Veblen, US economist & social philosopher; 1857–1929).5
Disclosure statement
The author has no conflicts of interest to declare in relation to
this paper.
References
1. Coleman K, Grimmer-Somers K, Hillier S, et al. NHMRC additional levels of
evidence and grade for recommendations for developers of guidelines: Stage 2
consultation. NHMRC (National Health and Medical Research Council); 2008.
Available at http://www.nhmrc.gov.au/guidelines/consult/consultations/add_levels_grades_dev_guidelines2.htm [Accessed November 12, 2009].
2. Giacomini MK, Cook DJ. Users’ guides to the medical literature. XXIII. Qualitative
research in health care B. What are the results and how do they help me care for my
patients? Evidence-Based Medicine Working Group. JAMA 2000;284:478–82.
3. Flemming K. Asking answerable questions. Evidence-Based Nursing 1998;1:36–7.
4. QuotationsPage.com. Voltaire Quote; 2007. Available at http://www.quotationspage.com/quote/28697.html [Accessed November 12, 2009].
5. QuotationsPage.com. Veblen quote; 2007. Available at http://www.quotationspage.com/quote/32057.html [Accessed November 12, 2009].
6. Straus S, Richardson W, Glasziou P, Haynes R. Evidence-based medicine: how to
practice and teach EBM. Edinburgh: Elsevier; 2005, 16, 20, 21.
7. Tator CH. Review of treatment trials in human spinal cord injury: issues,
difficulties, and recommendations. Neurosurgery 2006;59:957–82 [discussion
82–7].
8. Thomas RM. Blending qualitative and quantitative research methods in theses
and dissertations, 1st ed., Thousand Oaks: Corwin Press, Inc.; 2003.
Unit Five: Asking an Answerable Question
Learning Objectives
To understand the importance of formulating an answerable question
To be able to formulate an answerable question
Reviewers should seek to answer two questions within their review:
1. Does the intervention work (not work)?
2. How does the intervention work?
Importance of getting the question right
A clearly framed question will guide:
the reader
o in their initial assessment of relevance
the reviewer on how to
o collect studies
o check whether studies are eligible
o conduct the analysis.
Therefore, it is important that the question is formulated before beginning the review. Post‐hoc
questions are also more susceptible to bias than those questions determined a priori. Although
changes to the review question may be required, the reasons for making the changes should be
clearly documented in the completed review.
Components of an answerable question (PICO)
The formula to creating an answerable question is following PICO; Population, Intervention,
Comparison, Outcome. It is also worthwhile at this stage to determine the types of study designs to
include in the review; PICOT.
Qualitative research can contribute to framing the review question (eg. selecting interventions and
outcomes of interest to participants). The Advisory Group can also provide valuable assistance with
this task.
Population(s)
In health promotion and public health this may include populations, communities or individuals.
Consider whether there is value in limiting the population (eg. street youth, problem drinkers). These
groups are often under‐studied and may be different in all sorts of important respects from study
populations usually included in health promotion and public health reviews.
Reviews may also be limited to the effects of the interventions on disadvantaged populations in order
to investigate the effect of the interventions on reducing inequalities. Further information on reviews
addressing inequalities is provided below.
Intervention(s)
As described earlier, reviewers may choose to lump similar interventions in a review, or split the
review by addressing a specific intervention. Reviewers may also consider ‘approaches’ to health
promotion rather than topic‐driven interventions, for example, peer‐led strategies for changing
behaviour. In addition, reviewers may want to limit the review by focusing on the effectiveness of a
particular type of theory‐based intervention (eg. Transtheoretical model) for achieving certain health
outcomes (eg. smoking cessation).
Comparison(s)
It is important to specify the comparison intervention for the review. Comparison interventions may
be no intervention, another intervention or standard care/practice. The choice of comparison or
control has large implications for the interpretation of results. A question addressing one intervention
versus no intervention is a different question than one comparing one intervention versus standard
care/practice.
Example: DiCenso A, Guyatt G, Willan A, Griffith L. Interventions to reduce unintended pregnancies
among adolescents: systematic review of randomised controlled trials. BMJ 2002;324:1426‐34.
The majority of the studies included in this review address primary prevention of unintended pregnancy versus standard care/practice. Therefore, this review is not addressing whether primary prevention is effective; it is simply investigating the effect of specific interventions compared to standard practice. This is a much smaller gap in which to detect an effect, since it is usually easier to find a difference when comparing an intervention with no intervention.
Figure Two. The difference between comparing the effect of one intervention versus no intervention and one intervention versus standard practice. [Original figure: three effect levels, Intervention, Standard practice, and No intervention, with the effect gaps between them.]
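A small numerical sketch (the rates below are invented for illustration, not taken from the review) makes the two gaps in Figure Two concrete:

```python
# Hypothetical outcome rates per 100 participants, invented purely to
# illustrate Figure Two: the effect "gap" is wider against no
# intervention than against standard practice.
events_no_intervention = 20
events_standard_practice = 14
events_intervention = 10

gap_vs_none = events_no_intervention - events_intervention        # 10 per 100
gap_vs_standard = events_standard_practice - events_intervention  # 4 per 100
print(gap_vs_none, gap_vs_standard)  # the smaller gap is harder to detect
```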
For example, many of the school-based interventions in the review are compared to normal sexual education in the schools, and are shown to be ineffective for reducing unintended pregnancies. Yet the interpretation of the results reads “primary prevention strategies do not delay the initiation of sexual intercourse or improve the use of birth control among young men and women”. This implies that the review question sought to address primary prevention versus no intervention. Rather, the review addressed whether theory-led interventions are more effective than standard care/practice.
Outcome(s)
The outcome(s) chosen for the review must be meaningful to the users of the review. The discrepancy
between the outcomes and interventions that reviewers choose to include in the review and the
outcomes and interventions that lay people prefer to be included has been well‐described.1
To investigate both the implementation of the intervention and its effects, reviewers will need to include process indicators as well as outcome measures. Unanticipated effects (side-effects) as well as anticipated effects should be investigated, in addition to cost-effectiveness, where appropriate.
Reviewers will also need to decide if proximal/immediate, intermediate or distal outcomes are to be
assessed. If only intermediate outcomes are measured (eg. blood sugar levels in persons with
diabetes, change in knowledge and attitudes) reviewers need to determine how strong the linkage is
to more distal outcomes (eg. cardiovascular disease, behaviour change). The use of theory can assist
with determining this relationship. In addition, reviewers should decide if only objective measures
are to be included (eg. one objective measure of smoking status is saliva thiocyanate or alveolar
carbon monoxide) or subjective measures (eg. self‐reported smoking status), or a combination of both
(discussing the implications of this decision).
Examples of review questions
Poorly designed questions:
1. Are condoms effective in preventing HIV?
2. Which interventions reduce health inequalities among people with HIV?
Answerable questions:
1. In men who have sex with men, does condom use reduce the risk of HIV transmission?
2. In women with HIV, do peer‐based interventions reduce health inequalities?
Example: Are mass media interventions effective in preventing smoking in young people?

- Problem, population: Young people, under 25 years of age
- Intervention: 1. Television; 2. Radio; 3. Newspapers; 4. Billboards; 5. Posters; 6. Leaflets; 7. Booklets
- Comparison: No intervention
- Outcome: 1. Objective measures of smoking; 2. Self-reported smoking behaviour; 3. Intermediate measures (intentions, attitudes, knowledge); 4. Process measures (eg. media reach)
- Types of studies: 1. RCT (and quasi-RCT); 2. Controlled before and after studies; 3. Time series designs
Types of study designs to include
The decisions about which type(s) of study design to include will influence subsequent phases of the
review, particularly the search strategies, choice of quality assessment criteria, and the analysis stage
(especially if a statistical meta‐analysis is to be performed).
The decision regarding which study designs to include in the review should be dictated by the
intervention (the review question) or methodological appropriateness, and not vice versa.2,3 If the
review question has been clearly formulated then knowledge of the types of study designs needed to
answer it should automatically follow.3 If different types of study designs are to be included in the same review, the reasons for this should be made explicit.
Effectiveness studies
Where RCTs are lacking, or are not conducted for reasons of feasibility or ethics, other study
designs such as non‐randomised controlled trials, before and after studies, and interrupted time
series designs should also be considered for inclusion in the review.
Comparisons with historical controls or national trends may be included when this is the only type of
evidence that is available, for example, in reviews investigating the effectiveness of policies, and
should be accompanied by an acknowledgement that the evidence is necessarily weaker.
Randomised controlled trial
Subjects are randomly allocated to groups either for the intervention being studied or the control
(using a random mechanism, such as coin toss, random number table, or computer‐generated
random numbers) and the outcomes are compared.1
Each participant or group has the same chance of receiving each intervention and the investigators
cannot predict which intervention is next.
Quasi‐randomised controlled trial / pseudo‐randomised controlled trial
Subjects are allocated to groups for intervention or control using a non‐random method (such as
alternate allocation, allocation of days of the week, or odd‐even study numbers) and the outcomes are
compared.1
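A minimal sketch (the function names are invented; this is not from the source) contrasting the computer-generated random allocation described above with the predictable alternation of a quasi-randomised trial:

```python
# Illustrative sketch, not from the source. Contrasts true randomisation
# (the next allocation cannot be predicted) with quasi-random alternation
# (predictable, hence prone to selection bias).
import random

def randomised_allocation(n_subjects: int, seed: int = 42) -> list:
    """Each subject has the same chance of either arm, and investigators
    cannot predict which intervention comes next."""
    rng = random.Random(seed)
    return [rng.choice(["intervention", "control"]) for _ in range(n_subjects)]

def alternate_allocation(n_subjects: int) -> list:
    """Non-random method: alternation. The next allocation is always
    predictable from the previous one."""
    return ["intervention" if i % 2 == 0 else "control"
            for i in range(n_subjects)]

print(randomised_allocation(6))
print(alternate_allocation(6))
```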
Controlled before and after study / cohort analytic
Outcomes are compared for a group receiving the intervention being studied, concurrently with
control subjects receiving the comparison intervention (eg, usual or no care/intervention).1
Uncontrolled before and after study / cohort study
The same group is pre‐tested, given an intervention, and tested immediately after the intervention.
The intervention group, by means of the pre-test, acts as its own control group.2
Interrupted time series
A time series consists of multiple observations over time. Observations can be on the same units (eg.
individuals over time) or on different but similar units (eg. student achievement scores for particular
grade and school). Interrupted time series analysis requires knowing the specific point in the series
when an intervention occurred.2 These designs are commonly used to evaluate mass media
campaigns.
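One common way to analyse such designs is segmented regression; the sketch below uses simulated data and a standard textbook model formulation (it is my own illustration, not specified in the source) to estimate the pre-intervention trend, the level change at the intervention point, and the post-intervention change in trend:

```python
# Illustrative sketch of segmented regression for an interrupted time
# series; data are simulated, and the intervention point (month 12) is
# known in advance, as the design requires.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(24, dtype=float)       # 24 monthly observations
after = (t >= 12).astype(float)      # indicator: 1 from the intervention on
t_after = np.clip(t - 12, 0, None)   # months elapsed since the intervention
y = 10 + 0.5 * t - 3.0 * after - 0.4 * t_after + rng.normal(0, 0.5, t.size)

# Design matrix: intercept, baseline trend, level change, slope change.
X = np.column_stack([np.ones_like(t), t, after, t_after])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["baseline", "trend", "level_change", "slope_change"],
               np.round(coef, 2))))
```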
Qualitative research
Qualitative research explores the subjective world. It attempts to understand why people behave the
way they do and what meaning experiences have for people. Qualitative research relevant to
effectiveness reviews may include the following:
Qualitative studies of experience: these studies may use a range of methods, but frequently rely on in‐
depth tape‐recorded interviews and non‐participant observational studies to explore the experience
of people receiving an intervention.
22
Process evaluations: these studies can be included within the context of the effectiveness studies. These
evaluations use a mixture of methods to identify and describe the factors that promote and/or impede
the implementation of innovation in services.3
References:
1. NHMRC (2000). How to review the evidence: systematic identification and review of the scientific literature. Canberra: NHMRC.
2. Thomas H. Quality assessment tool for quantitative studies. Effective Public Health Practice Project. McMaster University, Toronto, Canada.
3. Undertaking Systematic Reviews of Research on Effectiveness. CRD’s Guidance for those Carrying Out or Commissioning Reviews. CRD Report Number 4 (2nd Edition). NHS Centre for Reviews and Dissemination, University of York. March 2001. http://www.york.ac.uk/inst/crd/report4.htm
Cluster‐RCTs and cluster non‐randomised studies
Allocation of the intervention by group or cluster is being increasingly adopted within the field of
public health because of administrative efficiency, lessened risk of experimental contamination and
likely enhancement of subject compliance.4 Some interventions, for example a class-based nutrition intervention, dictate application at the cluster level.
Interventions allocated at the cluster (eg. school, class, worksite, community, geographical area) level
have particular problems with selection bias where groups are formed not at random but rather
through some physical, social, geographic, or other connection among their members.5,6 Cluster trials
also require a larger sample size than would be required in similar, individually allocated trials
because the correlation between cluster members reduces the overall power of the study.5 Other
methodological problems with cluster‐based studies include the level of intervention differing from
the level of evaluation (analysis) and the often small number of clusters in the study.7 Issues
surrounding cluster trials have been well described in a Health Technology Assessment report7,
which should be read for further information if cluster designs are to be included in a systematic
review.
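The sample-size inflation mentioned above is commonly quantified with the design effect, DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intracluster correlation coefficient. A minimal sketch with invented numbers (my own illustration, not from the source):

```python
# Illustrative sketch, not from the source: applying the standard design
# effect DEFF = 1 + (m - 1) * ICC to inflate the sample size that an
# individually randomised trial would need.
import math

def inflated_sample_size(n_individual: int, cluster_size: int, icc: float) -> int:
    deff = 1 + (cluster_size - 1) * icc  # design effect
    return math.ceil(n_individual * deff)

# Invented example: a trial needing 300 individually randomised
# participants, allocated by school class (m = 25, assumed ICC = 0.02),
# needs about 444 participants under cluster allocation.
print(inflated_sample_size(300, 25, 0.02))  # -> 444
```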
The role of qualitative research within effectiveness reviews
– to “provide an in‐depth understanding of people’s experiences, perspectives and histories in
the context of their personal circumstances and settings”8
Qualitative studies can contribute to reviews of effectiveness in a number of ways including9: