Description
Research Questions
In the course of conducting research, you want to solve problems and answer questions. At this point, you have an annotated bibliography, a literature review, and problem and purpose statements. The next step is to determine what questions your research proposal can answer that relate to your problem and purpose.
The questions you ask must be focused and pointed; generalized questions are not useful. Your questions also need to matter: they should address pressing issues raised in the course of your research. In writing the answers to your questions, you have the opportunity to pick up where the hypotheses left off regarding the impact of your research.
Although it is recommended that a single, central research question be developed to guide a research study, this isn't always possible. Still, it is a good idea to limit the number of research questions to avoid scope creep.
A possible starting point for your questions is your hypotheses. Consider the algorithm example and the hypotheses that could be made based on execution time, memory utilization, and accuracy. Your questions need to be focused and concise, yet open-ended; that is, they should not have short and simple answers. They need to be of sufficient complexity to require a structured response, which might include some of the data analysis you are doing or some graphs you have made. Questions that call for a long, involved answer are good; a brief measurement sketch follows the example questions below.
Some examples of questions based on the algorithm example:
Does memory utilization have an effect on the execution time of the algorithm?
Is there a discernible relationship between algorithm accuracy and the type of input to the algorithm?
Does the use of the algorithms fall into distinct categories where using one algorithm is preferred over the other?
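As a rough, hypothetical illustration of how such questions could be answered with data, the sketch below measures one trial's execution time and peak memory in Python. The algorithm (sorted) and the input are placeholders standing in for whatever algorithms your own study compares; this sketch is not part of the assignment itself.

```python
# Minimal sketch: collect execution-time and memory data for one trial.
# "sorted" and the reverse-ordered input are placeholders for your own
# algorithms and input types.
import time
import tracemalloc

def run_trial(algorithm, data):
    """Run one trial; return (execution_time_seconds, peak_memory_bytes)."""
    tracemalloc.start()
    start = time.perf_counter()
    algorithm(data)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()  # (current, peak) since start()
    tracemalloc.stop()
    return elapsed, peak

data = list(range(10_000, 0, -1))  # one input type: reverse-ordered integers
t, mem = run_trial(sorted, data)
print(f"time = {t:.4f} s, peak memory = {mem} bytes")
```

Repeating such trials across input types and algorithms yields the kind of data that could support the graphs and structured responses described above.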
Some students confuse research questions with the data collection (interview) questions. They are not the same. Additionally, qualitative and quantitative research questions differ considerably. The research method identified in the purpose statement should guide how the research questions are stated.
Quantitative research questions must reflect testable relationships between variables and are followed by hypotheses that reflect the researcher’s predictions about the nature of the relationships under study. These predictions should be firmly rooted in the researcher’s understanding of the literature and theory used to frame the study.
Qualitative research questions should indicate the exploratory and open-ended nature of the inquiry. Because qualitative research studies do not involve any testing of relationships between variables, qualitative research questions are not followed by hypotheses.
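For illustration only (these examples are invented, not part of any assigned study): a quantitative question such as "What is the relationship between weekly study hours and exam scores among first-year students?" would be followed by hypotheses (e.g., H0: there is no relationship between study hours and exam scores; H1: study hours are positively related to exam scores). A qualitative question such as "How do first-year students describe their experiences preparing for exams?" would stand alone, with no hypotheses.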
Complete the ungraded quiz below to check your knowledge about developing effective research questions:
Assignment: Develop Research Questions
Instructions
For this assignment, you must develop research questions that relate to the purpose of your study and align with the problem and purpose statements.
Your research questions should adhere to the following requirements:
Must be stated in open-ended question format
Must be one sentence in length, ending with a question mark
Must be distinct and answerable
Must align with the problem and purpose statements
Add your research questions to the end of the paper you submitted in the previous assignment. Be sure to check for alignment between all of the research components.
Length: 2-3 page paper
The completed assignment should address all of the assignment requirements, exhibit evidence of concept knowledge, and demonstrate thoughtful consideration of the content presented in the course. The writing should integrate scholarly resources, reflect academic expectations, and follow current APA standards.
Unformatted Attachment Preview
About Research
Quality in Research: Asking the Right Question
Journal of Human Lactation, 2020, Vol. 36(1), 105–108
© The Author(s) 2020
DOI: 10.1177/0890334419890305
journals.sagepub.com/home/jhl
Joan E. Dodgson, PhD, MPH, RN, FAAN
Keywords
breastfeeding, research methodology, validity
Research is the underpinning upon which clinicians, educators, and scholars build their work. It defines our field and
frames the very way we think about it. Therefore, researchers have the responsibility to deliver quality products; we
trust them to ensure rigor in their methods and soundness in
their thinking. This column is about research questions, the
beginning of the researcher’s process. For the reader, the
question driving the researcher’s inquiry is the first place to
start when examining the quality of their work because if
the question is flawed, the quality of the methods and
soundness of the researchers' thinking do not matter. The
research is flawed. The quality of a house is not important
if the foundation upon which it is built is flawed.
The characteristics of rigorously developed research
questions are taught in every basic research course, yet writing one is not easy. It takes practice and mentorship, which is
why learning how to conduct research takes considerable
academic time as well as hands-on experience working with
more senior researchers. The purpose of this column is not to
shortcut this process in any way, but rather to describe for the
reader some problematic research questions. In reviewing
the hundreds of manuscripts that JHL receives every year, I
have come to understand just how difficult writing a research
question is and how many ways it can go wrong. I hope this
discussion will provide readers with some guidance in their
evaluation of the appropriateness of research questions and
provide novice researchers with some insights about writing
research questions.
What Does Asking the Right Question Mean?
Asking the right question implies that there are "wrong" questions to ask, which runs contrary to the ideal of scientific inquiry. We want to believe that any question considered "wrong" could truly be meaningful if looked at with an open mind. Indeed, some of the greatest innovations and discoveries of the past centuries have been the result of someone looking at a situation without the assumptions that others have made (e.g., Copernicus and Galileo), as Kuhn (1962/2012) articulated in his seminal work The Structure of Scientific Revolutions. It is not this sort of "wrong" question that I am referring to in this column.
What I mean by the "wrong" question is a question that does not add to our knowledge base. These types of questions may be uninformed by the existing literature and/or poorly articulated. The "right" question is one that needs answering, thus adding to our knowledge base. It is not (a) a question that has already been so adequately addressed within the current body of knowledge that re-researching it is not only redundant but also irrelevant, and/or (b) a question that is constructed so ambiguously that researching it provides results that are at best confusing and at worst meaningless. Although other possible problems concerning poorly constructed research questions exist, these are the predominant ones submitted to JHL and the ones discussed below.
Redundant and Irrelevant Research Questions
In developing an evidence base, it is important to ask the same question more than once, as one study does not create a body of knowledge (Dodgson, 2017). Replication studies are valid and important in building our knowledge by confirming the findings of others (Polit & Beck, 2017). In fact, JHL publishes many of these types of articles every year. For example, a phenomenon well researched in Western cultures that has not been researched in Asian, African, or Middle Eastern cultures can contribute valuable insights and broaden our knowledge base. Researchers must examine a question from a number of perspectives before a body of knowledge can be developed. It is not these types of research questions that I am referring to as problematic.
The problematic issue occurs when a research question
addresses aspects of the lactation field that have been
researched extensively over many years. A large body of
research already exists in an area that we have widely
accepted as “known,” for example, the importance of an adequate latch-on for milk transfer, the role that stress plays in
“let-down,” or the lack of breastfeeding knowledge leading
to inappropriate practices. Unless the researcher has done
something unique or found something new, this type of
research is redundant.
Many of the manuscripts with redundant or irrelevant research questions that JHL receives are submitted by students. If
the student has done their review of the existing literature
well and asked their research question/aim/objective carefully, it is possible that their study may add to the existing
body of literature. More often these studies have adequately developed research questions about something
already well established in the field, which may have been
a useful and essential student learning experience, but does
not add anything new to the existing literature. JHL does
not publish these.
It is the researcher’s responsibility to know the existing
knowledge base in their area of study well enough to understand what is known and what needs more data before the
topic is well established. Too frequently, novice researchers or those not adequately educated in research methods rely on a few literature reviews (secondary sources of information) to provide a background for developing their study, instead of digesting the original research (primary source material). This creates a shallow understanding of what is known, one filtered through the perspective of whoever wrote the literature review. Literature
reviews may be a scholarly analysis or something much
less rigorous and are always considered secondary sources
of information, which are useful in pointing researchers in
researchable directions, but not to be relied upon beyond
that role.
It is the role of peer reviewers and editors to have a broad
and deep enough knowledge of the field to determine if the
researcher has framed a question (or questions) that offers readers a new
perspective or adds an important nuance to the existing body
of knowledge. In this way peer reviewers and editors are
gatekeepers, providing a check and balance for each other.
However, this system does not always work effectively, leaving the reader to evaluate the researcher's understanding of
the topic being researched. The expertise of the researcher
cannot be assumed. Therefore, readers need to examine the
references that researchers have used to determine if they are
up to date and relevant, if primary sources have been used,
and if the researcher’s description of the topic of study is
congruent with the reader’s understanding of the topic. The
reader must read critically.
Poorly Defined Research Questions
The purpose of a research question is to define what will be
studied, with enough specificity that there will be no ambiguity or confusion about exactly what variables (quantitative
research) or phenomena (qualitative research) the researcher is
seeking to study. Additionally, in quantitative studies, the variables being measured must be responsive enough to detect
change if it has occurred. Therefore, choosing each word with precise attention is essential (Polit & Beck, 2017). When this does not happen and words are misused, vague, open to interpretation, and/or inappropriately used, there is no way a sound research study can be designed—no
matter how rigorous the methodology. The architecture of the
research (i.e., methodology) cannot compensate for building
upon a flawed foundation. This is why any evaluation of the
quality of a research study must always begin with an examination of the question being asked.
An example of a poorly defined research question occurs
in the Patel and Patel (2016) article discussed by Gutowski
and Chetwynd (2019) in a letter to the editor within this
issue. I have chosen this article because it illustrates many of
the problematic areas seen in lactation research. The methodology used by Patel and Patel (2016), a systematic review
with meta-analysis, is considered a very sophisticated and
high-level study design, in other words a very sound methodology that should yield quality data that will add to the existing knowledge base (Chertok & Haile, 2018; Polit & Beck,
2017, p. 648). The researchers conducted this study in accordance with established standards for a systematic review
with meta-analysis (Moher et al., 2009). “A basic criterion
for a meta-analysis is that the research question being
addressed across studies is strongly similar, if not identical.
This means that the independent and the dependent variables, and the study populations must be sufficiently similar
to merit integration” (Polit & Beck, 2017, p. 648).
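To make the integration requirement concrete, here is a minimal sketch (added for this course packet, not from Polit and Beck or Patel and Patel; all numbers are invented) of the inverse-variance pooling at the heart of a fixed-effect meta-analysis. The arithmetic yields a meaningful pooled estimate only when every study measured sufficiently similar variables in answer to a sufficiently similar question.

```python
# Fixed-effect meta-analysis by inverse-variance weighting (illustrative).
# Effect sizes and standard errors below are invented; pooling is only
# valid when the studies' questions, variables, and populations align.
import math

effects = [0.25, 0.40, 0.10]  # hypothetical per-study effect estimates
ses = [0.10, 0.15, 0.08]      # hypothetical per-study standard errors

weights = [1 / se ** 2 for se in ses]  # precision = 1 / variance
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f}, 95% CI = "
      f"({pooled - 1.96 * pooled_se:.3f}, {pooled + 1.96 * pooled_se:.3f})")
```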
Patel and Patel (2016) stated, “The objective of this
review was to assess if lactation education or support programs using lactation consultants or lactation counselors
would improve rates of initiation and duration of any breastfeeding and exclusive breastfeeding compared with usual
practice” (p. 530). Two main problems exist with this objective, both stemming from their use of the word or. First, it is
immediately obvious that two types of interventions (i.e.,
lactation education and support programs) are targeted,
which can be problematic in that if too wide a net is cast, the
results are a comparison of apples with oranges. Education
programs are vastly different from hands-on breastfeeding
support programs. These researchers further define the interventions included in their analysis as “stand-alone or part of
a multicomponent structured program” (p. 531). One question is whether the outcomes of a stand-alone
intervention could ever be appropriately compared with the
outcomes of a multicomponent intervention; there is a body
of public health literature supporting the effectiveness of
multicomponent interventions when compared with stand-alone interventions.
Second, Patel and Patel’s (2016) objective defined who carried out the interventions in the studies they chose to include in
their meta-analysis as “lactation consultants or lactation counselors” (p. 530), which they further explained were “IBCLCs
[International Board Certified Lactation Consultants], CLCs
[Certified Lactation Counselors], lactation consultants, or lactation counselors” (p. 531). “When intervention studies are
being pooled, it is important that the intervention methods are
clearly defined with similar treatment methods, components,
and intensities” (Chertok & Haile, 2018, p. 422). As Gutowski
and Chetwynd (2019) explained and Patel and Patel (2019)
acknowledged in their response, this definition of the interventionalist encompasses multiple levels of care providers. It is too
broadly defined, leaving too many possible alternative explanations and intervening factors to determine with any validity
the meaning of their results.
The issue of broad ill-defined variables is a problem that
has plagued the quality of breastfeeding research for many
years. Lumping IBCLCs, lactation consultants (a generic
term), CLCs, and lactation counselors (a generic term) into
one single category of provider is extremely imprecise, creating a single ill-conceived category. The researchers’ goal
was to determine if any one of a variety of interventions by
any of the mentioned providers made a difference in breastfeeding outcomes compared to usual care, which was not
further defined. In other words, is some intervention better
than no intervention? Without making the distinction
between levels of providers, this is similar to asking if care
by either a physician (MD) or a physician’s assistant (PA)
will make more of a difference in outcomes than no care. I
am sure this was an appropriate question at some point in
time in the past. In lactation, we have known for many years
that some care yields better outcomes than no care (when
looking at populations, not specific individual cases).
A related but slightly different issue is the ambiguity
inherent in the use of the generic lactation care providers
terms, lactation consultant and lactation counselor. For many
years referring to IBCLCs as lactation consultants has been a
common practice; however, given the variety of lactation
support providers currently working in the field, this is no
longer a viable option. Authors must specifically articulate if
they are referring to IBCLCs or another type of lactation care
provider to avoid confusion and misunderstandings. JHL has
made this a policy.
Another problematic area inherent in Patel and Patel’s
(2016) research objective stems from the outcomes measured (i.e., breastfeeding initiation, any breastfeeding rates,
and exclusive breastfeeding rates). Over the years, many
researchers have not clearly defined their breastfeeding-related outcome measures, prompting an international call
in 1990 (Labbok & Krasovec, 1990) for more accurate and
consistent definitions in breastfeeding research. It is well
established that the benefits of breastfeeding are dose
dependent; therefore, the exact amount of human milk consumed by an infant is a critical factor in determining outcomes. Patel and Patel (2016) do not address the issue of
how breastfeeding outcome variables were defined in each
of the reviewed studies. Labbok and Starling (2012) had
previously addressed this issue stating, “In part because of
the lack of clear or consistent definitions used in [peer
reviewed] publications, generalization and comparison of
findings have been difficult, and interpretation of findings
is often limited” (p. 397). Given the span of years and the
number of studies included in Patel and Patel’s (2016)
research, breastfeeding variables were not defined using
the same definitions across included studies.
The use of the word or in a research question always
opens up the possibility of confusion, at best. More likely
this is an imprecision that will undermine both the internal
(i.e., the inferences that can be made by the researchers about the
intervention, rather than other factors) and external (the generalizability of the findings) validity of the study (Polit &
Beck, 2017, pp. 728, 731). This vagueness in both the types
of interventions and who carried out these interventions has
completely muddled the purpose of their study; therefore,
their results also are questionable.
Contrast this vagueness with the precision of this example
of a well-developed question: “This systematic review and
meta-analysis aimed to describe interventions containing
direct support by IBCLCs during the postpartum period and
to analyze the association between study characteristics and
the prevalence of breastfeeding outcomes” (Chetwynd, Wasser,
& Poole, 2019, p. 424). In this question, all the components
are precise enough that the definitions of the interventions and the interventionalist are not ambiguous, and the measured outcomes are definable variables.
The peer reviewers and the JHL editor should have caught
the validity problems within the Patel and Patel (2016) article. The fact that they did not is evidence that despite our best
efforts, not all published articles have the rigor and quality
we strive to achieve. Although regrettable, it is perhaps inevitable given the 300+ manuscripts reviewed at JHL every
year. JHL is not alone in having published an article or two
like this. It happens in most journals, which ultimately leaves
to the reader the job of determining the validity of what has
been published. It is essential that readers evaluate the quality and appropriateness of research questions before using
study results.
In the case of the Patel and Patel article, wide-ranging
consequences beyond questioning the findings have
occurred, as decision makers have used these results
(Gutowski & Chetwynd, 2019). Unfortunately, this article
has been one of the most cited articles JHL has published
within the past 5 years, which means the questionable results
created by a poorly constructed research objective have been
distributed and embraced as evidence, and used by decision
makers. Perhaps one reason this study has been so widely
embraced was the methodology; evidence-based medicine
gurus often have identified meta-analysis as the highest form
of evidence (Paul & Leibovici, 2014). In other words, the
architecture was so stellar that no one adequately examined
the foundation upon which it was placed. This illustrates the
importance of always examining the research question(s)
first and foremost. Ultimately it is the peer reviewers, the
editor, and the readers who need to approach any research
with a skeptical eye—examining both the architecture
(method and process) and the foundation (the research questions) upon which research has been built.
Declaration of Conflicting Interests
The author declared the following potential conflicts of interest
with respect to the research, authorship, and/or publication of this
article: The author is the Editor in Chief of the Journal of Human
Lactation. The Patel and Patel (2016) article was published just
after the author became the JHL Editor in Chief; it had been
reviewed and accepted for publication well before the author’s
tenure.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
References
Chertok, I. R. A., & Haile, Z. T. (2018). Meta-analysis. Journal of
Human Lactation, 34(3), 420–423.
Chetwynd, E. M., Wasser, H. M., & Poole, C. (2019). Breastfeeding
support interventions by International Board Certified Lactation
Consultants: A systematic review and meta-analysis. Journal of
Human Lactation, 35(3), 424–440.
Dodgson, J. E. (2017). Should one study change practice?
[Editorial]. Journal of Human Lactation, 33(3), 476–477.
Gutowski, J., & Chetwynd, E. (2019). Letter to the editor: The
effectiveness of lactation consultants and lactation counselors on breastfeeding outcomes. Journal of Human Lactation,
36(1), xxx–xxx.
Kuhn, T. S. (2012). The structure of scientific revolutions. Chicago,
IL: University of Chicago Press. (Original work published
1962)
Labbok, M., & Krasovec, K. (1990). Toward consistency in
breastfeeding definitions. Studies in Family Planning, 21(4),
226–230.
Labbok, M. H., & Starling, A. (2012). Definitions of breastfeeding:
Call for the development and use of consistent definitions in
research and peer-reviewed literature. Breastfeeding Medicine,
7(6), 397–402. doi:10.1089/bfm.2012.9975
Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., & The PRISMA
Group. (2009). Preferred reporting items for systematic
reviews and meta-analyses: The PRISMA statement. Journal
of Clinical Epidemiology, 62, 1006–1012. doi:10.1016/j.jclinepi.2009.06.005
Patel, S., & Patel, S. (2016). The effectiveness of lactation consultants and lactation counselors on breastfeeding outcomes.
Journal of Human Lactation, 32(3), 530–541.
Patel, S., & Patel, S. (2019). Reply to Gutowski and Chetwynd
[Letter to the editor]. Journal of Human Lactation, 36(1),
xxx–xxx.
Paul, M., & Leibovici, L. (2014). Systematic review or meta-analysis? Their place in the evidence hierarchy. Clinical Microbiology and Infection, 20(2), 97–100. doi:10.1111/1469-0691.12489
Polit, D. F., & Beck, C. T. (2017). Nursing research: Generating
and assessing evidence for nursing practice (10th ed.).
Philadelphia, PA: Wolters Kluwer.
Sage Research Methods Video
Quantitative Research: Methods in the Social Sciences
Pub. Date: 2016
Product: Sage Research Methods Video
DOI: https://doi.org/10.4135/9781483397160
Methods: Quantitative data collection, Experimental design, Survey research
Keywords: administration, dating, depression, job satisfaction, political advertising, political ideology and
voting, practices, strategies, and tools, racial attitudes, voting
Disciplines: Anthropology, Business and Management, Criminology and Criminal Justice, Communication
and Media Studies, Counseling and Psychotherapy, Economics, Education, Geography, Health, Marketing,
Nursing, Political Science and International Relations, Psychology, Social Policy and Public Policy, Social
Work, Sociology
Publishing Company: SAGE Publications, Inc
City: Thousand Oaks
Online ISBN: 9781483397160
© 2016 SAGE Publications, Inc All Rights Reserved.
[Quantitative Methods]
[Table of Contents: 1. Questions of Quantitative Research; 2. Principles of Measurement; 3. Experiments; 4. Surveys; 5. Applications; 6. Conclusion]
[Segment 1 Questions of Quantitative Research]
NARRATOR: Human behavior is complex. How, why, and to what ends human beings do what we do is studied by social scientists through a variety of methods generally referred to as “quantitative methods.” While the specific methods differ, they each address certain kinds of questions and adhere to certain principles of measurement.
NARRATOR [continued]: These include questions about cause and effect and mitigating effects. What is
the effect of a given cause? What is the cause of a given effect? How do we mitigate a given effect
by manipulating a given cause?
BARBARA HUMMEL-ROSSI: Quantitative methods are used when you have specific questions in
mind and good measures to measure the variables in question. For example, you might be looking
at the relation between achievement and intelligence. The question might be, what is the relation
between achievement and intelligence?
BARBARA HUMMEL-ROSSI [continued]: Now we have good standardized measures to measure
both intelligence and achievement and we would use correlation analysis to look at the relation between the two of them.
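A minimal sketch of the correlation analysis just described, with invented scores (real studies would use validated standardized measures, as the speaker notes):

```python
# Pearson correlation between intelligence and achievement scores.
# The eight score pairs are invented for illustration only.
import numpy as np

intelligence = np.array([95, 110, 102, 120, 88, 105, 130, 99])
achievement = np.array([70, 82, 75, 90, 65, 80, 95, 72])

r = np.corrcoef(intelligence, achievement)[0, 1]  # off-diagonal entry
print(f"Pearson r = {r:.3f}")  # sign gives direction, magnitude gives strength
```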
NARRATOR: The following example illustrates the essence of what quantitative methods seek to address in whole or part.
X: I was trying to call you Saturday and you didn’t pick up. Where were you?
Y: Oh, yeah. I was out, just out with some friend of mine.
X: Where’d you go? What’d you do?
Y: Just to a bar. I was just hanging out with a girl named Sally. Yeah.
X: Who is she?
Y: It was just kind of like a date.
X: OK. So let me just try to get this straight. You went out with her Saturday night on a date without
even telling me, without even letting me know. And you apparently like her more than you do me, and
now you’re breaking up with me. Well, just try for the sake of knowing things, I just want to know what
you did with her. What went on that you’re keeping from me?
Y: It doesn’t matter.
X: No, to me, it matters.
Y: It doesn’t.
X: I want to know what you did with her behind my back. That’s what I want to know.
Y: It’s not about that.
NARRATOR: Those using quantitative methods to understand what happened between these two
people would want to know, what is X feeling? Did what Y said to X make her upset? If Y would have
said something more positive, would X be expressing a different emotion?
[Segment 2 Principles of Measurement]
NARRATOR [continued]: When measuring these various causes and effects, social scientists are
careful that they measure what and how things occur in the real world, not the world as it exists in
their office, laboratory, or their own brain. This includes adhering to standards of internal validity, external validity, and reliability.
NARRATOR [continued]: Internal validity is when an experiment isolates a causal connection between two variables, eliminating all other explanations. External validity is when results of a study
can be generalized to a broader population. Reliability is when a phenomenon is measured consistently in repeated studies.
BARBARA HUMMEL-ROSSI: Internal and external validity are really both critical for doing experiments, particularly the experimental control situation. Internal validity refers to, does the treatment
make a difference? And you’d be concerned about such things interfering with the treatment effect,
such things as history.
BARBARA HUMMEL-ROSSI [continued]: As a person gets older, the construct under question may
change. You would be concerned about the effects, for example, of a pretest sensitizing the individual to the intervention and the effects perhaps of differential mortality, that is, people leaving
BARBARA HUMMEL-ROSSI [continued]: the experiment differently in the control group and the experimental group. With respect to external validity, this has to do with whether or not you can generalize to other situations, for example, to another setting, to other people administering an intervention.
And they’re both very critical to experimental design.
[Segment 3 Experiments]
NARRATOR: One of the most often-used forms of quantitative methods is the experiment.
CHARLES MCILWAIN: The primary reason that experiments are used in social science research is
because it’s the best method for isolating causal relationships between human behavior. So for instance, say I wanted to understand whether or not people’s attitudes about crime are changed by the
amount or the kind of television news
CHARLES MCILWAIN [continued]: that they watch. An experiment allows the researcher to manipulate the message, to measure the effect of people’s attitudes and opinions, and then be able to tell
whether or not the message was the actual cause of the change in their attitude or their opinion. The
one downside about using experiments
CHARLES MCILWAIN [continued]: is that it is low in what social scientists refer to as external validity.
And that simply means that an experimental environment, the researcher controls everything that’s
going on. And we know that in the real world, we don’t always know what’s going to happen. And so
though we can test for the causal relationship,
CHARLES MCILWAIN [continued]: we can’t always generalize to say that this is the way things are
likely to happen in any given scenario.
NARRATOR: The following example illustrates how a typical social science experiment might be run.
This one seeks to ascertain the effects of racial messages in political campaign advertisements.
First, the experimenter describes to subjects in the experiment what they will be doing and asks for
their voluntary consent
NARRATOR [continued]: to continue participation.
CHARLES MCILWAIN: Please sign the form and I will collect them.
NARRATOR: Second, participants are asked to watch a series of political ads in which no racial message is present.
DAVID JACKSON: What choice do you have in this election? You can choose a candidate who believes parents should choose where children will get the best education, instead of being forced
into failing schools. Or you can choose a candidate whose education plan means simply throwing
more money at schools and teachers who aren’t getting the job done. You can choose a candidate
who believes that the way
DAVID JACKSON [continued]: to strengthen our schools is to impose the tough standards of No
Child Left Behind. Or you can choose one who rewards failing teachers and schools who don’t meet
high standards of excellence. You have a crucial choice in this election. I’m David Jackson and I want
to be your choice because I’m the right choice.
NARRATOR: Third, participants are asked to fill out a brief questionnaire that asks, among other
things, how strongly they felt about each candidate and who they would most likely vote for. This
establishes a baseline to measure the effect of the messages to come. Next, the researcher repeats
steps one and two
NARRATOR [continued]: with a different group of participants. These participants then also view a
series of ads. This time, the ads have an explicit racial appeal.
JIM HERBERT: Some people have said that the difference between my opponent and me is the color
of our skin. That’s not the only difference. David Jackson’s education plan is to take money away
from folks like us to fund inner city schools that look like him. Jackson says his quota-based so-called
affirmative action in education plan is necessary to make the children in our two
JIM HERBERT [continued]: communities more equal. Jackson is a good man and we both believe in
equality. But does equality mean that it’s fair to take money from one group and give it to another just
because of the color of their skin? I’m Jim Herbert and I’m running for Congress because I believe in
an education policy that isn’t just black and white.
NARRATOR: Next, subjects are again asked to fill out a questionnaire that asks the same questions
about how they felt about each candidate and which of them they would more likely vote for. After
this, the experimenter analyzes data to see if there was a measurable difference in participants’ attitudes between those who saw ads with no racial message and those who saw ads
NARRATOR [continued]: with explicit racial messages. In this brief example, the researcher conducting the experiment will analyze the data, hoping to determine whether there is a causal link between
a person’s exposure to racial messages and their perception of and likelihood to vote for a particular
political candidate.
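As a sketch of the analysis step the narrator describes (the ratings, scale, and group sizes below are invented, not data from the actual experiment), the two groups' mean favorability ratings might be compared with an independent-samples t test:

```python
# Compare candidate-favorability ratings (1-7 scale, invented data)
# between the no-racial-message and explicit-racial-message groups.
from scipy import stats

control = [5.1, 4.8, 5.5, 4.9, 5.2, 5.0, 4.7, 5.3]   # no racial message
explicit = [4.2, 3.9, 4.5, 4.0, 3.8, 4.3, 4.1, 3.7]  # explicit racial message

t_stat, p_value = stats.ttest_ind(control, explicit)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # small p: measurable difference
```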
CHARLES MCILWAIN: Conducting this experiment allowed us to find out a variety of interesting conclusions regarding the way that racial messages affect voters. Most importantly, we found that implicit
racial messages seem to work well, in that when voters were exposed to a racial message or an implicit racial message
CHARLES MCILWAIN [continued]: by a white candidate, they tended to view that candidate more
favorably than the black opponent. However, we also found that explicit racial messages seem to
backfire on the sponsor of the message, so that when the white candidate used an explicit racial message, voters tended to view that person more negatively
CHARLES MCILWAIN [continued]: and the black opponent more positively. So we can see these two
outcomes as far as how these messages affect the attitudes and beliefs of the voters about these
candidates. But remember, when we’re talking about experiments in particular, we’re interested in
causation. What is the precise cause for the attitude
CHARLES MCILWAIN [continued]: change in these voters? And in this way, we found that more than
the message itself, there was a greater predictor or causal variable for this attitude change and here,
that was political ideology. So a voter’s particular way of seeing political issues
CHARLES MCILWAIN [continued]: had a greater predictive effect or greater causal effect on their
attitude change. [Segment 4 Surveys]
NARRATOR: Surveys are another form of quantitative method used by social scientists. We are all
familiar with and probably have responded to surveys that seek to measure everything from public
opinion on political issues to our use of commercial products to worker satisfaction with their jobs. All
surveys are the same, in that
NARRATOR [continued]: they seek information that allows researchers to probe the depth and/or
breadth of human attitudes and behaviors. However, they can be administered in different ways, as
questionnaires or interviews. Surveys seek to gain quantitative data about a large number of individuals’ opinions
NARRATOR [continued]: or experiences. In questionnaires, individuals respond to written items that
ask them to self-report their attitudes and behaviors. In interview surveys, a living person administers a survey face-to-face to individuals, allowing a researcher to clarify responses.
INTERVIEWER: Of using surveys-
JACQUELINE MATTIS: In the social sciences, surveys are used as a way of providing broad descrip