PSY325 ANOVA Interpretation Exercise

Description

Prior to beginning work on this assignment, read the scenario and ANOVA results provided in an announcement by your instructor, read the articles Analysis of Variance (ANOVA) and Non-Normal Data: Is ANOVA Still a Valid Option?, review the required chapters of the Tanner textbook and the Jarman e-book, and watch the One-Way ANOVA: Against All Odds – Inside Statistics video. In your paper, identify the research question and the hypothesis being tested in the assigned scenario. Consider the following questions: What are the independent and dependent variables, sample size, treatments, etc.? What type of ANOVA was used in this scenario? What do the results mean in statistical and practical terms?


In your paper,

Determine what question(s) the researchers are trying to answer by doing this research.
Determine the hypotheses being tested. Is the alternative hypothesis directional or nondirectional?
Identify the independent variable(s), the dependent variable, and the specific type of ANOVA used.
Determine the sample size and the number of groups from information given in the ANOVA table.
Discuss briefly the assumptions and limitations that apply to ANOVA.
Interpret the ANOVA results in terms of statistical significance and in relation to the research question.
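For the fourth point, the number of groups and the sample size can be recovered from the degrees of freedom alone. A minimal sketch in Python, using the df values from the scenario table at the end of this document:

```python
# In a one-way ANOVA: df_between = k - 1 and df_total = N - 1,
# where k is the number of groups and N is the total sample size.
df_between = 2   # from the "Between" row of the ANOVA table
df_total = 44    # from the "Total" row

k = df_between + 1   # number of groups
N = df_total + 1     # total number of participants

print(k, N)  # 3 45
```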

The ANOVA Interpretation Exercise assignment

Must be two to three double-spaced pages in length (not including title and references pages) and formatted according to APA Style



Analysis of Variance (ANOVA)
In: The SAGE Encyclopedia of Communication Research
Methods
By: Kevin L. Blankenship
Edited by: Mike Allen
Book Title: The SAGE Encyclopedia of Communication Research Methods
Chapter Title: “Analysis of Variance (ANOVA)”
Pub. Date: 2018
Access Date: November 9, 2021
Publishing Company: SAGE Publications, Inc
City: Thousand Oaks
Print ISBN: 9781483381435
Online ISBN: 9781483381411
DOI: https://dx.doi.org/10.4135/9781483381411
Print pages: 34-36
© 2017 SAGE Publications, Inc. All Rights Reserved.
In many social science disciplines such as communication and media studies, researchers wish to compare
group averages on a dependent variable across different levels of an independent variable. Analysis of
variance (ANOVA) is a collection of inferential statistical tests belonging to the general linear model (GLM)
family that examine whether two or more levels (e.g., conditions) of a categorical independent variable have
an influence on a dependent variable. As with most inferential tests, the purpose of ANOVA is to test the
likelihood that the results observed are due to chance rather than to real differences between the groups.
This entry provides a general overview of ANOVA, including a discussion of the assumptions underlying the
tests, a comparison with t-tests, and the different forms of ANOVA, along with two examples of ANOVA designs.
Assumptions Underlying ANOVA Tests
ANOVA belongs to the family of parametric inferential tests; therefore, a number of requirements related to
the variables and the population of interest must be met or else assumptions underlying the mathematical
properties will be violated. One requirement is that independent variables should be measured on a nominal
scale, such that the variables’ conditions vary qualitatively (e.g., presence or absence of a variable) but
not quantitatively. Another requirement is that the variance associated with the populations from which the
independent variable was sampled should be equal (i.e., homogeneity of variance). Dependent variables
should be quantifiable on at least an interval scale (e.g., extent of agreement with a statement; amount of
satisfaction with a relationship) and should also be normally distributed. These assumptions also highlight
ANOVA’s roots in experimental design, where researchers have a significant amount of control over the manipulation and measurement of the variables of interest. However, it is possible to use ANOVA in quasi-experimental and some correlational designs where random assignment of participants to conditions is too costly or not possible (e.g., gender). Violation of these assumptions can affect the interpretation of the results
from these tests; under these conditions nonparametric tests might better serve the researcher.
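The homogeneity-of-variance assumption can be checked informally by comparing the largest group variance to the smallest. A minimal sketch with made-up scores (formal tests such as Levene's test are preferable in practice):

```python
from statistics import variance

# Hypothetical scores for three groups (illustrative data only)
groups = {
    "A": [4.1, 5.0, 3.8, 4.6, 5.2],
    "B": [6.3, 5.9, 7.1, 6.4, 6.0],
    "C": [5.5, 4.9, 5.8, 6.1, 5.2],
}

variances = {name: variance(scores) for name, scores in groups.items()}

# A common rule of thumb: if the largest group variance is no more than
# about 4 times the smallest, homogeneity of variance is plausible.
ratio = max(variances.values()) / min(variances.values())
print(round(ratio, 2))  # 1.56
```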
Comparison to t-Test
To better understand and appreciate the utility of ANOVA, it is useful to compare it with the t-test. Both tests share similar assumptions and are applicable to designs where different
levels of a condition can be independent (i.e., participants are exposed to only one level of the independent
variable) or dependent (i.e., participants are exposed to all levels of the independent variable). And just like
t-tests, ANOVA partitions the variability into that attributed to the independent variable (i.e., between-groups or treatment variance) and variability inherent in the research context (e.g., naturally
occurring variability or error variance). However, two important differences between t-tests and F tests derived
from ANOVA are that ANOVA can accommodate research designs that utilize (a) more than two levels of an
independent variable and (b) multiple independent variables.
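The relationship between the two tests can be seen numerically: with exactly two independent groups, the one-way ANOVA F statistic equals the square of the pooled-variance t statistic. A self-contained sketch with made-up data:

```python
from math import sqrt

# Hypothetical scores for two independent groups (illustrative data only)
g1 = [5.0, 6.2, 4.8, 5.9, 6.1]
g2 = [3.9, 4.4, 5.1, 4.0, 4.6]
n1, n2 = len(g1), len(g2)
m1, m2 = sum(g1) / n1, sum(g2) / n2

ss1 = sum((x - m1) ** 2 for x in g1)
ss2 = sum((x - m2) ** 2 for x in g2)

# Pooled-variance t statistic
sp2 = (ss1 + ss2) / (n1 + n2 - 2)
t = (m1 - m2) / sqrt(sp2 * (1 / n1 + 1 / n2))

# One-way ANOVA F statistic for the same two groups
gm = (sum(g1) + sum(g2)) / (n1 + n2)
ss_between = n1 * (m1 - gm) ** 2 + n2 * (m2 - gm) ** 2
ms_between = ss_between / 1              # df_between = 2 - 1
ms_within = (ss1 + ss2) / (n1 + n2 - 2)  # df_within = N - 2
F = ms_between / ms_within

print(round(t ** 2, 6) == round(F, 6))  # True: F equals t squared
```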
Types of ANOVA Designs
One-Way ANOVA
To illustrate the different types of variables in a research context, consider the following example involving
message source effects in persuasion—common variables of interest in communication research. Imagine
that a researcher is interested in the effect of source credibility on the persuasiveness of a message
advocating the reduction of greenhouse gases. In this case, credibility is the independent variable and
persuasion is the dependent variable. The researcher hypothesizes that participants exposed to the message
from the high-credibility source will be more persuaded than those who are exposed to the same message
from either the low-credibility source or where no source information is mentioned. To test this, three
conditions are created: a high-credibility condition, a low-credibility condition, and a control condition where
credibility-related information is absent. Following exposure to the independent variable and the message,
participants report their opinion toward the reduction of greenhouse gases.
Examination of participants’ opinions will likely show that the amount of persuasion varies across participants: some may be very persuaded, some only slightly, and some moderately so. ANOVA-based tests
will partition that variability into two general types: variability attributed to the independent variable (i.e.,
treatment variance, credibility) and variability that is left over in the experimental context. This latter form of
variability is called error variance, and can originate from many different aspects in the experimental setting
not under control of the experimenter, such as characteristics the participants bring into the context (e.g.,
personality) and the context itself (e.g., room temperature). These other sources of variability are combined
into error variance, which is essentially all of the variability in a dependent variable not attributed to the
independent variable.
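This partitioning can be written out directly. A minimal sketch with hypothetical persuasion scores for the three credibility conditions, showing that the total variability splits exactly into treatment and error components:

```python
# Hypothetical persuasion scores (illustrative data only; higher = more persuaded)
conditions = {
    "high_credibility": [7.0, 8.0, 6.0, 7.5, 7.5],
    "low_credibility":  [4.0, 5.0, 3.5, 4.5, 5.0],
    "control":          [5.0, 6.0, 5.5, 4.5, 6.0],
}

all_scores = [x for scores in conditions.values() for x in scores]
grand_mean = sum(all_scores) / len(all_scores)

# Treatment (between-groups) sum of squares: group means vs. grand mean
ss_between = sum(
    len(scores) * ((sum(scores) / len(scores)) - grand_mean) ** 2
    for scores in conditions.values()
)
# Error (within-groups) sum of squares: scores vs. their own group mean
ss_within = sum(
    (x - sum(scores) / len(scores)) ** 2
    for scores in conditions.values()
    for x in scores
)
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)

print(abs(ss_total - (ss_between + ss_within)) < 1e-9)  # True
```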
The value calculated by the ANOVA test, the F value, is the ratio of these two types of variability (hence the
term analysis of variance). Specifically, an F value results from dividing the treatment variance by the error
variance. The larger the value, the smaller the likelihood that chance played a role in the differences observed
between levels of the independent variable. In traditional null hypothesis testing terms, a significant F value
is one that has a less than 5% probability of occurring by chance (p < .05), given that there is no difference (i.e., the null hypothesis is true). That is, the difference between levels of the independent variable is due to the independent variable and not to random chance. When it comes to hypothesis testing in ANOVA, it is a bit more complicated than a t-test when a study contains three or more groups because the design is more complex. Using the credibility and persuasion example, suppose the researcher obtains a significant F value such that the null hypothesis is rejected (i.e., the observed differences between the group means is not likely due to chance). The F value does not tell you which groups differ from each other; it merely indicates that at least two groups are different from each other. This is referred to as the omnibus F test, as it tests whether a combination of any two groups is different from each other. Thus, obtaining a significant F test does not mean that the researcher’s hypothesis is supported, just that the null hypothesis is rejected. The number of possible outcomes where the null hypothesis is rejected varies as a function of the number of conditions the independent variable has. In the present example with three levels of credibility, three general outcomes are possible when the null Page 3 of 6 Analysis of Variance (ANOVA) SAGE SAGE Research Methods 2017 SAGE Publications, Ltd. All Rights Reserved. hypothesis is rejected. In other words, it could be that (a) persuasion scores differ between the high- and lowcredibility conditions and the control condition is not different from the high-credibility condition, (b) persuasion scores differ between the high- and low-credibility conditions and the control condition is not different from the low-credibility condition, or (c) the three conditions are different from each other. 
Thus, rejection of the null hypothesis from an omnibus test sets the stage for follow-up tests that examine each possibility while also controlling for Type I error (i.e., incorrectly rejecting a true null hypothesis). Of course, a priori hypotheses can guide which comparisons are warranted. However, when there are no a priori expectations in terms of group differences, a number of post-hoc multiple comparison tests are possible. Common follow-up tests include Scheffé’s test, Tukey’s test, and the Bonferroni test, all of which are available in most popular statistical packages.

Factorial ANOVA Designs

As mentioned earlier, ANOVA can accommodate the use of multiple independent variables in a research design. This is particularly important because social communication processes and behavior are more complex than can be represented in one-way ANOVA designs. In factorial designs, the effects of two or more independent variables and their interaction can be examined within the same mathematical model. Each independent variable is a factor in the design. This can be useful when testing whether the effect of one independent variable on the dependent variable is influenced by (i.e., moderated by) another independent variable.

Using the credibility and persuasion example, suppose a researcher was also interested in whether the amount of initial knowledge message recipients have about greenhouse gases may also affect persuasion. With two independent variables, there are now six conditions (i.e., three levels of the credibility variable crossed with two levels of participant knowledge about greenhouse gases). By convention, factorial designs are described in terms of the nature of the independent variables in the study: the number of independent variables, the number of conditions per independent variable, and whether the conditions of the independent variables are independent or dependent. In the current example, the design would be described as a 2 × 3 between-participants factorial design.
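The number of conditions in a factorial design is simply the product of each factor's number of levels; a quick sketch (the added two-level gender factor mirrors the example given later in this entry):

```python
from math import prod

levels = [2, 3]  # 2 levels of knowledge × 3 levels of credibility
print(prod(levels))  # 6 conditions

# Adding a third two-level factor (e.g., participant gender)
print(prod(levels + [2]))  # 12 conditions
```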
The number of dimensions indicates the number of independent variables (A × B is 2, A × B × C would be 3, etc.), and the actual value (the numeral 2 or 3) represents the number of conditions (or levels) for each independent variable.

In a factorial ANOVA, partitioning of the treatment variance yields two general types of effects tested in the analysis. The first are called main effects, which refer to the influence of each independent variable independent of the other. In other words, the main effect of an independent variable represents what the study would have tested if it contained only that independent variable and left out the other. In the current example, a researcher could test for a main effect of the credibility variable and a main effect of the knowledge variable. Because the variability attributed to one independent variable is collapsed across the levels of the other, the tests are viewed as independent and each test has its own F value, thus not inflating Type I error. Again, this speaks to the utility of factorial designs because it is akin to conducting two separate one-way ANOVAs.

The second type of effect is called an interaction, which tests the combined effect of the independent variables on the dependent variable, for which an F value is also calculated. Interaction effects occur when two or more independent variables combine to produce an outcome over and beyond the main effects. Due to the nature of interactions, they are only possible in factorial designs where two or more independent variables are present. Researchers can test relatively complex yet meaningful hypotheses that involve main effects and interactions within the same study design. A common use for factorial designs is to test moderator hypotheses.
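An interaction can be checked descriptively as a "difference of differences" between cell means. A sketch with hypothetical cell means for the credibility × knowledge example, reduced to the high- and low-credibility cells:

```python
# Hypothetical mean persuasion scores for four cells (illustrative only)
means = {
    ("high_cred", "low_know"):  7.0,
    ("low_cred",  "low_know"):  4.0,
    ("high_cred", "high_know"): 6.0,
    ("low_cred",  "high_know"): 5.5,
}

# Effect of credibility within each knowledge level
effect_low_know = means[("high_cred", "low_know")] - means[("low_cred", "low_know")]
effect_high_know = means[("high_cred", "high_know")] - means[("low_cred", "high_know")]

# A nonzero difference of differences suggests a possible interaction
interaction = effect_low_know - effect_high_know
print(effect_low_know, effect_high_know, interaction)  # 3.0 0.5 2.5
```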
Moderator variables are those that change the effect of another variable on a dependent variable. For example, a researcher may hypothesize that the effect of credibility on persuasion will be greater for participants low rather than high in knowledge. In the current example, the interaction would test whether the effect of credibility on persuasion differs in the low versus high knowledge conditions (i.e., a difference of differences). In other words, knowledge may moderate the effect of the credibility variable on persuasion. Factorial designs thus provide a researcher the ability to test complex hypotheses without inflating Type I error and provide an additional test that is not possible with a one-way ANOVA.

Because the different types of effects are independent, they can be interpreted independently of each other. That is, statistically significant main effects can be interpreted without a significant interaction present, and vice versa. However, because interactions are considered higher-order effects relative to main effects, a significant interaction is considered more meaningful to interpret than any main effects that are also significant. The presence of a significant interaction is similar to that of a significant one-way ANOVA: there is a statistically significant difference between two or more conditions, but it is unclear where that difference lies. Thus, a researcher faced with a significant interaction would conduct a series of follow-up tests.

Notable Types of Factorial Designs

Three general forms of factorial designs exist, depending on the nature of the independent variables. Between-participants factorial designs consist of independent variables where participants are exposed to only one condition of the design. The credibility and knowledge example provided earlier is a between-participants factorial. In within-participants designs, however, participants are exposed to all conditions of the study.
Finally, mixed designs are those that contain at least one independent variable that is within-participants and at least one that is between-participants. A common use for such a design in communication and journalism involves measuring the dependent variable twice: before a manipulation and following exposure to the manipulation. For example, a researcher interested in how source credibility affects attitude change may have participants report their initial attitudes, then again following exposure to the credibility manipulation. Because participants report their attitudes both before and after the between-subjects manipulation, the design can help control for individual differences in initial attitudes toward the topic of interest.

Final Note on Factorial Designs

Although there are no limits to the number of independent variables a researcher may include in a factorial design, most typically involve two or three factors. One reason is that the more variables included, the more difficult the interpretation of the data can be. Another reason is that as the number of conditions increases, the greater the resources required to implement the design. For example, the knowledge and credibility factorial design contains six conditions (2 × 3 = 6), but adding another independent variable with two conditions (such as participant gender) inflates the number of conditions to 12 (2 × 3 × 2 = 12). Such a design may be impractical in terms of resources and not theoretically meaningful.

Kevin L. Blankenship

See also Experiments and Experimental Design; Quasi-Experimental Design; t-Test; Variables, Dependent; Variables, Independent

ANOVA Interpretation Set 1

Study this scenario and ANOVA table, then answer the questions in the assignment instructions.

A researcher wants to compare the efficacy of three different techniques for memorizing information: repetition, imagery, and mnemonics. The researcher randomly assigns participants to one of the techniques. Each group is instructed in its assigned memory technique and given a document to memorize within a set time period. Later, a test about the document is given to all participants. The scores are collected and analyzed using a one-way ANOVA. Here is the ANOVA table with the results:

Source    SS         df    MS        F       p
Between   114.3111    2    57.1556   19.74
Within    121.6      42     2.8952
Total     235.9111   44
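The MS and F entries in the table follow directly from the SS and df columns (MS = SS / df; F = MS between / MS within), which can be verified:

```python
# Values from the "Between" and "Within" rows of the ANOVA table
ss_between, df_between = 114.3111, 2
ss_within, df_within = 121.6, 42

ms_between = ss_between / df_between   # ≈ 57.1556
ms_within = ss_within / df_within      # ≈ 2.8952
f_value = ms_between / ms_within

print(round(f_value, 2))  # 19.74, matching the table
```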