
One-way ANOVA

Analysis of variance (ANOVA), developed by R. A. Fisher (Fisher, 1925), is a collection of statistical models used to analyze differences between group means. In ANOVA, the observed variance in a particular variable, usually an outcome variable, is partitioned into components attributable to different sources of variation, typically the between-group variation and the within-group variation. Simply speaking, ANOVA provides a statistical test of whether or not the means of several groups are equal, and therefore generalizes the t-test to more than two groups.

One-way ANOVA

One-way ANOVA typically evaluates whether there is a difference in the means of three or more groups, although it also works for a two-group analysis. In other words, it investigates the effect of a grouping variable on the outcome variable.

In the ACTIVE data, we have four groups: one control group and three training groups (memory, reasoning, and speed training). As an example, we test whether the four groups differ in memory performance measured by the Hopkins Verbal Learning Test, denoted by the hvltt2 variable in the data set. For this purpose, one-way ANOVA can be used.
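Before conducting the test, it can be informative to look at the group sizes and the observed group means. The short sketch below is one way to do this; it assumes the active data set has been loaded (e.g., with usedata('active') as in the code later in this section).

# Quick descriptive look at the groups
# (assumes the active data have been loaded, e.g., with usedata('active'))
table(active$group)                        # number of participants in each group
tapply(active$hvltt2, active$group, mean)  # observed mean memory score in each group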

Specifically for this example, the null and alternative hypotheses are

\[ H_{0}:\:\mu_{1}=\mu_{2}=\mu_{3}=\mu_{4}\]

\[H_{a}:\; H_{0}\ \text{is not true; that is, at least two groups have different means.}\]

For the hypothesis testing, an F-test is used. The basic idea is to decompose the total variation of the outcome variable into the variation due to differences between groups and the variation within groups. The variation is measured by sums of squares. For one-way ANOVA, we have \[ SST = SSB + SSW, \] where SST is the total sum of squares, SSB is the between-group sum of squares, and SSW is the within-group sum of squares.

Then the F-statistic is \[ F=\frac{SSB/df_B}{SSW/df_W} = \frac{MSB}{MSW},\] where $MSB = SSB/df_B$ and $MSW = SSW/df_W$ are the between-group and within-group mean squares.

If the null hypothesis is true, the F-statistic follows an F distribution with degrees of freedom $df_B = k - 1$ and $df_W = n - k$ under certain conditions, where $k$ is the number of groups and $n$ is the total sample size. The R code below conducts the one-way ANOVA for the ACTIVE data.

> usedata('active')
> attach(active)
> 
> model<-lm(hvltt2~factor(group))
> anova(model)
Analysis of Variance Table

Response: hvltt2
                Df Sum Sq Mean Sq F value    Pr(>F)    
factor(group)    3    859 286.203  10.051 1.466e-06 ***
Residuals     1571  44736  28.476                      
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
> 
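As an illustrative check, the sums of squares and the F-statistic can also be computed directly from their definitions. The sketch below assumes the active data are still attached as in the code above; the results should match the anova() output.

# Illustrative check of the F-statistic from its definition
# (assumes active is attached, so hvltt2 and group are available)
grandmean <- mean(hvltt2)
groupmean <- ave(hvltt2, group)          # each observation's group mean

SSB <- sum((groupmean - grandmean)^2)    # between-group sum of squares
SSW <- sum((hvltt2 - groupmean)^2)       # within-group sum of squares

dfB <- 4 - 1                             # number of groups minus 1
dfW <- length(hvltt2) - 4                # total sample size minus number of groups

Fstat <- (SSB/dfB) / (SSW/dfW)           # should match the F value in the anova() table
pf(Fstat, dfB, dfW, lower.tail = FALSE)  # corresponding p-value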

Post-hoc multiple comparison

In the one-way ANOVA, we found that at least two groups differ in memory performance. We can further investigate which groups are different. In R, the function pairwise.t.test() can be used to conduct a t-test for each pair of groups to see whether they differ significantly. For example, the code below shows how to conduct such an analysis.

> usedata('active')
> attach(active)
> 
> pairwise.t.test(hvltt2, group, p.adj = "none")

	Pairwise comparisons using t tests with pooled SD 

data:  hvltt2 and group 

  1       2      3     
2 2.9e-05 -      -     
3 0.0014  0.2995 -     
4 2.8e-07 0.3375 0.0445

P value adjustment method: none 
> 
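As a side note, the result of pairwise.t.test() can be saved so that the p-values are available as a matrix for further use; a small sketch, again assuming active is attached:

# Store the test result; the unadjusted p-values are in the p.value component
res <- pairwise.t.test(hvltt2, group, p.adj = "none")
res$p.value   # lower-triangular matrix of unadjusted p-values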

From the output above, we might "naively" conclude that group 1 is statistically different from groups 2, 3, and 4, and that group 3 is different from group 4. However, in doing so, we run into the multiple comparison problem: when more than one hypothesis test is conducted simultaneously, the tests as a whole are more likely to incorrectly reject a null hypothesis. To demonstrate this, consider two hypotheses:

\[ H_{01}: \text{group 1 = group 2} \] and \[ H_{02}: \text{group 3 = group 4}. \] 

Suppose that for each individual hypothesis the type I error rate is controlled at 0.05, meaning there is a 5% probability of rejecting the hypothesis even if it is true. Then, assuming the two tests are independent, the probability of rejecting at least one of the two hypotheses when both are true is $1 - (1-0.05)\times(1-0.05) = 0.0975$, which is about twice the nominal level of 0.05. This indicates that to control the overall type I error rate, one should use a smaller significance level for each individual test. Such a significance level can be calculated. Suppose we have $k$ simultaneous hypotheses to test and we want to keep the overall type I error rate at 0.05. Let $\alpha^*$ be the significance level to use for each test. Then we have \[ 1-(1-\alpha^*)^k = 0.05. \] Solving this equation leads to \[ \alpha^*= 1 - 0.95^{1/k} \approx 0.05/k. \]

For the one-way ANOVA post-hoc comparison, we conducted a total of 6 hypothesis tests. Therefore, the corrected significance level should be $0.05/6 \approx 0.008$, and only a p-value smaller than this can be considered significant. In this case, the difference between group 3 and group 4 is no longer statistically significant. An equivalent way to handle the problem is to multiply each obtained p-value by the number of tests and then compare it with 0.05 as usual. This way of adjusting the p-values is called the Bonferroni correction (Dunn, 1961). It can be done easily in R as shown below.

> usedata('active')
> attach(active)
> 
> pairwise.t.test(hvltt2, group, p.adj = "bonf")

	Pairwise comparisons using t tests with pooled SD 

data:  hvltt2 and group 

  1       2       3      
2 0.00017 -       -      
3 0.00828 1.00000 -      
4 1.7e-06 1.00000 0.26720

P value adjustment method: bonferroni 
> 
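The adjustment above can also be verified by hand. The short sketch below, using only base R, checks the familywise error rate and the corrected significance level derived earlier, and applies p.adjust() to one of the unadjusted p-values (the group 3 vs. group 4 comparison).

# Familywise type I error rate for two independent tests at alpha = 0.05
1 - (1 - 0.05)^2                 # about 0.0975

# Corrected significance level for 6 simultaneous tests
1 - 0.95^(1/6)                   # about 0.0085
0.05 / 6                         # Bonferroni approximation, about 0.0083

# Bonferroni adjustment of a single p-value (group 3 vs. group 4 above)
p.adjust(0.0445, method = "bonferroni", n = 6)  # 0.0445 * 6 = 0.267, consistent with 0.26720 above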

Barplot with standard error or confidence interval

With a fairly large sample size, it can be useful to make a barplot to show the group information. The height of each bar is proportional to the mean of its group, and the standard error or the confidence interval for the mean of each group can be added to the bars. Such a plot can be generated using the R package ggplot2. To generate such a plot, we need a data frame with the following information: the grouping variable, the mean of each group, and the standard error. For the ACTIVE data, this information is given in the following table.

group  mean   se
1      26.1   0.27
2      24.5   0.27
3      24.9   0.25
4      24.1   0.29
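For reference, the means and standard errors ($se = sd/\sqrt{n}$) in the table can be computed directly with base R, as in the sketch below; the main code that follows uses summarySE() from the Rmisc package instead.

# Group means and standard errors computed with base R
# (assumes the active data have been loaded)
groupmean <- tapply(active$hvltt2, active$group, mean)
groupse <- tapply(active$hvltt2, active$group, sd) /
    sqrt(tapply(active$hvltt2, active$group, length))
round(cbind(mean = groupmean, se = groupse), 2)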

The R code below calculates the information in the table and generates a bar plot as shown in the output. We now explain each line of the R code.

> library(Rmisc)
Loading required package: lattice
Loading required package: plyr
> library(ggplot2)
> 
> usedata('active')
> stats <- summarySE(active, measurevar="hvltt2", groupvars=c("group"))
> stats
  group   N   hvltt2       sd        se        ci
1     1 392 26.10204 5.293416 0.2673579 0.5256389
2     2 387 24.49871 5.307817 0.2698115 0.5304842
3     3 407 24.89189 5.116197 0.2536005 0.4985340
4     4 389 24.13111 5.625401 0.2852192 0.5607685
> 
> ggplot(stats, aes(x=factor(group), y=hvltt2)) + 
+     geom_bar(stat="identity", fill="blue", width=0.8) +
+     geom_errorbar(aes(ymin=hvltt2-se, ymax=hvltt2+se),
+                   width=.2, colour='red') +
+     xlab("Groups") +
+     ylab("Memory test score") +
+     scale_x_discrete(breaks=c("1", "2", "3", "4"),
+         labels=c("Memory", "Reasoning", "Speed", "Control"))
> 
[Bar plot of the mean memory test score (hvltt2) with standard error bars for the Memory, Reasoning, Speed, and Control groups.]

To generate the plot, the ggplot() function from the ggplot2 package is used.
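
If confidence intervals are preferred to standard errors, the ci column returned by summarySE() can be used in geom_errorbar() instead of se. A sketch under that assumption, reusing the stats data frame created above:

# Same bar plot, but with 95% confidence intervals for the group means
# (reuses the stats data frame returned by summarySE() above)
ggplot(stats, aes(x = factor(group), y = hvltt2)) +
    geom_bar(stat = "identity", fill = "blue", width = 0.8) +
    geom_errorbar(aes(ymin = hvltt2 - ci, ymax = hvltt2 + ci),
                  width = .2, colour = 'red') +
    xlab("Groups") +
    ylab("Memory test score") +
    scale_x_discrete(breaks = c("1", "2", "3", "4"),
        labels = c("Memory", "Reasoning", "Speed", "Control"))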