# Independent samples t-test with R

In science, business, or marketing research, it is common to compare two or more groups. For instance, we may want to assess whether men or women spend more money on home security systems, whether people in rural areas eat more fatty acids than people in urban areas, or whether vegans are happier than carnivores. All these questions can be addressed using an independent t-test, also known as Student's t-test. To proceed with the analysis, we need two things: a hypothesis and the t-test assumptions.

Let’s start with the hypothesis. In general, hypotheses for group comparison are as follows:

Null hypothesis: There is no difference between two compared groups.

Alternative hypothesis: There is a difference between two groups.

Let’s assume we conducted research on relationship quality. We asked 80 people who are in steady relationships to complete questionnaires on relationship quality. Exactly 40 people were married and 40 people were not married, resulting in two groups (married vs. non-married). For this research, our hypotheses will look like this:

Null hypothesis: There is no difference in mean relationship quality between married and non-married people.

Alternative hypothesis: There is a difference in mean relationship quality between married and non-married people.

Note that I use the word “mean” in both hypotheses; this will be important later in the analysis.

## Normal distributions

Now let’s move on to the assumptions. To conduct an independent t-test we need to meet a normality assumption and a variance homogeneity assumption.

The normality assumption, or the requirement for data to be normally distributed, is a fundamental assumption for an independent t-test, as well as for other parametric statistical tests (such as ANOVA or regression-type analysis).

Since parametric tests are based on mean values, violation of the normality assumption can make inferences unreliable, or even invalid. Therefore, the first thing to do is to check whether both distributions (for married and non-married people) are normal.

If we meet this assumption, we can move forward to the next step.

Normality can be assessed using several methods, the most popular being statistical tests such as the Shapiro-Wilk and Kolmogorov-Smirnov tests. There are also a few less popular methods, such as the Lilliefors test, the quantile-quantile (Q-Q) plot, or a simple histogram.

## Homogeneity of variance

Homogeneity of variance, or relative equality of variances, is an assumption that is, to some degree, related to the size of our groups. If the groups differ in the number of observations, there is a greater chance that the variances are unequal. For the t-test, homogeneity of variance must be met; otherwise, the risk of rejecting a null hypothesis that is actually true increases.

Homogeneity of variance is usually assessed with the F test (Fisher's test) or Levene's test.

Let’s take a look at our example on relationship quality. There are three variables in our dataset: a) participant ID, b) relationship (married (1) or non-married (2)), and c) relationship score. Our dataset is named ‘data’.
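In practice, the dataset would be loaded from a file; as a stand-in, here is a minimal sketch that matches the description above. The column names (`id`, `relationship`, `rel_score`) and the simulated scores are assumptions for illustration only.

```r
# Hypothetical stand-in for the dataset described in the text; in practice
# it would be loaded, e.g. data <- read.csv("relationship_quality.csv").
# Column names are assumptions based on the description above.
set.seed(123)
data <- data.frame(
  id           = 1:80,
  relationship = rep(c(1, 2), each = 40),           # 1 = married, 2 = non-married
  rel_score    = round(rnorm(80, mean = 50, sd = 10))  # simulated quality scores
)
head(data)
```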

Before we start, we need to add labels to our factor variable to make the work easier.
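A sketch of the labelling step, assuming the column names used throughout this example (a small stand-in data frame is built first so the snippet runs on its own):

```r
# Minimal stand-in for 'data'; column names are assumptions from the text
data <- data.frame(id = 1:80,
                   relationship = rep(c(1, 2), each = 40),
                   rel_score = rnorm(80, mean = 50, sd = 10))

# Attach readable labels to the grouping variable
data$relationship <- factor(data$relationship,
                            levels = c(1, 2),
                            labels = c("married", "non-married"))
table(data$relationship)
```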

Firstly, we want to verify the normality assumption. We are going to take a look at histograms. To do this, we need a package called ggplot2, which helps us make histograms for both groups at once.
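One way to do this with ggplot2 is faceting, which draws one histogram per group. The data frame below is a simulated stand-in; the column names are assumptions based on the text.

```r
library(ggplot2)

# Simulated stand-in data (column names are assumptions from the text)
set.seed(123)
data <- data.frame(
  relationship = factor(rep(c("married", "non-married"), each = 40)),
  rel_score    = rnorm(80, mean = 50, sd = 10)
)

# One histogram per group, drawn side by side
p <- ggplot(data, aes(x = rel_score)) +
  geom_histogram(bins = 10, colour = "black", fill = "grey80") +
  facet_wrap(~ relationship)
print(p)
```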

If the histograms are not helpful, or we cannot be sure whether the distributions are normal enough, we can apply an additional test, such as the Shapiro-Wilk test. We will use another package, dplyr, to group the results by relationship status.
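A sketch of the grouped Shapiro-Wilk test, again with a simulated stand-in for the dataset (column names are assumptions):

```r
library(dplyr)

# Simulated stand-in data (column names are assumptions from the text)
set.seed(123)
data <- data.frame(
  relationship = factor(rep(c("married", "non-married"), each = 40)),
  rel_score    = rnorm(80, mean = 50, sd = 10)
)

# Shapiro-Wilk p-value computed separately for each group
res <- data %>%
  group_by(relationship) %>%
  summarise("p value" = shapiro.test(rel_score)$p.value)
res
```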

This code tells R to use the dataset called ‘data’. We want the test to be performed in both groups, so we indicate that we group (group_by) our results by relationship status. Finally, we use summarise('p value' = shapiro.test(rel_score)$p.value) to print only the p-value of the Shapiro-Wilk test.
The %>% symbol is called the pipe operator and is used with the dplyr package. It is useful for organizing code and operating on large datasets.


The p-value helps us verify the hypotheses related to normality. The null hypothesis states that the distribution of our group does not differ from a normal distribution. Therefore, if the p-value is greater than the alpha level (usually 0.05), the null hypothesis is not rejected, indicating that our distributions are normal.

The first assumption is met; now we can move on to the next one, the homogeneity of variance. We want to use Levene’s test, so we need to load a package called car and call the leveneTest function.
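A sketch of that call, with the same simulated stand-in data as before (column names are assumptions):

```r
library(car)

# Simulated stand-in data (column names are assumptions from the text)
set.seed(123)
data <- data.frame(
  relationship = factor(rep(c("married", "non-married"), each = 40)),
  rel_score    = rnorm(80, mean = 50, sd = 10)
)

# Levene's test for equality of variances across the two groups
lt <- leveneTest(rel_score ~ relationship, data = data)
lt
```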

Levene’s test null hypothesis states that there is no difference between the variances in our groups. Therefore, we again want the p-value to be greater than the alpha level, so that we have no reason to reject this hypothesis. In our case, the p-value (the Pr(>F) value in the output) is greater than alpha = 0.05, indicating that the homogeneity of variance assumption is met.

If both assumptions are met, we can conduct a t-test to compare our groups and answer the research question. The t-test is a base R function, so we do not need any additional packages. Since we verified the homogeneity of variance beforehand, we can set the var.equal argument to TRUE.
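The call looks like this. Note that the numbers reported below (t = -1.51, df = 78, p = 0.135) come from the author's real data; the simulated stand-in used here will give different values.

```r
# Simulated stand-in data (column names are assumptions from the text)
set.seed(123)
data <- data.frame(
  relationship = factor(rep(c("married", "non-married"), each = 40)),
  rel_score    = rnorm(80, mean = 50, sd = 10)
)

# Student's t-test assuming equal variances (var.equal = TRUE)
tt <- t.test(rel_score ~ relationship, data = data, var.equal = TRUE)
tt
```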

We look at the test statistic, which is equal to -1.51. Our degrees of freedom are equal to 78, and the p-value is 0.135. The p-value is therefore greater than the alpha level (0.05), which means that the null hypothesis (there is no difference between the two compared groups) cannot be rejected. In other words, we infer that there is no statistically significant difference between married and non-married people in terms of relationship quality.

However, oftentimes our assumptions are not met: the variables have skewed distributions, the variances are not equal, or both. To learn what to do when the t-test assumptions are not met, read the next post, about the Mann-Whitney U test.