There are four possible outcomes when conducting a hypothesis test:

- We reject the null hypothesis when the alternative hypothesis is actually true.
- We do not reject the null hypothesis when the null hypothesis is actually true.
- We reject the null hypothesis when it is actually true.
- We do not reject the null hypothesis when the alternative hypothesis is actually true.

With careful thinking, it’s easy to see that the first two possibilities are CORRECT decisions. For example, in the first possibility we reject the null hypothesis (telling the world we have data showing our underlying belief is likely not true) when the alternative hypothesis is indeed correct. It is the last two possibilities, no. 3 and no. 4, that are INCORRECT decisions.

For the third choice, we would be rejecting the null hypothesis (showing we have data that leads us to believe it is incorrect) when it is actually true. Our sample data lead us to an incorrect decision. This mistake is called a **TYPE I ERROR**. For the fourth choice, we would fail to reject the null hypothesis (our sample data would actually support the value in the null hypothesis) when indeed the alternative hypothesis is the “true” one. This mistake is called a **TYPE II ERROR**.

Nice visuals of Type I and Type II errors can be found all over the Internet. One such chart comes from the suggested textbook for the course, and looks like this.

In most problems we do, we try to keep the probability of making a Type I Error, denoted by *α* (YES, the same *α* from hypothesis testing!), as small as possible, since a Type I Error is often the more serious mistake. In this class we will rarely, if ever, discuss Type II Errors. If you go on to take additional statistics courses, you will become familiar with Type II Errors then.
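The claim that *α* is the probability of a Type I Error can be checked with a quick simulation. The sketch below is illustrative only (the test, sample size, and helper name are our assumptions, not from the course): it runs a two-sided z-test with known σ on many samples for which the null hypothesis is actually true, and counts how often we wrongly reject.

```python
import math
import random

random.seed(0)

def z_test_p_value(sample, mu0, sigma):
    """Two-sided p-value for a z-test with known sigma (hypothetical helper)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Standard normal CDF via erf; two-sided p-value
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

alpha = 0.05
trials = 20000
rejections = 0
for _ in range(trials):
    # The null hypothesis is TRUE: data really come from Normal(mu=0, sigma=1)
    sample = [random.gauss(0, 1) for _ in range(30)]
    if z_test_p_value(sample, mu0=0, sigma=1) < alpha:
        rejections += 1  # each of these is a Type I Error

rate = rejections / trials
print(rate)  # should land near alpha = 0.05
```

Flipping the setup so that the alternative hypothesis is true would instead estimate the Type II Error rate, the quantity this class sets aside for later courses.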