Whenever we carry out a hypothesis test, two kinds of error can occur. These are known as Type 1 and Type 2 errors, and their probabilities are denoted α (alpha) and β (beta) respectively.

The alpha level (also called the level of significance) is the probability of a Type 1 error: the probability that we reject the null hypothesis even though it is true. It is usually expressed as a proportion or a percentage, and it measures the chance of wrongly rejecting a true null hypothesis.

For example, an alpha level of 0.05 means that in the long run there is a 5% risk of rejecting the null hypothesis even though it is true. So if you conducted the test, say, a hundred times on data for which the null hypothesis actually holds, you would wrongly reject it around five times.
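This long-run interpretation is easy to check by simulation. The sketch below (a minimal illustration, not taken from the text) repeatedly draws samples from a standard normal distribution, so the null hypothesis "mean = 0" is true by construction, and counts how often a two-sided z-test rejects it at alpha = 0.05. The sample size of 30 and the number of trials are arbitrary choices for the demonstration.

```python
import math
import random

def z_test_p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: mean == mu0, with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) * math.sqrt(n) / sigma
    # Standard normal CDF evaluated at |z|, via the error function
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

random.seed(42)
alpha = 0.05
trials = 10_000
# Every sample is drawn from N(0, 1), so H0 (mean = 0) is true each
# time; any rejection is therefore a Type 1 error.
rejections = sum(
    z_test_p_value([random.gauss(0.0, 1.0) for _ in range(30)]) < alpha
    for _ in range(trials)
)
print(rejections / trials)  # close to 0.05 in the long run
```

The printed rejection rate hovers around 0.05, matching the "about five times in a hundred" reading of the alpha level.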

The most common alpha level is 5%. We want the probability of a Type 1 error to be small, but we cannot simply push alpha towards zero: for a fixed sample size, reducing alpha increases the Type 2 error. The two errors trade off against each other, so decreasing one increases the other.
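The tradeoff can be made concrete with a small calculation. The sketch below (an illustration under assumed numbers, not from the text) computes β for a one-sided z-test of H0: mean = 0 against H1: mean > 0, when the true mean is 0.5 with sample size 25 and sigma 1; all of those values are hypothetical. As alpha shrinks, the rejection cutoff moves further out and β grows.

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution

def type2_error(alpha, effect=0.5, n=25, sigma=1.0):
    """Beta for a one-sided z-test of H0: mean = 0 vs H1: mean > 0,
    when the true mean is `effect`. Defaults are illustrative only."""
    z_crit = nd.inv_cdf(1.0 - alpha)       # rejection cutoff under H0
    shift = effect * (n ** 0.5) / sigma    # standardized true mean
    return nd.cdf(z_crit - shift)          # P(fail to reject | H1 true)

for alpha in (0.10, 0.05, 0.01):
    print(f"alpha={alpha:.2f}  beta={type2_error(alpha):.3f}")
```

Tightening alpha from 0.10 down to 0.01 visibly inflates beta, which is exactly the tradeoff described above; in practice the usual escape from it is to collect a larger sample.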

The Type 2 error probability β (beta) is the probability of failing to reject the null hypothesis even though it is false (its complement, 1 − β, is called the power of the test). It is generally considered more important to control the Type 1 error than the Type 2 error, because we want to give the benefit of the doubt to the null hypothesis.

The complement of the alpha level is the confidence level. For example, if alpha is 5%, the confidence level is 95%: when the null hypothesis is true, the test will correctly fail to reject it 95% of the time.

It should be noted that the alpha level is not the same as the p-value. The alpha level is chosen before the test and fixes the probability of a Type 1 error we are willing to tolerate, whereas the p-value is computed from the data and measures the probability of observing a result at least as extreme as the one obtained, assuming the null hypothesis is true. If the p-value is greater than alpha, we do not reject the null hypothesis.
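The decision rule at the end of that comparison is simple enough to state as a tiny helper. This is just a sketch of the convention described above; the function name and the wording of its return values are made up for illustration.

```python
def decide(p_value, alpha=0.05):
    """Compare a p-value against a pre-chosen alpha level.

    The alpha level is fixed before the experiment; the p-value is
    computed afterwards from the observed data.
    """
    return "reject H0" if p_value <= alpha else "fail to reject H0"

print(decide(0.03))  # 0.03 <= 0.05 -> reject H0
print(decide(0.20))  # 0.20 >  0.05 -> fail to reject H0
```

Note that the outcome is "fail to reject H0", not "accept H0": a large p-value means the data are compatible with the null hypothesis, not that it has been proven true.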