Distributions differ in their central tendency and variability. When conducting or reading about research, we often work with distributions that are genuinely different, yet we are required to compare them with one another. Such comparisons need a common standard, and one such standardized score is the Z score.

The Z score of a data value $x$ is calculated as $Z = \frac{x-\mu}{\sigma}$, where $\mu$ denotes the mean and $\sigma$ the standard deviation of the distribution. We now list some advantages and disadvantages of using Z scores in statistical applications.
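The formula can be computed directly; the exam figures below are hypothetical, chosen only to illustrate the calculation:

```python
def z_score(x, mu, sigma):
    """Return the Z score of a data value.

    x     : raw data value
    mu    : mean of the distribution
    sigma : standard deviation of the distribution (must be > 0)
    """
    return (x - mu) / sigma

# Hypothetical exam with mean 70 and standard deviation 10:
# a raw score of 90 lies 2 standard deviations above the mean.
print(z_score(90, 70, 10))  # 2.0
print(z_score(60, 70, 10))  # -1.0
```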

**Advantages of Z Scores:**

- The Z score tells us how far a raw data value lies from the mean, in units of standard deviations. For example, a Z score of 2 means the value lies 2 standard deviations to the right of the mean, while a Z score of -1 means the value lies 1 standard deviation to the left of the mean.
- Since Z scores are standardized, they allow comparisons between data sets with different means and standard deviations, levelling the playing field when comparing performance across groups. For example, suppose two students take two different tests and obtain raw scores of 200 and 300 marks respectively. The raw scores alone are not helpful for comparison, since we do not know how each student performed relative to the other exam takers, and the two exams may differ in difficulty. But if we are told that their Z scores are +2 and +3 respectively, we can conclude that the second student performed better relative to their peers.
- Z scores serve as the test statistic in many standard hypothesis tests, such as the Z test for equality of means. The p-value corresponding to a Z score is also easily found from the standard normal table.
- Z scores allow us to calculate probabilities and the relative position of a particular value, since the standard normal distribution is well understood. For example, suppose we are given the exam scores of 100 students in terms of Z scores. If a particular student has a Z score of +2, and the scores are approximately normal, that student is in roughly the top 2.3% of the 100 students.
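The comparison and percentile ideas above can be sketched with Python's standard library. The exam means and standard deviations are hypothetical, and the percentile calculation assumes the scores are approximately normal:

```python
from statistics import NormalDist

std_normal = NormalDist(mu=0.0, sigma=1.0)

# Hypothetical students: raw scores on two different exams.
student_a = (200 - 160) / 20   # exam A: mean 160, sd 20 -> Z = +2
student_b = (300 - 240) / 20   # exam B: mean 240, sd 20 -> Z = +3

# Z scores put both performances on a common scale.
print(student_a, student_b)    # 2.0 3.0 -> student B did relatively better

# Under a normal model, a Z score of +2 sits at roughly the
# 97.7th percentile, i.e. about the top 2.3% of scores.
print(round(std_normal.cdf(2.0), 4))  # 0.9772
```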

**Disadvantages of Z Scores:**

- Z scores cannot be computed, or meaningfully assigned, for nominal or ordinal data.
- The original data values cannot be recovered from Z scores unless the mean and standard deviation of the distribution are known.
- Although the Z score accounts for the mean and standard deviation of the distribution, it ignores its skewness and kurtosis. If the distribution is not symmetric, conclusions based on Z scores (for example, percentile estimates) can be erroneous.
- The usual interpretation of Z scores assumes that the data are roughly normally distributed, and this assumption does not hold in all circumstances.
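The second point above can be seen directly: a Z score alone is ambiguous, but given the mean and standard deviation the raw value is recovered by inverting the formula, $x = \mu + Z\sigma$. A minimal sketch with hypothetical parameters:

```python
def z_score(x, mu, sigma):
    """Standardize a raw value: Z = (x - mu) / sigma."""
    return (x - mu) / sigma

def raw_value(z, mu, sigma):
    """Invert the Z score formula: x = mu + z * sigma."""
    return mu + z * sigma

# The same Z score maps back to very different raw values
# depending on the distribution's parameters.
z = 1.5
print(raw_value(z, mu=100, sigma=15))  # 122.5
print(raw_value(z, mu=50, sigma=5))    # 57.5
```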