
Chebyshev’s Inequality/Formula (with Examples)


Chebyshev’s inequality tells us the probability that a random variable lies within a certain number of standard deviations of its mean. The inequality was first stated by the French mathematician Bienaymé in 1853 and was later proved by the Russian mathematician Chebyshev in 1867.

Chebyshev’s Formula:

The formal mathematical form of the inequality is,

\LARGE P(|X-\mu|<k\sigma)\geq 1-\frac{1}{k^2}

The inequality can equivalently be stated in the opposite form as,

\LARGE P(|X-\mu|>k\sigma)\leq \frac{1}{k^2}
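As a quick illustration, here is a minimal Python sketch (the function name is my own, chosen for illustration) that computes both forms of the bound for a given k:

```python
def chebyshev_bounds(k):
    """For k > 0, return (lower bound on P(|X - mu| < k*sigma),
    upper bound on P(|X - mu| > k*sigma))."""
    if k <= 0:
        raise ValueError("k must be positive")
    tail = 1.0 / k ** 2  # upper bound on the tail probability
    return 1.0 - tail, tail

print(chebyshev_bounds(2))  # prints (0.75, 0.25)
```

Note that for k <= 1 the bounds are trivially true, since 1 - 1/k^2 <= 0.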

Examples of Chebyshev’s Inequality:

1. If we put k=3 in the first inequality above, then we get,

P(|X-\mu|<3\sigma) \geq 1-(1/9) that is,

P(|X-\mu|<3\sigma) \geq 8/9 \approx 0.89

This means that at least 89% of the data values in any distribution lie within 3 standard deviations of the mean.

2. Similarly, if we put k=2 in the above inequality we get,

P(|X-\mu|<2\sigma) \geq 1-(1/4) that is,

P(|X-\mu|<2\sigma) \geq 0.75

This means that at least 75% of the data values in any distribution lie within 2 standard deviations of the mean.
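We can also check these bounds numerically. The sketch below (my own illustration; the exponential distribution is an arbitrary choice, made only because it is far from normal) draws a large sample and confirms that the observed proportions within 2 and 3 standard deviations respect the guaranteed 75% and 89% bounds:

```python
import random

random.seed(0)
# Draw from an exponential distribution -- deliberately non-normal
data = [random.expovariate(1.0) for _ in range(100_000)]

n = len(data)
mu = sum(data) / n
sigma = (sum((x - mu) ** 2 for x in data) / n) ** 0.5

for k in (2, 3):
    within = sum(abs(x - mu) < k * sigma for x in data) / n
    print(f"k={k}: observed {within:.3f} >= guaranteed {1 - 1 / k**2:.3f}")
```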

Uses of Chebyshev’s Formula:

  1. Chebyshev’s inequality can be used to obtain upper bounds on the probability that a random variable deviates from its mean by a given amount, using only its variance.
  2. The inequality can also be used to prove the weak law of large numbers, which tells us that if we draw samples of increasing size from a population then the sample mean tends towards the population mean. This follows because the sample mean \bar{X}_n has variance \sigma^2/n, so Chebyshev's inequality gives P(|\bar{X}_n-\mu|\geq\epsilon)\leq \sigma^2/(n\epsilon^2), which tends to 0 as n grows.
  3. As seen above, the Chebyshev formula gives us results similar to the empirical rule: it tells us what percentage of the data lies within a certain distance of the mean. The difference is that the empirical rule applies only to normal distributions, whereas the Chebyshev formula applies to every distribution. Since the empirical rule covers only this specific case, it is much sharper and more accurate than Chebyshev’s inequality, as the comparison sketched after this list shows.
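To make the comparison in point 3 concrete, here is a short sketch (my own, using the exact normal probability via the error function) contrasting what the empirical rule gives for a normal distribution with what Chebyshev's inequality guarantees for an arbitrary one:

```python
import math

def normal_within(k):
    # Exact P(|X - mu| < k*sigma) when X is normally distributed
    return math.erf(k / math.sqrt(2))

def chebyshev_within(k):
    # Chebyshev's lower bound, valid for ANY distribution
    return 1 - 1 / k ** 2

for k in (1, 2, 3):
    print(f"k={k}: empirical rule (normal) = {normal_within(k):.4f}, "
          f"Chebyshev bound = {chebyshev_within(k):.4f}")
```

For k = 2, for example, the normal distribution places about 95.45% of its mass within two standard deviations, while Chebyshev guarantees only 75%.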

Proof of Chebyshev’s Inequality:

The Chebyshev inequality can be proved by an elementary probabilistic argument, as shown below. We first assume that X is a continuous random variable with probability density f(x), and we begin with the definition of the variance.

\Large \sigma^2=\int_{-\infty}^{\infty}(x-\mu)^2f(x)\,dx\geq \int_{|x-\mu|\geq k\sigma}(x-\mu)^2f(x)\,dx

since the integrand is non-negative. On the region |x-\mu|\geq k\sigma we have (x-\mu)^2\geq k^2\sigma^2, so

\Large \int_{|x-\mu|\geq k\sigma}(x-\mu)^2f(x)\,dx\geq k^2\sigma^2\int_{|x-\mu|\geq k\sigma}f(x)\,dx=k^2\sigma^2\,P(|X-\mu|\geq k\sigma)

Dividing both sides by k^2\sigma^2 gives P(|X-\mu|\geq k\sigma)\leq 1/k^2, and taking complements yields the first form of the inequality.

If the random variable is discrete then the Chebyshev inequality follows in exactly the same way, with integration replaced by summation throughout the above proof.
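As a sanity check of the discrete case, the sketch below (my own; the Poisson distribution is an arbitrary choice) verifies the inequality by direct summation of a probability mass function:

```python
import math

# Poisson(lam) is discrete with mean lam and variance lam
lam = 4.0
mu, sigma = lam, math.sqrt(lam)

def pmf(i):
    return math.exp(-lam) * lam ** i / math.factorial(i)

for k in (2, 3):
    # Sum the pmf over the integers within k standard deviations of the mean;
    # truncating at 60 is safe since the remaining tail mass is negligible
    p_within = sum(pmf(i) for i in range(60) if abs(i - mu) < k * sigma)
    print(f"k={k}: P(|X - mu| < k*sigma) = {p_within:.4f} >= {1 - 1 / k**2:.4f}")
```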

One-Sided Chebyshev Inequality:

Suppose now that we are interested in the fraction of data values that exceed the sample mean by at least k sample standard deviations, where k is any positive number. In this case, we use the one-sided Chebyshev inequality (also known as Cantelli’s inequality), which states that,

\Large P(X-\mu>k\sigma)\leq \frac{1}{1+k^2}

Notice that this differs from the usual Chebyshev inequality: the usual inequality bounds the fraction of data values on either side of the sample mean that differ from it by at least k sample standard deviations, whereas the one-sided inequality bounds only the fraction of values that lie above the sample mean and differ from it by at least k sample standard deviations.
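The following minimal sketch (my own illustration) compares the two bounds side by side; notice that for k = 1 the two-sided bound is vacuous while the one-sided bound still says something useful:

```python
def two_sided_bound(k):
    # Usual Chebyshev: P(|X - mu| > k*sigma) <= 1/k^2
    return 1 / k ** 2

def one_sided_bound(k):
    # One-sided Chebyshev: P(X - mu > k*sigma) <= 1/(1 + k^2)
    return 1 / (1 + k ** 2)

for k in (1, 2, 3):
    print(f"k={k}: two-sided <= {two_sided_bound(k):.4f}, "
          f"one-sided <= {one_sided_bound(k):.4f}")
```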

Improvements to Chebyshev’s Inequality:

We can improve upon Chebyshev’s inequality if we assume the existence of moments up to the fourth order. Suppose that X is a random variable with mean 0 and variance \sigma^2. We then have,

\Large P(|X|>k\sigma)\leq \frac{\mu_4 - \sigma^4}{\mu_4 + \sigma^4k^4-2k^2\sigma^4}

where k is greater than 1 and \mu_4 denotes the fourth central moment of X.
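To see how much the fourth-moment assumption buys, here is a small sketch (my own; it assumes \mu_4 = 3\sigma^4, which holds exactly for a normal distribution) comparing the refined bound against the standard 1/k^2:

```python
def standard_bound(k):
    # Usual Chebyshev bound on P(|X| > k*sigma)
    return 1 / k ** 2

def fourth_moment_bound(k, mu4, sigma):
    # Refined bound using the fourth central moment, stated for k > 1
    s4 = sigma ** 4
    return (mu4 - s4) / (mu4 + s4 * k ** 4 - 2 * k ** 2 * s4)

sigma = 1.0
mu4 = 3 * sigma ** 4  # fourth central moment of a normal distribution
for k in (2, 3, 4):
    print(f"k={k}: standard <= {standard_bound(k):.4f}, "
          f"fourth-moment <= {fourth_moment_bound(k, mu4, sigma):.4f}")
```

The refined bound pays off in the tails: for k = 3 it gives roughly 0.03 against Chebyshev’s 0.11, though for k close to 1 it need not be smaller than 1/k^2.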
