A limiting distribution (also called an asymptotic distribution) is the probability distribution to which a sequence of random variables converges as the sample size grows larger and larger.

**Use of limiting distributions**:

Limiting distributions are useful because we often do not know the exact sampling distribution of statistics such as the sample mean. But, because of theorems like the Central Limit Theorem, we know that the distribution of the sample mean tends toward the normal distribution as the sample size grows larger and larger.

So if the sample size is sufficiently large, then we can assume that the sampling distribution of the mean is approximately normal and use it to draw conclusions. For example, we can use it to calculate confidence intervals for the population mean.
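As a quick illustration (a minimal sketch in Python; the exponential population, the sample size of 200, and the number of simulated samples are arbitrary choices made for this example), we can simulate the sampling distribution of the mean and build a normal-approximation confidence interval:

```python
import random
import statistics

random.seed(42)

# Population: exponential with mean 2 (decidedly non-normal).
pop_mean = 2.0
n = 200            # sample size (illustrative choice)
num_samples = 5000 # number of simulated samples

# Sampling distribution of the mean: draw many samples, record each mean.
sample_means = [
    statistics.fmean(random.expovariate(1 / pop_mean) for _ in range(n))
    for _ in range(num_samples)
]

# By the CLT, the sample means should cluster around the population mean
# with standard deviation roughly pop_sd / sqrt(n) = 2 / sqrt(200) ~ 0.141.
print(statistics.fmean(sample_means))
print(statistics.stdev(sample_means))

# A normal-approximation 95% confidence interval from a single sample:
sample = [random.expovariate(1 / pop_mean) for _ in range(n)]
m = statistics.fmean(sample)
se = statistics.stdev(sample) / n ** 0.5
ci = (m - 1.96 * se, m + 1.96 * se)
print(ci)
```

Even though the population is skewed, a histogram of `sample_means` would look close to a normal bell curve, which is what licenses the normal-based interval.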

**Formal Mathematical Definition**:

The formal definition of the concept of a limiting distribution is as follows. Let {X_{n}} be a sequence of random variables with cdf's F_{X_{n}}(x), and suppose that

lim_{n→∞} F_{X_{n}}(x) = F(x)

for every point x at which F(x) is continuous. Then we say that F(x) is the limiting or asymptotic distribution of the sequence {X_{n}}.

Note that the cdf’s F_{X_{n}}(x) need not converge to F(x) at every point x. They only need to converge at those points where F(x) is continuous.

A well-known example of a limiting distribution is the t distribution tending toward the normal distribution. It is well known that as the degrees of freedom of the t distribution grow larger and larger, its pdf more and more closely approximates the standard normal pdf.
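We can verify this numerically (a small sketch using only the standard library; the grid of evaluation points and the particular degrees of freedom are arbitrary choices). The maximum gap between the t density and the standard normal density shrinks as the degrees of freedom grow:

```python
import math

def t_pdf(x, df):
    """Density of Student's t distribution with df degrees of freedom."""
    c = math.gamma((df + 1) / 2) / (math.sqrt(df * math.pi) * math.gamma(df / 2))
    return c * (1 + x * x / df) ** (-(df + 1) / 2)

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

# Largest pointwise gap between the two densities over a grid of x values.
grid = [i / 10 for i in range(-50, 51)]
for df in (1, 5, 30, 200):
    gap = max(abs(t_pdf(x, df) - normal_pdf(x)) for x in grid)
    print(df, gap)
```

Running this shows the gap dropping steadily with `df`, which is why the normal distribution is routinely used in place of the t distribution for large samples.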

**How do you find the limiting distribution?**

- As we said earlier, one way to find the limiting distribution is to take the limit of the sequence of cdf’s and identify the distribution it converges to.
- Another method to find the limiting distribution is to use the moment generating function (mgf). If the mgf’s of the sequence of random variables converge to the mgf of a known distribution, then we conclude that that distribution is the limiting distribution.
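The cdf-limit method can be checked numerically on a classic example (a sketch; the choice of Y_{n} = n(1 − M_{n}) with M_{n} the maximum of n Uniform(0,1) variables is a standard textbook case, not something from the text above). Here the exact cdf is F_{n}(y) = 1 − (1 − y/n)^n, which converges to 1 − e^{−y}, the Exponential(1) cdf:

```python
import math

def F_n(y, n):
    """Exact cdf of Y_n = n * (1 - max of n iid Uniform(0,1)), for 0 <= y <= n."""
    return 1 - (1 - y / n) ** n

def exp_cdf(y):
    """Cdf of the Exponential(1) distribution, the candidate limit."""
    return 1 - math.exp(-y)

# The gap at a fixed point y = 1 shrinks as n grows, matching
# lim_{n -> infinity} F_n(y) = 1 - e^{-y}.
for n in (5, 50, 5000):
    print(n, F_n(1.0, n), "->", exp_cdf(1.0))
```

The mgf method works the same way in spirit: compute the mgf of each Y_{n}, take the limit in n, and match the result against the mgf of a known distribution.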

**Limiting distributions in Markov chains**:

In Markov chains, the limiting distribution refers to the steady-state probabilities. After a Markov chain has run for a long time, the probabilities of being in each state stabilize, and the resulting distribution over states is known as the steady-state (or stationary) distribution.
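This can be seen by iterating a transition matrix (a minimal sketch; the two-state chain and its transition probabilities below are hypothetical numbers chosen for illustration):

```python
# Hypothetical 2-state transition matrix:
# P[i][j] = probability of moving from state i to state j.
P = [[0.9, 0.1],
     [0.5, 0.5]]

def step(dist, P):
    """One transition: new_dist[j] = sum_i dist[i] * P[i][j]."""
    return [sum(dist[i] * P[i][j] for i in range(len(P))) for j in range(len(P))]

# Start from an arbitrary distribution and let the chain run.
dist = [1.0, 0.0]
for _ in range(100):
    dist = step(dist, P)

print(dist)  # the steady-state (limiting) distribution
```

For this particular chain, solving pi = pi P by hand gives pi = (5/6, 1/6), and the iteration converges to the same values regardless of the starting distribution.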