📌 Find the probability that X falls below, above, or between values in a normal distribution.
📌 Find the x value that corresponds to a given cumulative probability (percentile).
📌 Apply the 68-95-99.7 rule to find what ranges contain what % of normally distributed data.
📌 Chebyshev's theorem works for ANY distribution — not just normal. Find minimum % of data within k std devs.
📌 For lognormal distributions: enter mu and sigma of ln(X) — the underlying normal parameters.
📌 Find probability for a sample mean using the Central Limit Theorem. For n ≥ 30, x̅ ~ N(μ, σ²/n).
⚠️ Disclaimer: Results are for educational and informational purposes. Verify critical statistical calculations with a qualified statistician.

📚 Sources & Methodology

All probability computations use high-precision numerical approximations validated against standard statistical tables.

Normal Distribution — Complete Guide with All Formulas

The normal distribution — also called the Gaussian distribution or bell curve — is the most important probability distribution in statistics. It describes the distribution of many natural measurements (heights, weights, IQ scores, measurement errors) and, through the Central Limit Theorem, underpins virtually all of classical statistical inference.

The Normal Distribution Formula (PDF and CDF)

The normal distribution is completely defined by just two parameters: the mean μ (center of the bell) and the standard deviation σ (width of the bell). The probability density function (PDF) gives the height of the bell curve at any x value. The cumulative distribution function (CDF) gives the area under the curve to the left — the probability that X is less than or equal to any given value.

Normal Distribution Formulas
PDF: f(x) = (1 / (σ√(2π))) × e^(-½((x-μ)/σ)²)
CDF: Φ(x) = P(X ≤ x) = ½[1 + erf((x-μ) / (σ√2))]
Z-score: z = (x - μ) / σ
P(a ≤ X ≤ b) = Φ((b-μ)/σ) - Φ((a-μ)/σ)
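These formulas can be checked numerically. A minimal Python sketch using only the standard library (the calculator's own implementation may differ):

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Height of the bell curve at x."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x), computed via the error function."""
    return 0.5 * (1 + math.erf((x - mu) / (sigma * math.sqrt(2))))

def prob_between(a, b, mu=0.0, sigma=1.0):
    """P(a <= X <= b) = Phi(b) - Phi(a)."""
    return normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)

# Standard normal checks against table values
print(round(normal_pdf(0), 4))        # peak height 1/sqrt(2*pi) ≈ 0.3989
print(round(normal_cdf(1.96), 4))     # ≈ 0.975
print(round(prob_between(-1, 1), 4))  # ≈ 0.6827
```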

The Empirical Rule — 68-95-99.7 Rule Explained

The empirical rule is one of the most useful properties of the normal distribution. It gives you an immediate sense of data spread without any calculation. It applies when data is approximately normally distributed, which covers a surprising number of real-world measurements.

μ ± 1σ → 68.27% (within 1 std dev)
μ ± 2σ → 95.45% (within 2 std devs)
μ ± 3σ → 99.73% (within 3 std devs)

Real-world applications of the empirical rule: IQ scores (mean=100, σ=15) — 68% of people score 85-115; adult male heights (mean=70 in, σ=3 in) — 95% of men are 64-76 inches tall; manufacturing tolerances — defects beyond 3σ trigger process review.

💡
Exact vs approximate: The empirical rule uses round numbers (68%, 95%, 99.7%). The exact values are 68.27%, 95.45%, and 99.73%. For 95% exactly, use μ ± 1.96σ, not μ ± 2σ.
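The exact coverage figures follow directly from the CDF; a quick Python check with the standard library's statistics.NormalDist:

```python
from statistics import NormalDist

Z = NormalDist()  # standard normal: mu=0, sigma=1
for k in (1, 2, 3):
    coverage = Z.cdf(k) - Z.cdf(-k)  # P(-k <= Z <= k)
    print(f"within {k} std devs: {coverage:.4%}")

# The exact z for 95% central coverage is 1.96, not 2
print(round(Z.inv_cdf(0.975), 3))  # 1.96
```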

Inverse Normal Distribution — Finding Values from Percentiles

The inverse normal (also called probit function or percent point function) reverses the CDF lookup: given a probability p, find the value x such that P(X ≤ x) = p. This is used constantly in statistics for finding critical values, setting confidence interval boundaries, and establishing quality control thresholds.

Common Inverse Normal Critical Values
invNorm(0.90) = 1.282 → 90th percentile / one-tailed 10% alpha
invNorm(0.95) = 1.645 → 95th percentile / 90% CI two-sided
invNorm(0.975) = 1.960 → 97.5th percentile / 95% CI two-sided
invNorm(0.995) = 2.576 → 99.5th percentile / 99% CI two-sided
For a general X ~ N(μ, σ²): x = μ + σ × invNorm(p)
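These critical values are easy to reproduce with the standard library; inv_norm below is just an illustrative wrapper name around statistics.NormalDist.inv_cdf:

```python
from statistics import NormalDist

def inv_norm(p, mu=0.0, sigma=1.0):
    """Value x with P(X <= x) = p (percent point function)."""
    return mu + sigma * NormalDist().inv_cdf(p)

for p in (0.90, 0.95, 0.975, 0.995):
    print(f"invNorm({p}) = {inv_norm(p):.3f}")

# 95th percentile of IQ scores, N(100, 15^2)
print(round(inv_norm(0.95, mu=100, sigma=15), 1))  # ≈ 124.7
```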

Chebyshev's Theorem — For Any Distribution

Chebyshev's theorem is the more conservative, universally applicable version of the empirical rule. While the empirical rule applies only to normal distributions, Chebyshev's inequality works for any distribution — skewed, bimodal, or unknown. The trade-off is that its bounds are much less tight.

Chebyshev's Theorem
P(|X - μ| ≤ kσ) ≥ 1 - 1/k²
k=2: at least 75% of data within 2 std devs (vs 95.45% for normal)
k=3: at least 88.9% within 3 std devs (vs 99.73% for normal)
k=4: at least 93.75% | k=5: at least 96% | k=10: at least 99%
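The bound is simple enough to tabulate alongside the exact normal coverage; a short Python comparison:

```python
from statistics import NormalDist

def chebyshev_bound(k):
    """Minimum fraction within k std devs, for ANY distribution (k > 1)."""
    return 1 - 1 / k**2

Z = NormalDist()
for k in (2, 3, 4, 5, 10):
    exact_normal = Z.cdf(k) - Z.cdf(-k)  # what a normal actually achieves
    print(f"k={k}: Chebyshev >= {chebyshev_bound(k):.2%}, normal = {exact_normal:.2%}")
```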

Chebyshev's theorem is especially useful in finance (stock returns are not normally distributed), quality control when distribution shape is unknown, and any situation where you need a guaranteed minimum bound rather than an approximation.

Lognormal Distribution — When to Use It

A lognormal distribution arises naturally when a variable is the product of many small independent factors — making it common in stock prices, income, biological measurements, and survival times. If ln(X) ~ N(μ, σ²), then X follows a lognormal distribution.

Lognormal Distribution Parameters
Mean = e^(μ + σ²/2)
Median = e^μ
Mode = e^(μ - σ²)
Variance = (e^(σ²) - 1) × e^(2μ + σ²)
CDF: P(X ≤ x) = Φ((ln x - μ) / σ)
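These parameter formulas can be sketched in Python; lognormal_stats and lognormal_cdf are illustrative helper names, not part of any particular library:

```python
import math
from statistics import NormalDist

def lognormal_stats(mu, sigma):
    """Mean/median/mode of X, where ln(X) ~ N(mu, sigma^2)."""
    return {
        "mean": math.exp(mu + sigma**2 / 2),
        "median": math.exp(mu),
        "mode": math.exp(mu - sigma**2),
    }

def lognormal_cdf(x, mu, sigma):
    """P(X <= x) = Phi((ln x - mu) / sigma), for x > 0."""
    return NormalDist().cdf((math.log(x) - mu) / sigma)

# Example with mu=0, sigma=1 for ln(X): mean e^0.5 ≈ 1.6487 > median e^0 = 1
print(lognormal_stats(0, 1))
print(round(lognormal_cdf(1.0, 0, 1), 4))  # ln(1) = 0, so Phi(0) = 0.5
```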

Normal Distribution Z-Score Table Reference

Z-score | P(Z ≤ z) (CDF) | P(Z ≥ z) (right tail) | Use case
-3.000 | 0.0013 | 0.9987 | Six sigma lower limit
-2.576 | 0.0050 | 0.9950 | 99% CI lower critical value
-1.960 | 0.0250 | 0.9750 | 95% CI lower critical value
-1.645 | 0.0500 | 0.9500 | 90% CI lower / one-tailed 5%
0.000 | 0.5000 | 0.5000 | Mean — 50th percentile
1.282 | 0.9000 | 0.1000 | 90th percentile
1.645 | 0.9500 | 0.0500 | 95th percentile
1.960 | 0.9750 | 0.0250 | 97.5th percentile (95% CI)
2.576 | 0.9950 | 0.0050 | 99.5th percentile (99% CI)
3.000 | 0.9987 | 0.0013 | Six sigma upper limit

Sampling Distribution of the Mean

By the Central Limit Theorem, if you take samples of size n from any population with mean μ and finite variance σ², the distribution of sample means approaches normal as n increases: x̅ ~ N(μ, σ²/n). This is the foundation of all confidence intervals and one-sample hypothesis tests. For n ≥ 30, the approximation is generally excellent regardless of the population's shape.

Sampling Distribution — Normal Probability for x̅
x̅ ~ N(μ, σ²/n)
Standard Error = σ / √n
Z for sample mean = (x̅ - μ) / (σ / √n)
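A sketch of the sample-mean calculation (the population values in the example are hypothetical):

```python
import math
from statistics import NormalDist

def prob_sample_mean_above(xbar, mu, sigma, n):
    """P(sample mean > xbar) for samples of size n, via the CLT approximation."""
    se = sigma / math.sqrt(n)   # standard error of the mean
    z = (xbar - mu) / se        # z-score for the sample mean
    return 1 - NormalDist().cdf(z)

# Hypothetical population: mu=70, sigma=10; samples of n=36
# z = (72 - 70) / (10 / 6) = 1.2
print(round(prob_sample_mean_above(72, 70, 10, 36), 4))  # ≈ 0.1151
```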

❓ Frequently Asked Questions

What is the formula for the normal distribution?
PDF: f(x) = (1/(σ√(2π))) × e^(-½((x-μ)/σ)²). CDF: Φ(x) = P(X ≤ x). The normal distribution is fully specified by its mean μ (center of the bell curve) and standard deviation σ (width). The standard normal has μ = 0 and σ = 1.

What is the 68-95-99.7 rule?
For a normal distribution: 68.27% of data falls within ±1σ of the mean, 95.45% within ±2σ, and 99.73% within ±3σ. The exact 95% range is μ ± 1.96σ, not μ ± 2σ. This rule applies only to normally distributed data; use Chebyshev's theorem for unknown distributions.

How do you find the z-score for a given percentile?
Use the inverse normal function: z = invNorm(p). Common values: p = 0.95 gives z = 1.645 (95th percentile), p = 0.975 gives z = 1.96 (used for a 95% two-sided CI), p = 0.995 gives z = 2.576 (used for a 99% CI). For a general normal distribution: x = μ + σ × invNorm(p).

What does Chebyshev's theorem say?
Chebyshev's theorem states that for ANY distribution, at least 1 - 1/k² of the data falls within k standard deviations of the mean. k=2: at least 75%. k=3: at least 88.9%. It is less precise than the empirical rule but works for skewed, bimodal, or unknown distributions where normality cannot be assumed.

What is the difference between the PDF and the CDF?
The PDF (probability density function) is the bell curve height — it represents relative likelihood but is not a probability directly. The CDF is the area under the bell curve to the left of a value — it gives P(X ≤ x) directly. For probability calculations, you always use the CDF, not the PDF.

What is a lognormal distribution used for?
A lognormal distribution is one where ln(X) is normally distributed. It is used for stock prices (the Black-Scholes model), income distributions, city sizes, biological growth rates, and anything where values are strictly positive and vary by multiplicative factors. The mean of a lognormal is e^(μ+σ²/2) — always larger than the median e^μ.

How do you find the probability between two values?
P(a ≤ X ≤ b) = Φ((b-μ)/σ) - Φ((a-μ)/σ). Convert both bounds to z-scores, then subtract the left-tail CDF values. Our calculator handles this automatically with the "between" mode. Example: P(60 ≤ X ≤ 80) with μ = 70, σ = 10 gives Φ(1) - Φ(-1) = 0.8413 - 0.1587 = 0.6827 = 68.27%.

How do you find the probability for a sample mean?
By the Central Limit Theorem, sample means are normally distributed: x̅ ~ N(μ, σ²/n). The z-score for a sample mean is z = (x̅ - μ) / (σ/√n). Use this when finding the probability that a sample mean exceeds some threshold. For n ≥ 30, the approximation is excellent even for non-normal populations.

How does the normal distribution apply to IQ scores?
IQ is designed to be normal with mean 100 and σ = 15. P(IQ > 130) = P(z > (130-100)/15) = P(z > 2) = 2.28%. P(85 ≤ IQ ≤ 115) = 68.27% (within 1 std dev). P(IQ > 145) = P(z > 3) = 0.13%. Mensa requires an IQ in the top 2%, i.e. IQ ≥ 130.8 (z = 2.054).

What is the standard normal distribution?
The standard normal distribution is N(0, 1): mean 0, standard deviation 1. Any normal variable X can be standardized: z = (X - μ)/σ. This lets you use a single z-table for any normal distribution. The CDF of the standard normal is denoted Φ(z).

When can the normal distribution approximate the binomial?
When np ≥ 5 and n(1-p) ≥ 5, the binomial B(n, p) can be approximated by N(np, np(1-p)). Apply a continuity correction: P(X = k) ≈ P(k - 0.5 < Y < k + 0.5). This approximation is used when n is large and exact binomial calculations are computationally intensive.

How does Chebyshev's theorem compare to the empirical rule?
For k=2: Chebyshev guarantees at least 75% vs the empirical rule's 95.45% for normal data. For k=3: 88.9% vs 99.73%. The empirical rule gives tighter, more informative bounds but only for normal distributions; Chebyshev gives conservative lower bounds for any distribution. When the data's shape is unknown, prefer Chebyshev's theorem.
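The binomial approximation described above can be checked against the exact binomial PMF; a short Python sketch (the B(100, 0.5) example is illustrative):

```python
import math
from statistics import NormalDist

def binom_pmf(k, n, p):
    """Exact binomial P(X = k)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx_pmf(k, n, p):
    """Normal approximation with continuity correction: P(k-0.5 < Y < k+0.5)."""
    dist = NormalDist(mu=n * p, sigma=math.sqrt(n * p * (1 - p)))
    return dist.cdf(k + 0.5) - dist.cdf(k - 0.5)

# B(100, 0.5): np = n(1-p) = 50 >= 5, so the approximation applies
print(round(binom_pmf(50, 100, 0.5), 4))         # ≈ 0.0796 exact
print(round(normal_approx_pmf(50, 100, 0.5), 4)) # ≈ 0.0797 approximate
```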
