Calculate normal distribution probabilities (CDF), find z-scores, apply the empirical rule (68-95-99.7), use inverse normal to find values from percentiles, check Chebyshev's theorem for any distribution, and analyze lognormal distributions. All modes in one free calculator with instant step-by-step results.
All probability computations use high-precision numerical approximations validated against standard statistical tables.
The normal distribution — also called the Gaussian distribution or bell curve — is the most important probability distribution in statistics. It describes the distribution of many natural measurements (heights, weights, IQ scores, measurement errors) and, through the Central Limit Theorem, underpins virtually all of classical statistical inference.
The normal distribution is completely defined by just two parameters: the mean μ (center of the bell) and the standard deviation σ (width of the bell). The probability density function (PDF) gives the height of the bell curve at any x value. The cumulative distribution function (CDF) gives the area under the curve to the left — the probability that X is less than or equal to any given value.
PDF: f(x) = (1 / (σ√(2π))) × e^(-½((x-μ)/σ)²)
CDF: Φ(x) = P(X ≤ x) = ½[1 + erf((x-μ) / (σ√2))]
Z-score: z = (x - μ) / σ
P(a ≤ X ≤ b) = Φ((b-μ)/σ) - Φ((a-μ)/σ)
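As a sketch, the three formulas above translate directly into Python using `math.erf` (the example values for IQ scores are the ones used later in this article):

```python
from math import erf, exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Height of the bell curve at x: (1/(sigma*sqrt(2*pi))) * e^(-z^2/2)."""
    z = (x - mu) / sigma
    return exp(-0.5 * z * z) / (sigma * sqrt(2 * pi))

def normal_cdf(x, mu=0.0, sigma=1.0):
    """P(X <= x) via the error function: 0.5*(1 + erf((x-mu)/(sigma*sqrt(2))))."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

def prob_between(a, b, mu=0.0, sigma=1.0):
    """P(a <= X <= b) = Phi((b-mu)/sigma) - Phi((a-mu)/sigma)."""
    return normal_cdf(b, mu, sigma) - normal_cdf(a, mu, sigma)

print(round(normal_cdf(115, mu=100, sigma=15), 4))   # P(IQ <= 115) -> 0.8413
print(round(prob_between(85, 115, 100, 15), 4))      # within 1 sigma -> 0.6827
```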
The empirical rule is one of the most useful properties of the normal distribution. It gives you an immediate sense of data spread without any calculation. It applies when data is approximately normally distributed, which covers a surprising number of real-world measurements.
Real-world applications of the empirical rule: IQ scores (mean=100, σ=15) — 68% of people score 85-115; adult male heights (mean=70 in, σ=3 in) — 95% of men are 64-76 inches tall; manufacturing tolerances — defects beyond 3σ trigger process review.
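The 68-95-99.7 percentages can be verified directly from the CDF; here is a minimal check using the IQ example (mean 100, σ = 15) and Python's built-in `statistics.NormalDist`:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)  # IQ-score example from the text
for k in (1, 2, 3):
    # P(mu - k*sigma <= X <= mu + k*sigma)
    p = iq.cdf(100 + k * 15) - iq.cdf(100 - k * 15)
    print(f"within {k} sigma: {p:.2%}")
```

This prints approximately 68.27%, 95.45%, and 99.73% — the exact values the rule rounds to 68-95-99.7.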
The inverse normal (also called probit function or percent point function) reverses the CDF lookup: given a probability p, find the value x such that P(X ≤ x) = p. This is used constantly in statistics for finding critical values, setting confidence interval boundaries, and establishing quality control thresholds.
invNorm(0.90) = 1.282 → 90th percentile / one-tailed 10% alpha
invNorm(0.95) = 1.645 → 95th percentile / 90% CI two-sided / one-tailed 5% alpha
invNorm(0.975) = 1.960 → 97.5th percentile / 95% CI two-sided
invNorm(0.995) = 2.576 → 99.5th percentile / 99% CI two-sided
For general X: x = μ + σ × invNorm(p)
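Python's `statistics.NormalDist.inv_cdf` implements this lookup; as a sketch, here is the table's first critical value and the general-X formula applied to the IQ example:

```python
from statistics import NormalDist

std = NormalDist()                    # standard normal, mu=0, sigma=1
print(round(std.inv_cdf(0.975), 3))   # -> 1.96 (95% CI two-sided)

# General X: x = mu + sigma * invNorm(p). NormalDist does this internally
# when constructed with non-standard parameters.
iq = NormalDist(mu=100, sigma=15)
print(round(iq.inv_cdf(0.90), 1))     # 90th-percentile IQ score
```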
Chebyshev's theorem is the more conservative, universally applicable version of the empirical rule. While the empirical rule applies only to normal distributions, Chebyshev's inequality works for any distribution — skewed, bimodal, or unknown. The trade-off is that its bounds are much less tight.
P(|X - μ| ≤ kσ) ≥ 1 - 1/k²
k=2: at least 75% of data within 2 std devs (vs 95.45% for normal)
k=3: at least 88.9% within 3 std devs (vs 99.73% for normal)
k=4: at least 93.75% | k=5: at least 96% | k=10: at least 99%
Chebyshev's theorem is especially useful in finance (stock returns are not normally distributed), quality control when distribution shape is unknown, and any situation where you need a guaranteed minimum bound rather than an approximation.
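The gap between the guaranteed minimum and the normal-distribution figure can be computed side by side; a minimal comparison:

```python
from statistics import NormalDist

std = NormalDist()
for k in (2, 3, 4):
    chebyshev = 1 - 1 / k**2            # guaranteed minimum, any distribution
    normal = std.cdf(k) - std.cdf(-k)   # exact value if the data is normal
    print(f"k={k}: at least {chebyshev:.2%} (normal: {normal:.2%})")
```

The normal figure is always the larger of the two; Chebyshev trades tightness for universality.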
A lognormal distribution arises naturally when a variable is the product of many small independent factors — making it common in stock prices, income, biological measurements, and survival times. If ln(X) ~ N(μ, σ²), then X follows a lognormal distribution.
Mean = e^(μ + σ²/2)
Median = e^μ
Mode = e^(μ - σ²)
Variance = (e^(σ²) - 1) × e^(2μ + σ²)
CDF: P(X ≤ x) = Φ((ln x - μ) / σ)
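These five formulas are straightforward to implement; a sketch (the parameter values μ = 0, σ = 0.5 are illustrative, not from the article):

```python
from math import exp, log
from statistics import NormalDist

def lognormal_stats(mu, sigma):
    """Mean, median, mode, variance of X where ln(X) ~ N(mu, sigma^2)."""
    mean = exp(mu + sigma**2 / 2)
    median = exp(mu)
    mode = exp(mu - sigma**2)
    variance = (exp(sigma**2) - 1) * exp(2 * mu + sigma**2)
    return mean, median, mode, variance

def lognormal_cdf(x, mu, sigma):
    """P(X <= x) = Phi((ln x - mu) / sigma)."""
    return NormalDist().cdf((log(x) - mu) / sigma)

mean, median, mode, var = lognormal_stats(0.0, 0.5)
print(round(median, 3))                         # e^0 -> 1.0
print(round(lognormal_cdf(1.0, 0.0, 0.5), 3))   # ln(1)=0 -> 0.5
```

Note that mode < median < mean, the characteristic right skew of the lognormal.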
| Z-score | P(Z ≤ z) CDF | P(Z ≥ z) Right tail | Use Case |
|---|---|---|---|
| -3.00 | 0.0013 | 0.9987 | 3σ lower control limit |
| -2.576 | 0.0050 | 0.9950 | 99% CI lower critical value |
| -1.960 | 0.0250 | 0.9750 | 95% CI lower critical value |
| -1.645 | 0.0500 | 0.9500 | 90% CI lower / one-tailed 5% |
| 0.000 | 0.5000 | 0.5000 | Mean — 50th percentile |
| 1.282 | 0.9000 | 0.1000 | 90th percentile |
| 1.645 | 0.9500 | 0.0500 | 95th percentile |
| 1.960 | 0.9750 | 0.0250 | 97.5th percentile (95% CI) |
| 2.576 | 0.9950 | 0.0050 | 99.5th percentile (99% CI) |
| 3.000 | 0.9987 | 0.0013 | 3σ upper control limit |
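Any row of the table above can be reproduced from the standard-normal CDF; a quick spot check:

```python
from statistics import NormalDist

std = NormalDist()
for z in (-3.0, -1.96, 0.0, 1.282, 2.576):
    cdf = std.cdf(z)          # P(Z <= z), left tail
    right = 1 - cdf           # P(Z >= z), right tail
    print(f"z={z:+.3f}  P(Z<=z)={cdf:.4f}  P(Z>=z)={right:.4f}")
```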
By the Central Limit Theorem, if you take samples of size n from any population with mean μ and finite variance σ², the distribution of sample means approaches normal as n increases: x̅ ~ N(μ, σ²/n). This is the foundation of all confidence intervals and one-sample hypothesis tests. For n ≥ 30, the approximation is generally excellent regardless of the population's shape.
x̅ ~ N(μ, σ²/n)
Standard Error = σ / √n
Z for sample mean = (x̅ - μ) / (σ / √n)
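Putting the three sample-mean formulas together, here is a minimal one-sample z calculation (the sample numbers — n = 36, observed mean 105 — are hypothetical, chosen to match the IQ example):

```python
from math import sqrt
from statistics import NormalDist

def z_for_sample_mean(xbar, mu, sigma, n):
    """Z = (xbar - mu) / (sigma / sqrt(n))."""
    se = sigma / sqrt(n)      # standard error of the mean
    return (xbar - mu) / se

# Hypothetical sample: n=36 IQ scores with observed mean 105
z = z_for_sample_mean(105, mu=100, sigma=15, n=36)
p = 1 - NormalDist().cdf(z)   # one-tailed P(xbar >= 105) under H0: mu = 100
print(round(z, 2))            # -> 2.0
print(round(p, 4))            # -> 0.0228
```

The standard error 15/√36 = 2.5 shrinks the spread of sample means sixfold relative to individual scores, which is why a sample mean of 105 is far more surprising than a single score of 105.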