📊 Statistics Guide

The Complete Statistics Calculations Guide 2026

Every formula, worked example, and free calculator for descriptive statistics, probability, distributions, hypothesis testing, regression, confidence intervals, and 200+ more statistical calculations — all in one place. From mean and standard deviation to z-scores, p-values, ANOVA, and Bayesian inference.

Verified: American Statistical Association & NIST/SEMATECH e-Handbook of Statistical Methods 2026
200+ Free Calculators
12 Topic Clusters
100% Formula Verified
2026 Updated
Descriptive Statistics · Probability · Distributions · Hypothesis Testing · Confidence Intervals · Regression & Correlation · Combinatorics · Sampling & Sample Size · Data Visualization · Odds & Betting · Advanced Statistical Tests · Ordering & Ranking


📊 Descriptive Statistics Calculators

Summarize and describe the key characteristics of a dataset — the foundation of all statistical analysis.

Measures of Central Tendency & Spread

Mean, Median, and Mode

The three measures of central tendency each describe the "center" of a dataset differently. The mean is the arithmetic average — sum all values and divide by count. The median is the middle value when data is sorted, making it resistant to outliers. The mode is the most frequently occurring value, most useful for categorical or discrete data.

Central Tendency Formulas
Mean (x̅) = (x1 + x2 + ... + xn) / n
Median = middle value if n is odd; average of the two middle values if n is even
Mode = value(s) that appear most frequently in the dataset
Midrange = (Max + Min) / 2
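All four measures are one-liners with Python's standard statistics module. A minimal sketch (the dataset is illustrative):

```python
from statistics import mean, median, mode

data = [2, 3, 3, 5, 7, 10]

print(mean(data))                    # (2+3+3+5+7+10)/6 = 5
print(median(data))                  # n is even: average of 3 and 5 = 4.0
print(mode(data))                    # 3 appears most often
print((max(data) + min(data)) / 2)   # midrange: (10 + 2)/2 = 6.0
```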

Variance and Standard Deviation

Variance and standard deviation quantify how spread out data is around the mean. Use sample formulas (divide by n-1) when your data is a subset of a larger population — this applies to most real-world scenarios. Use population formulas (divide by N) only when you have every member of the population.

Variance & Standard Deviation
Sample variance (s²) = ∑(xi - x̅)² / (n - 1)
Population variance (σ²) = ∑(xi - μ)² / N
Standard deviation = √variance
Relative standard deviation (RSD%) = (s / x̅) × 100
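The sample/population distinction maps directly onto the standard library's `variance`/`pvariance` pair. A quick sketch with an illustrative dataset whose mean is 5:

```python
from statistics import mean, pstdev, pvariance, stdev, variance

data = [2, 4, 4, 4, 5, 5, 7, 9]

print(variance(data))    # sample: ∑(xi - x̅)² / (n - 1) = 32/7 ≈ 4.571
print(pvariance(data))   # population: ∑(xi - μ)² / N = 32/8 = 4.0
print(stdev(data))       # sample standard deviation = √(32/7) ≈ 2.138
print(pstdev(data))      # population standard deviation = √4 = 2.0
print(stdev(data) / mean(data) * 100)   # RSD% ≈ 42.8
```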

The Five-Number Summary and IQR

The five-number summary — minimum, Q1, median, Q3, maximum — gives a complete picture of a distribution's shape and spread. The interquartile range (IQR = Q3 - Q1) measures the spread of the middle 50% and is used to detect outliers via Tukey's fences.

IQR and Outlier Fences
IQR = Q3 - Q1
Lower fence = Q1 - 1.5 × IQR
Upper fence = Q3 + 1.5 × IQR
Outlier: any value below the lower fence or above the upper fence
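Quartile conventions vary between tools; this sketch uses the median-of-halves method (Q1 = median of the lower half, Q3 = median of the upper half) and plants one obvious outlier:

```python
from statistics import median

def tukey_fences(data):
    """Q1/Q3 via the median-of-halves convention, then Tukey's 1.5 * IQR fences."""
    xs = sorted(data)
    n = len(xs)
    q1 = median(xs[: n // 2])            # lower half (middle value excluded if n is odd)
    q3 = median(xs[n // 2 + n % 2 :])    # upper half
    iqr = q3 - q1
    return q1 - 1.5 * iqr, q3 + 1.5 * iqr

data = [1, 3, 4, 5, 5, 6, 7, 40]              # 40 is a planted outlier
lo, hi = tukey_fences(data)                   # fences: -1.0 and 11.0
print([x for x in data if x < lo or x > hi])  # [40]
```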

Coefficient of Variation and Skewness

The coefficient of variation (CV) expresses standard deviation as a percentage of the mean, enabling comparisons of spread across datasets with different units or scales. Skewness measures asymmetry: a positive value means a right tail (mean > median), negative means a left tail.

Skewness & Coefficient of Variation
CV = (standard deviation / mean) × 100%
Pearson's skewness = 3(mean - median) / standard deviation
Mean absolute deviation (MAD) = ∑|xi - x̅| / n
Sum of squares (SS) = ∑(xi - x̅)²
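These formulas are easy to verify on a deliberately right-skewed dataset (one large value pulling the mean above the median), so Pearson's skewness comes out positive as the text predicts:

```python
from statistics import mean, median, stdev

data = [1, 2, 2, 3, 3, 3, 4, 12]   # right-skewed: mean 3.75 > median 3.0

cv = stdev(data) / mean(data) * 100                       # coefficient of variation, %
skew = 3 * (mean(data) - median(data)) / stdev(data)      # Pearson's skewness
mad = sum(abs(x - mean(data)) for x in data) / len(data)  # mean absolute deviation

print(round(skew, 3))   # 0.651 — positive, consistent with the right tail
```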
💡 Rule of thumb: For skewed data or data with outliers, always report the median rather than the mean. The median income is reported by governments for exactly this reason — a few billionaires pull the mean up dramatically while the median reflects the typical person.

Range, Dispersion, and MSE

The range (max - min) is the simplest spread measure but highly sensitive to outliers. Mean squared error (MSE) measures prediction accuracy in models. Standard error of the mean (SEM) estimates how precisely your sample mean estimates the population mean.

| Measure | Formula | Use Case |
| --- | --- | --- |
| Range | Max - Min | Quick spread estimate |
| IQR | Q3 - Q1 | Robust spread, outlier detection |
| Std Dev (s) | √(SS / (n - 1)) | Typical spread from mean |
| CV | (s / x̅) × 100% | Comparing different-scale datasets |
| MAD | ∑\|xi - x̅\| / n | Robust alternative to std dev |
| MSE | ∑(actual - predicted)² / n | Model accuracy |
| SEM | s / √n | Precision of sample mean |
- Descriptive Statistics Calculator: Mean, median, mode, variance, standard deviation, IQR, skewness, and kurtosis for any dataset. Calculate now →
- Mean Median Mode Calculator: Find all three measures of central tendency with step-by-step working shown. Calculate now →
- Standard Deviation Calculator: Sample and population standard deviation with variance and sum of squares breakdown. Calculate now →
- Variance Calculator: Sample and population variance with full calculation breakdown for any dataset. Calculate now →
- IQR Calculator: Interquartile range with Q1, Q2, Q3 quartiles and Tukey outlier fence detection. Calculate now →
- 5-Number Summary Calculator: Min, Q1, Median, Q3, Max — complete five-number summary for any data list. Calculate now →
- Outlier Calculator: Identify outliers using IQR fences and z-score methods with full dataset analysis. Calculate now →
- Mean Absolute Deviation Calculator: Calculate MAD from mean and MAD from median with step-by-step breakdown. Calculate now →
- Coefficient of Variation Calculator: CV as a percentage to compare relative variability between datasets of different scales. Calculate now →
- Skewness Calculator: Calculate Pearson's and Fisher's skewness coefficients with distribution shape analysis. Calculate now →
- Sum of Squares Calculator: Total, regression, and residual sum of squares — essential for ANOVA and regression analysis. Calculate now →
- MSE Calculator: Mean squared error to evaluate prediction accuracy in regression and forecasting models. Calculate now →
🎲 Probability Calculators

Calculate probabilities for single events, combined events, conditional outcomes, and Bayesian updates.

Probability — Rules, Formulas, and Applications

Basic Probability Rules

Probability measures the likelihood of an event occurring, expressed as a value between 0 (impossible) and 1 (certain). The fundamental rules govern how probabilities combine across multiple events.

Probability Formulas
P(A) = favorable outcomes / total outcomes
P(A AND B) = P(A) × P(B) — multiplication rule, if A and B are independent
P(A OR B) = P(A) + P(B) - P(A AND B) — addition rule
P(A | B) = P(A AND B) / P(B) — conditional probability
P(not A) = 1 - P(A) — complement rule
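The rules compose mechanically. A tiny worked example with two fair dice (the events are illustrative), using exact fractions so the arithmetic stays visible:

```python
from fractions import Fraction

p_a = Fraction(1, 6)   # first die shows a six
p_b = Fraction(1, 6)   # second die shows a six (independent of the first)

p_both = p_a * p_b               # multiplication rule: 1/36
p_either = p_a + p_b - p_both    # addition rule: 11/36
p_not_a = 1 - p_a                # complement rule: 5/6
p_a_given_b = p_both / p_b       # conditional: independence gives back 1/6

print(p_either)   # 11/36
```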

Bayes' Theorem

Bayes' theorem updates a prior probability given new evidence. It is the mathematical foundation of Bayesian statistics and is used in medical testing, spam filtering, and machine learning. The key insight: the probability of a hypothesis after observing evidence depends on how likely the evidence was under each hypothesis.

Bayes' Theorem
P(A | B) = P(B | A) × P(A) / P(B)
Expanded: P(A | B) = P(B | A) × P(A) / [P(B | A)P(A) + P(B | not A)P(not A)]
Post-test probability = (Sensitivity × Prevalence) / P(Positive test)

False Positives and Diagnostic Testing

In medical testing, the positive predictive value (PPV) — the probability a positive test is a true positive — depends heavily on disease prevalence. A test with 99% sensitivity and 99% specificity still has only a 50% PPV when disease prevalence is 1%. This is the false positive paradox.
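The paradox is easy to verify with Bayes' theorem. A minimal sketch computing PPV from sensitivity, specificity, and prevalence:

```python
def ppv(sensitivity, specificity, prevalence):
    """P(disease | positive test) via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# 99% sensitive, 99% specific test at 1% prevalence: only a coin-flip PPV
print(ppv(0.99, 0.99, 0.01))   # 0.5
# The same test at 20% prevalence performs far better
print(round(ppv(0.99, 0.99, 0.20), 3))
```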

| Probability Concept | Formula | Application |
| --- | --- | --- |
| Joint probability | P(A ∩ B) | Both events occur |
| Conditional probability | P(A \| B) = P(A ∩ B) / P(B) | A given B occurred |
| Implied probability | 1 / decimal odds | Betting markets |
| Expected value | ∑ P(xi) × xi | Average outcome over many trials |
| Sensitivity | TP / (TP + FN) | True positive rate |
| Specificity | TN / (TN + FP) | True negative rate |
| PPV | TP / (TP + FP) | Precision of a positive result |
🔔 Probability Distribution Calculators

Normal, binomial, Poisson, t, chi-square, exponential, and all major distributions — PDF, CDF, and inverse calculations.

Probability Distributions — When to Use Each

The Normal Distribution

The normal distribution is the most important distribution in statistics due to the Central Limit Theorem: sample means are normally distributed for large enough samples, regardless of the underlying population's shape. Defined by mean μ and standard deviation σ, the normal curve is symmetric and bell-shaped.

Normal Distribution & Z-Score
Z-score = (X - μ) / σ
Empirical rule: μ ± 1σ = 68.3% | μ ± 2σ = 95.4% | μ ± 3σ = 99.7%
Standard normal: N(0, 1) — mean = 0, std dev = 1
P(a < X < b) = Φ(b) - Φ(a) using a z-table or the CDF
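Python's `statistics.NormalDist` covers the z-score workflow end to end; a sketch with an illustrative exam distribution (mean 70, std dev 10):

```python
from statistics import NormalDist

exam = NormalDist(mu=70, sigma=10)

z = (85 - 70) / 10                  # z-score = 1.5
print(exam.cdf(85))                 # P(X < 85) ≈ 0.9332
print(exam.cdf(80) - exam.cdf(60))  # empirical rule, μ ± 1σ: ≈ 0.683
print(exam.inv_cdf(0.95))           # 95th-percentile score ≈ 86.45
```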

Binomial Distribution

Use binomial when counting successes in a fixed number of independent trials, each with the same probability of success. Examples: number of heads in 10 coin flips, number of defective items in a batch.

Binomial Distribution
P(X = k) = C(n, k) × p^k × (1 - p)^(n - k)
Mean = np | Variance = np(1 - p)
where n = trials, k = successes, p = probability of success per trial
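The PMF is a direct translation of the formula using `math.comb`. A sketch for the coin-flip example:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(X = k) for a binomial(n, p) random variable."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

print(binom_pmf(5, 10, 0.5))                              # exactly 5 heads in 10 flips ≈ 0.246
print(sum(binom_pmf(k, 10, 0.5) for k in range(8, 11)))   # P(X ≥ 8) = 56/1024 ≈ 0.0547
print(sum(k * binom_pmf(k, 10, 0.5) for k in range(11)))  # mean = np = 5.0
```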

Poisson Distribution

Use Poisson when counting events occurring in a fixed interval of time or space, where events occur independently at a constant average rate. Examples: calls per hour, defects per square meter.

Poisson Distribution
P(X = k) = (λ^k × e^(-λ)) / k!
Mean = λ | Variance = λ
where λ = average rate (events per interval)
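The Poisson PMF needs only `exp` and `factorial`. A sketch for a call center averaging 4 calls per hour (the rate is illustrative):

```python
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) events in an interval with average rate lam."""
    return lam**k * exp(-lam) / factorial(k)

print(poisson_pmf(2, 4))                             # exactly 2 calls ≈ 0.1465
print(1 - sum(poisson_pmf(k, 4) for k in range(3)))  # P(X ≥ 3) by the complement rule
```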

Choosing the Right Distribution

| Distribution | Use When | Key Parameters |
| --- | --- | --- |
| Normal | Continuous, symmetric, bell-shaped data | μ, σ |
| Binomial | Fixed trials, success/failure outcomes | n, p |
| Poisson | Count of rare events in an interval | λ |
| t-distribution | Small samples (n < 30), unknown σ | df |
| Chi-square | Goodness of fit, independence tests | df |
| Exponential | Time between events (Poisson process) | λ |
| Uniform | Equal probability over a range | a, b |
| Geometric | Trials until first success | p |
| Beta | Modeling probabilities (Bayesian) | α, β |
| Weibull | Reliability, failure-time analysis | k, λ |
- Normal Distribution Calculator: PDF, CDF, and inverse normal for any mean and standard deviation with shaded area diagrams. Calculate now →
- Z-Score Calculator: Convert raw scores to z-scores and find corresponding probabilities and percentiles. Calculate now →
- Binomial Distribution Calculator: P(X = k), P(X ≤ k), and P(X ≥ k) for any n and p with full probability table. Calculate now →
- Binomial Probability Calculator: Exact and cumulative binomial probabilities for experiments with two outcomes. Calculate now →
- Poisson Distribution Calculator: Probability of k events in an interval given average rate λ — with CDF and PMF tables. Calculate now →
- Probability Distribution Calculator: Normal, binomial, Poisson, t, chi-square, and uniform distributions in one tool. Calculate now →
- Central Limit Theorem Calculator: Sampling distribution of the mean — apply CLT to find probabilities for sample means. Use now →
- Inverse Normal Distribution Calculator: Find the x-value corresponding to any given probability in a normal distribution. Calculate now →
- Exponential Distribution Calculator: PDF and CDF for exponential distribution — time between events in a Poisson process. Calculate now →
- Geometric Distribution Calculator: Probability of first success on the k-th trial with mean and variance calculations. Calculate now →
- Uniform Distribution Calculator: PDF, CDF, mean, variance, and probabilities for continuous uniform distributions. Calculate now →
- Normal Approximation Calculator: Approximate binomial and Poisson distributions with a normal curve using continuity correction. Calculate now →
🧪 Hypothesis Testing Calculators

Z-tests, t-tests, chi-square tests, ANOVA, and non-parametric tests — with p-values, critical values, and power analysis.

Hypothesis Testing — The Complete Framework

The Hypothesis Testing Process

Every hypothesis test follows the same five steps: (1) State null (H0) and alternative (H1) hypotheses. (2) Choose significance level α (typically 0.05). (3) Collect data and calculate the test statistic. (4) Find the p-value or compare to critical value. (5) Reject H0 if p < α or test statistic exceeds critical value.

Z-Test vs. T-Test

Use a z-test when the population standard deviation is known or when sample size n > 30. Use a t-test when the population standard deviation is unknown and the sample is small. The t-distribution has heavier tails than the normal, accounting for additional uncertainty from estimating σ.

Test Statistics
Z-statistic = (x̅ - μ0) / (σ / √n)
T-statistic = (x̅ - μ0) / (s / √n), with df = n - 1
Two-sample t: t = (x̅1 - x̅2) / √(s1²/n1 + s2²/n2)
Chi-square: χ² = ∑(Observed - Expected)² / Expected
F-statistic (ANOVA) = variance between groups / variance within groups
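The one-sample t-statistic is a few lines of standard-library Python. A sketch with illustrative measurements testing H0: μ = 5.0:

```python
from math import sqrt
from statistics import mean, stdev

def one_sample_t(sample, mu0):
    """T-statistic and degrees of freedom for H0: mu = mu0."""
    n = len(sample)
    t = (mean(sample) - mu0) / (stdev(sample) / sqrt(n))
    return t, n - 1

sample = [5.1, 4.9, 5.6, 5.2, 5.0, 5.4]
t, df = one_sample_t(sample, 5.0)
print(round(t, 3), df)   # t ≈ 1.879 with 5 df — compare to the t critical value
```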

Type I and Type II Errors

A Type I error (false positive, rate = α) occurs when you reject a true null hypothesis. A Type II error (false negative, rate = β) occurs when you fail to reject a false null. Statistical power = 1 - β is the probability of correctly detecting a real effect. Power of 0.80 (80%) is the conventional minimum standard for research.

| Test | Use When | Assumptions |
| --- | --- | --- |
| One-sample z-test | Compare sample mean to known μ, large n | Known σ, normal data |
| One-sample t-test | Compare sample mean to known μ, small n | Unknown σ, approx. normal |
| Two-sample t-test | Compare two group means | Independent samples |
| Paired t-test | Before/after or matched pairs | Paired observations |
| Chi-square (χ²) | Test independence or goodness of fit | Categorical data, expected counts > 5 |
| ANOVA (F-test) | Compare 3+ group means | Normal, equal variance |
| Mann-Whitney U | Compare two groups, non-normal | Non-parametric |
| Wilcoxon signed-rank | Paired non-normal data | Non-parametric |
- P-Value Calculator: Convert z-scores, t-scores, chi-square, or F statistics to one-tailed and two-tailed p-values. Calculate now →
- Z-Score to P-Value Calculator: Instantly convert any z-score to its corresponding one-tailed or two-tailed p-value. Calculate now →
- P-Value to Z-Score Calculator: Reverse lookup: find the z-score that produces any target p-value. Calculate now →
- T-Test Calculator: One-sample, two-sample independent, and paired t-test with p-value and confidence interval. Calculate now →
- T-Statistic Calculator: Calculate the t-statistic for one-sample, two-sample, and paired comparison tests. Calculate now →
- Critical Value Calculator: Z, t, chi-square, and F critical values for any significance level and degrees of freedom. Calculate now →
- Chi-Square Calculator: Goodness of fit and independence tests with chi-square statistic, p-value, and interpretation. Calculate now →
- Chi-Square to P-Value Calculator: Convert chi-square test statistic to p-value for any degrees of freedom. Calculate now →
- ANOVA Calculator: One-way ANOVA with F-statistic, p-value, SS, MS, and group comparison results. Calculate now →
- F-Statistic Calculator: Calculate the F-statistic for ANOVA or variance ratio tests with corresponding p-value. Calculate now →
- P-Value Significance Calculator: Determine statistical significance at any alpha level with effect size interpretation. Calculate now →
- Hypothesis Testing Calculator: Full hypothesis test workflow: select test type, enter data, get test statistic and decision. Calculate now →
🎯 Confidence Interval Calculators

Construct 90%, 95%, and 99% confidence intervals for means, proportions, and differences.

Confidence Intervals — Construction and Interpretation

What a Confidence Interval Actually Means

A 95% confidence interval does not mean "there is a 95% chance the true parameter is in this interval." The correct interpretation: if you repeated the study many times and constructed a CI each time, 95% of those intervals would contain the true population parameter. For any single interval, the true value is either in it or it is not.

Confidence Interval Formulas
CI for a mean (large n): x̅ ± z* × (σ / √n)
CI for a mean (small n): x̅ ± t* × (s / √n), df = n - 1
CI for a proportion: p̂ ± z* × √(p̂(1 - p̂)/n)
Margin of error E = z* × (s / √n)
Critical values: 90% CI z* = 1.645 | 95% CI z* = 1.960 | 99% CI z* = 2.576
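A sketch of the large-sample formula (for brevity it uses z* even though this tiny illustrative sample would really call for t*):

```python
from math import sqrt
from statistics import mean, stdev

def ci_mean(sample, z_star=1.960):
    """Large-sample confidence interval for the mean (z* = 1.960 for 95%)."""
    x_bar = mean(sample)
    e = z_star * stdev(sample) / sqrt(len(sample))   # margin of error
    return x_bar - e, x_bar + e

sample = [8, 9, 10, 11, 12]   # illustrative data: x̅ = 10, s ≈ 1.58
lo, hi = ci_mean(sample)
print(round(lo, 2), round(hi, 2))   # ≈ 8.61 and 11.39
```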
💡 Width trade-off: Wider confidence intervals are more likely to contain the true parameter but are less precise. To halve the margin of error, you must quadruple the sample size (since E is proportional to 1/√n).
📈 Regression & Correlation Calculators

Linear, polynomial, exponential, and logistic regression — plus Pearson, Spearman, and Kendall correlation coefficients.

Regression Analysis — Fitting Models to Data

Simple Linear Regression

Linear regression finds the best-fit line y = a + bx by minimizing the sum of squared residuals (ordinary least squares). The slope b tells you how much y changes per unit increase in x. R² (coefficient of determination) measures the proportion of variance in y explained by x.

Linear Regression Formulas
Slope: b = [n∑(xy) - ∑x∑y] / [n∑x² - (∑x)²]
Intercept: a = y̅ - b x̅
Pearson r = ∑[(xi - x̅)(yi - y̅)] / [(n - 1) sx sy]
R² = r² = 1 - (SS_residual / SS_total)
Residual = observed y - predicted y
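The computational formulas translate line for line into Python. A sketch fitting five illustrative points:

```python
from statistics import mean, stdev

def ols_fit(xs, ys):
    """Slope, intercept, and Pearson r for simple least-squares regression."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # slope
    a = mean(ys) - b * mean(xs)                      # intercept
    r = sum((x - mean(xs)) * (y - mean(ys)) for x, y in zip(xs, ys)) / (
        (n - 1) * stdev(xs) * stdev(ys))             # Pearson correlation
    return a, b, r

a, b, r = ols_fit([1, 2, 3, 4, 5], [2, 4, 5, 4, 5])
print(round(a, 2), round(b, 2))   # fitted line: ŷ = 2.2 + 0.6x
print(round(r * r, 2))            # R² = 0.6: 60% of the variance in y explained
```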

Correlation Interpretation

| \|r\| Value | Strength | Notes |
| --- | --- | --- |
| 0.0 – 0.1 | Negligible | No meaningful linear relationship |
| 0.1 – 0.3 | Weak | Small effect; may be meaningful in large samples |
| 0.3 – 0.5 | Moderate | Noticeable relationship |
| 0.5 – 0.7 | Strong | Clear predictive relationship |
| 0.7 – 0.9 | Very strong | High predictability |
| 0.9 – 1.0 | Near perfect | Almost deterministic relationship |
🃏 Combinatorics & Counting Calculators

Permutations, combinations, factorials, and counting principles — with and without repetition.

Counting Rules — Permutations and Combinations

Permutations vs. Combinations

The critical distinction: permutations count arrangements where order matters (e.g., ranking 3 people from 10). Combinations count selections where order does not matter (e.g., choosing 3 people from 10 for a committee). The number of combinations is always less than or equal to the number of permutations for the same r and n.

Permutation & Combination Formulas
Permutation P(n, r) = n! / (n - r)!
Combination C(n, r) = n! / [r! × (n - r)!]
Permutation with repetition = n^r
Combination with repetition = C(n + r - 1, r)
Password combinations (no repeats): P(n, r), where n = character-set size, r = length
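Python's math module ships `perm` and `comb` directly, so the examples from the text are one-liners:

```python
from math import comb, perm

print(perm(10, 3))                  # ordered top-3 finishes from 10 runners: 720
print(comb(10, 3))                  # 3-person committees from 10 people: 120
print(perm(10, 3) // comb(10, 3))   # each combination has 3! = 6 orderings
print(26**4)                        # 4-letter codes, repetition allowed: 456976
print(comb(10 + 3 - 1, 3))          # combinations with repetition, n=10, r=3: 220
```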
💡 Memory trick: Combinations = Choose (order doesn't matter, C for Choose). Permutations = Position (order matters, P for Position). "The committee of 5" = combination. "The top 3 finishers" = permutation.
👥 Sampling & Sample Size Calculators

Determine the right sample size, calculate standard error, and analyze sampling distributions.

Sample Size Planning — How Many Participants Do You Need?

Sample Size for Estimating a Mean

Sample size depends on three factors: the desired margin of error E, the confidence level (which determines z*), and the population standard deviation σ. A smaller margin of error requires a much larger sample — halving the margin of error quadruples the required n.

Sample Size Formulas
For a mean: n = (z* × σ / E)²
For a proportion: n = z*² × p(1 - p) / E²
Worst-case proportion (p = 0.5): n = z*² × 0.25 / E²
At 95% confidence, 5% margin of error: n = 1.96² × 0.25 / 0.05² ≈ 385
Finite population correction: n_adj = n / (1 + (n - 1)/N)
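A sketch that reproduces the classic n = 385 survey answer and shows both effects discussed above (quadrupling under a halved margin of error, shrinking under a finite population; the population size of 1,000 is illustrative):

```python
from math import ceil

def n_for_proportion(e, z_star=1.96, p=0.5):
    """Sample size for a proportion; p = 0.5 is the conservative worst case."""
    return ceil(z_star**2 * p * (1 - p) / e**2)

def finite_correction(n, population):
    """Adjust n downward when sampling from a small finite population."""
    return ceil(n / (1 + (n - 1) / population))

print(n_for_proportion(0.05))         # 95% confidence, 5% margin: 385
print(n_for_proportion(0.025))        # halving E roughly quadruples n: 1537
print(finite_correction(385, 1000))   # a population of 1,000 needs only 279
```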

Power Analysis

Power analysis determines the sample size needed to detect an effect of a given size. Cohen's d is the standardized effect size for t-tests: d = (μ1 - μ2) / σ. Small effect d=0.2, medium d=0.5, large d=0.8. With power = 0.80, α = 0.05, and medium effect (d=0.5), a two-sample t-test requires approximately 64 participants per group.

📉 Data Visualization Calculators

Frequency distributions, histograms, box plots, stem-and-leaf plots, and percentile tools for exploring data.

🎰 Odds & Betting Calculators

Convert between decimal, fractional, and moneyline odds — plus dice, coin flip, and roulette probability tools.

Odds Formats and Implied Probability

Betting odds are expressed in three formats. Decimal odds (2.50) give the total payout multiplier, stake included. Fractional odds (3/2) show profit relative to stake. Moneyline (American) odds show profit on a $100 stake when positive (+150 wins $150) and the stake required to win $100 when negative (-200 risks $200). All three encode the same information — implied probability.

Odds to Implied Probability
Decimal odds: implied probability = 1 / decimal odds
Fractional odds (a/b): implied probability = b / (a + b)
Moneyline (+): implied probability = 100 / (odds + 100)
Moneyline (-): implied probability = |odds| / (|odds| + 100)
Vig (overround) = sum of all implied probabilities - 1
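A sketch of the three conversions; note that decimal 2.50, fractional 3/2, and moneyline +150 are the same price, so all three return 0.40. The -110 two-way market is an illustrative vig example:

```python
def implied_from_decimal(odds):
    return 1 / odds

def implied_from_fractional(a, b):
    return b / (a + b)

def implied_from_moneyline(ml):
    return 100 / (ml + 100) if ml > 0 else -ml / (-ml + 100)

print(implied_from_decimal(2.50))     # 0.4
print(implied_from_fractional(3, 2))  # 0.4
print(implied_from_moneyline(150))    # 0.4

# Two-way market priced at -110 on each side: the overround is the bookmaker's vig
vig = 2 * implied_from_moneyline(-110) - 1
print(round(vig, 4))   # ≈ 0.0476
```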
🔬 Advanced Statistical Test Calculators

Non-parametric tests, Bayesian methods, A/B testing, epidemiology statistics, and process capability tools.

🔢 Ordering & Ranking Calculators

Sort and rank numbers, decimals, and datasets — percentile ranks, ascending/descending order tools.

📚 Sources & Methodology

All formulas, rules, and reference values in this guide are sourced from authoritative statistical references, including the NIST/SEMATECH e-Handbook of Statistical Methods and American Statistical Association guidance.

❓ Frequently Asked Questions

How do you calculate standard deviation?
For a sample: s = √[∑(xi - x̅)² / (n - 1)]. For a population: σ = √[∑(xi - μ)² / N]. Use the sample formula (n - 1 denominator) for any dataset that is a sample drawn from a larger population — which covers most real-world scenarios. The n - 1 denominator is Bessel's correction, which removes bias in the sample variance estimate.

How do you calculate a z-score?
Z-score = (X - mean) / standard deviation. For example, if a test has mean 70 and std dev 10, a score of 85 gives z = (85 - 70)/10 = 1.5. A z-score of 1.5 means the score is 1.5 standard deviations above the mean. Z-scores let you look up probabilities in the standard normal table and compare values across different distributions.

What is a p-value?
A p-value is the probability of obtaining results as extreme as the observed data, assuming the null hypothesis is true. It is NOT the probability that the null hypothesis is true. If p < 0.05, the result is conventionally considered statistically significant and you reject the null. Lower p-values indicate stronger evidence against the null, but statistical significance does not equal practical importance.

How do you calculate a confidence interval?
For a 95% CI for a mean: x̅ ± 1.96 × (s / √n) for large samples. Use t* instead of 1.96 for small samples. For proportions: p̂ ± 1.96 × √(p̂(1 - p̂)/n). The key: wider intervals come from higher confidence levels, larger standard deviations, or smaller samples. To halve the margin of error, quadruple the sample size.

What is the difference between mean, median, and mode?
Mean = sum / count (affected by every value including outliers). Median = middle value when sorted (resistant to outliers, use for skewed data like income or house prices). Mode = most frequent value (used for categorical data). For symmetric distributions, all three are approximately equal. For right-skewed data: mean > median > mode.

How do you calculate variance?
Sample variance s² = ∑(xi - x̅)² / (n - 1). Steps: (1) Calculate the mean. (2) Subtract the mean from each value and square the result. (3) Sum all squared differences. (4) Divide by n - 1 for sample variance, or N for population variance. Variance is standard deviation squared and has squared units, which is why standard deviation is more commonly reported.

How do you calculate the interquartile range (IQR)?
IQR = Q3 - Q1. Sort the data, split it at the median, then find the median of each half: Q1 = median of lower half, Q3 = median of upper half. Outlier fences: any value below Q1 - 1.5 × IQR or above Q3 + 1.5 × IQR is a potential outlier. IQR is preferred over range when outliers are present because it measures only the middle 50% of the data.

How does linear regression work?
Linear regression fits the line y = a + bx by minimizing the sum of squared residuals (vertical distances from each data point to the line). Slope b = [n∑xy - (∑x)(∑y)] / [n∑x² - (∑x)²]. Intercept a = y̅ - b x̅. R² ranges from 0 to 1 and measures the proportion of variance in y explained by x. R² = 0.75 means 75% of the variation in y is explained by the model.

What is the normal distribution and the empirical rule?
The normal distribution is a symmetric bell-shaped curve defined entirely by its mean and standard deviation. The empirical (68-95-99.7) rule states: 68.3% of data falls within ±1 standard deviation, 95.4% within ±2, and 99.7% within ±3 of the mean. The Central Limit Theorem states that sample means follow a normal distribution for large enough n (typically n ≥ 30), regardless of the population's shape.

How do you interpret a correlation coefficient?
Pearson r ranges from -1 to +1. The sign shows direction (positive = both variables increase together; negative = one increases as the other decreases). The magnitude shows strength: 0.1–0.3 = weak, 0.3–0.5 = moderate, 0.5–0.7 = strong, above 0.7 = very strong. Crucially, correlation does not imply causation. Always check for confounding variables and consider whether the relationship is truly linear.

What are Type I and Type II errors?
Type I error (alpha, false positive): rejecting a true null hypothesis. Probability = α (your significance level). Type II error (beta, false negative): failing to reject a false null hypothesis. Statistical power = 1 - β = probability of correctly detecting a real effect. Reducing alpha (e.g., from 0.05 to 0.01) makes Type I errors less likely but Type II errors more likely — you need a larger sample size to maintain adequate power.

How do you determine sample size?
For a mean: n = (z* × σ / E)², where E is the margin of error. For a proportion: n = z*² × p(1 - p) / E². Use p = 0.5 for the most conservative estimate. At 95% confidence and 5% margin of error: n = 1.96² × 0.25 / 0.05² ≈ 385. For power analysis: use Cohen's d and set power at 80% (β = 0.20). A medium-effect (d = 0.5) two-sample t-test at 80% power requires about 64 per group.

🔗 Related Calculators from Other Categories

🔥 Popular Calculators Across CalculatorCove

🧮 Missing a Statistics Calculator?

We have 200+ statistics calculators tracked and more being built weekly. Can't find the one you need? Request it below and we'll prioritize it.