In experimental and observational studies, determining the optimal sample size involves several considerations. First, the desired level of statistical power—the probability of correctly detecting a real effect—must be specified, typically set at 80% or 90%. Second, the effect size—the magnitude of the difference or association being investigated—plays a key role: larger expected effects require fewer participants to achieve the same power, whereas smaller effects demand larger samples. Third, the variability of the measurement within the population influences sample size; greater variability necessitates more observations to achieve a given degree of precision.
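The interplay between power, effect size, and sample size can be sketched with the standard normal-approximation formula for comparing two group means, n = 2(z₁₋α/₂ + z_power)²/d², where d is the standardized effect size (Cohen's d). This is a minimal illustration using only the Python standard library; the function name is ours, not from any particular package.

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sample comparison of
    means, via the normal approximation:
        n = 2 * (z_{1-alpha/2} + z_{power})^2 / d^2
    where d is Cohen's d (mean difference divided by the common SD)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value, two-sided
    z_beta = NormalDist().inv_cdf(power)           # quantile for desired power
    n = 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2
    return math.ceil(n)  # round up: sample sizes are whole participants

# Larger expected effects need far fewer participants at the same power:
print(n_per_group(0.8))  # large effect  -> 25 per group
print(n_per_group(0.2))  # small effect  -> 393 per group
```

Note how quartering the effect size (0.8 to 0.2) multiplies the required sample by sixteen, since n scales with 1/d².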
Classical methods for calculating sample size rely on formulas derived from probability theory and normal distribution theory, often incorporating the z-score for confidence levels and the standard error of the mean. Alternative approaches, such as the use of t-statistics or non-parametric approximations, are applied when data do not meet assumptions of normality. For complex designs—including multifactorial experiments, longitudinal studies, or stratified sampling—specialised formulas or simulation techniques may be employed to account for design effects and clustering.
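As a sketch of the z-score-and-standard-error style of formula mentioned above, the classical requirement for estimating a population mean to within a margin of error E at a given confidence level is n = (zσ/E)². The function below is illustrative (the name and defaults are ours), assuming a known or pilot-estimated standard deviation σ:

```python
import math
from statistics import NormalDist

def n_for_margin(sigma: float, margin: float, confidence: float = 0.95) -> int:
    """Observations needed so a confidence interval for the mean has
    half-width at most `margin`:  n = (z * sigma / E)^2."""
    # Two-sided critical value, e.g. z ~ 1.96 for 95% confidence
    z = NormalDist().inv_cdf(0.5 + confidence / 2)
    return math.ceil((z * sigma / margin) ** 2)

# Halving the margin of error roughly quadruples the required sample:
for e in (0.5, 0.25, 0.1):
    print(f"margin {e}: n = {n_for_margin(sigma=1.0, margin=e)}")
```

For non-normal data or clustered designs, this closed form is replaced by t-based or simulation-based calculations, but the inverse-square relationship between precision and sample size carries over.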
In practice, researchers often consult statistical software or reference tables that provide sample size recommendations for common statistical tests such as t‑tests, ANOVA, chi‑square tests, and regression models. Sensitivity analyses, which explore how results change with different sample sizes, help assess robustness and guide decisions when resources are limited.
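A simulation-based sensitivity analysis of the kind described above can be sketched as a Monte Carlo sweep: simulate many studies at each candidate sample size and record how often a true effect is detected. This is a simplified illustration using a two-sample z-test under normality (function name and parameters are ours, not from a specific package):

```python
import math
import random
from statistics import NormalDist, mean, stdev

def simulated_power(n: int, delta: float, sigma: float = 1.0,
                    alpha: float = 0.05, reps: int = 2000, seed: int = 1) -> float:
    """Monte Carlo power estimate: the fraction of `reps` simulated studies
    in which a two-sample z-test (normal approximation) detects a true
    mean difference `delta` with n observations per group."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, sigma) for _ in range(n)]    # control group
        b = [rng.gauss(delta, sigma) for _ in range(n)]  # treatment group
        se = math.sqrt(stdev(a) ** 2 / n + stdev(b) ** 2 / n)
        hits += abs(mean(b) - mean(a)) / se > z_crit
    return hits / reps

# Sensitivity sweep: estimated power as the per-group sample size varies,
# for a fixed standardized effect of 0.8
for n in (10, 25, 50):
    print(n, round(simulated_power(n, delta=0.8), 2))
```

Sweeps like this make the cost of underpowering concrete: if the budget only allows 10 participants per group, the study would miss even a large effect more often than it detects it.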
Accurate determination of sample size is essential for credible scientific research, public policy evaluation, and quality-control processes across disciplines. Properly sized samples enhance the credibility of findings, reduce the risk of type II errors, and support the generalizability of conclusions derived from a subset of a broader population.