Sampling distribution

From Wikipedia, the free encyclopedia
Probability distribution of the possible sample outcomes

In statistics, a sampling distribution or finite-sample distribution is the probability distribution of a given random-sample-based statistic. If an arbitrarily large number of samples were drawn, each consisting of multiple observations (data points) and each used to compute one value of a statistic (for example, the sample mean or sample variance), then the sampling distribution is the probability distribution of the values that the statistic takes on. In many contexts, only one sample (i.e., a set of observations) is actually observed, but the sampling distribution can be found theoretically.

Sampling distributions are important in statistics because they provide a major simplification en route to statistical inference. More specifically, they allow analytical considerations to be based on the probability distribution of a statistic, rather than on the joint probability distribution of all the individual sample values.

Introduction


The sampling distribution of a statistic is the distribution of that statistic, considered as a random variable, when derived from a random sample of size n. It may be considered as the distribution of the statistic for all possible samples from the same population of a given sample size. The sampling distribution depends on the underlying distribution of the population, the statistic being considered, the sampling procedure employed, and the sample size used. There is often considerable interest in whether the sampling distribution can be approximated by an asymptotic distribution, which corresponds to the limiting case either as the number of random samples of finite size, taken from an infinite population and used to produce the distribution, tends to infinity, or when just one equally infinite-size "sample" is taken of that same population.

For example, consider a normal population with mean μ and variance σ². Assume we repeatedly take samples of a given size from this population and calculate the arithmetic mean x̄ for each sample; this statistic is called the sample mean. The distribution of these means, or averages, is called the "sampling distribution of the sample mean". This distribution is normal, N(μ, σ²/n) (n is the sample size), since the underlying population is normal, although sampling distributions may be close to normal even when the population distribution is not (see the central limit theorem). An alternative to the sample mean is the sample median. When calculated from the same population, it has a different sampling distribution from that of the mean and is generally not normal (but it may be close for large sample sizes).
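The N(μ, σ²/n) claim above is easy to check by simulation; in this minimal sketch the population parameters μ = 10, σ = 2 and the sample size n = 25 are illustrative choices, not values from the article:

```python
import random
import statistics

# Illustrative check: draw many samples of size n from a normal population
# and inspect the distribution of their sample means.
random.seed(0)
mu, sigma, n = 10.0, 2.0, 25     # hypothetical population and sample size
num_samples = 20_000

means = [
    statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
    for _ in range(num_samples)
]

# The sampling distribution of the mean should be roughly N(mu, sigma^2 / n),
# i.e. centered near mu = 10.0 with standard deviation near sigma/sqrt(n) = 0.4.
print(statistics.fmean(means))
print(statistics.stdev(means))
```

The printed mean and standard deviation approach μ and σ/√n as the number of simulated samples grows.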

The mean of a sample from a population having a normal distribution is an example of a simple statistic taken from one of the simplest statistical populations. For other statistics and other populations the formulas are more complicated, and often they do not exist in closed form. In such cases the sampling distributions may be approximated through Monte Carlo simulations,[1] bootstrap methods, or asymptotic distribution theory.
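A Monte Carlo approximation of the kind just mentioned can be sketched for the sample median, whose sampling distribution lacks a simple closed form for most populations; the Exponential(1) population and sample size n = 9 here are illustrative assumptions:

```python
import random
import statistics

# Monte Carlo sketch: approximate the sampling distribution of the sample
# median for an (assumed) Exponential(1) population, sample size n = 9.
random.seed(1)
n, reps = 9, 10_000

medians = [
    statistics.median(random.expovariate(1.0) for _ in range(n))
    for _ in range(reps)
]

# Summarize the simulated sampling distribution of the median.
print(statistics.fmean(medians))        # center of the sampling distribution
print(statistics.quantiles(medians))    # its quartiles
```

With more replications, these summaries converge to the true moments and quantiles of the median's sampling distribution.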

Standard error


The standard deviation of the sampling distribution of a statistic is referred to as the standard error of that statistic. For the case where the statistic is the sample mean, and samples are uncorrelated, the standard error is:

σ_x̄ = σ / √n

where σ is the standard deviation of the population distribution of that quantity and n is the sample size (number of items in the sample).

An important implication of this formula is that the sample size must be quadrupled to halve the measurement error. When designing statistical studies where cost is a factor, this may play a role in understanding cost–benefit tradeoffs.

For the case where the statistic is the sample total, and samples are uncorrelated, the standard error is:

σ_Σx = σ√n

where, again, σ is the standard deviation of the population distribution of that quantity and n is the sample size (number of items in the sample).
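Both standard-error formulas can be checked with a short simulation; the population (normal with σ = 3), the sample size n = 16, and the replication count are illustrative choices, not from the article:

```python
import random
import statistics

# Sketch: empirically verify sigma/sqrt(n) for the sample mean and
# sigma*sqrt(n) for the sample total, with uncorrelated draws.
random.seed(2)
sigma, n, reps = 3.0, 16, 20_000

samples = [[random.gauss(0.0, sigma) for _ in range(n)] for _ in range(reps)]

se_mean = statistics.stdev(statistics.fmean(s) for s in samples)
se_total = statistics.stdev(sum(s) for s in samples)

print(se_mean)    # close to sigma / sqrt(n) = 0.75
print(se_total)   # close to sigma * sqrt(n) = 12.0
```

Note that halving se_mean in this setup would indeed require n = 64, i.e. quadrupling the sample size.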

Examples

Figure: Sampling distribution of the sample mean of normally distributed random numbers. With increasing sample size, the sampling distribution becomes more and more concentrated around the population mean.
Population | Statistic | Sampling distribution
Normal: N(μ, σ²) | Sample mean X̄ from samples of size n | X̄ ~ N(μ, σ²/n)
Bernoulli(p) | Sample proportion of "successful trials", X̄ | nX̄ ~ Binomial(n, p)
Two independent normal populations: N(μ₁, σ₁²) and N(μ₂, σ₂²) | Difference between sample means, X̄₁ − X̄₂ | X̄₁ − X̄₂ ~ N(μ₁ − μ₂, σ₁²/n₁ + σ₂²/n₂)
Any absolutely continuous distribution F with density f | Median X₍ₖ₎ from a sample of size n = 2k − 1, where the sample is ordered X₍₁₎ to X₍ₙ₎ | f_{X₍ₖ₎}(x) = ((2k − 1)!/((k − 1)!²)) · f(x) · (F(x)(1 − F(x)))^(k−1)
Any distribution with distribution function F | Maximum M = max X_k from a random sample of size n | F_M(x) = P(M ≤ x) = ∏ P(X_k ≤ x) = (F(x))^n

For the normal case, if the standard deviation σ is not known, one can consider T = (X̄ − μ)·√n/S, which follows the Student's t-distribution with ν = n − 1 degrees of freedom. Here S² is the sample variance, and T is a pivotal quantity, whose distribution does not depend on σ.
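The last row, F_M(x) = (F(x))^n, can be verified numerically; the Uniform(0, 1) population below (for which F(x) = x) and the evaluation point are illustrative assumptions:

```python
import random

# Check P(M <= x) = x**n for the maximum M of n Uniform(0,1) draws.
random.seed(3)
n, reps, x = 5, 50_000, 0.8

hits = sum(
    max(random.random() for _ in range(n)) <= x
    for _ in range(reps)
)
empirical = hits / reps

print(empirical)       # close to the exact value 0.8**5 = 0.32768
```

The empirical frequency converges to F(x)^n because the maximum is below x exactly when all n independent draws are.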

References

  1. Mooney, Christopher Z. (1999). Monte Carlo Simulation. Thousand Oaks, Calif.: Sage. p. 2. ISBN 9780803959439.
