# Bias of an estimator


## Definition

Suppose a statistical model parameterized by a real number ${\displaystyle \theta }$ gives rise to a probability distribution for the observed data, and let ${\displaystyle {\hat {\theta }}}$ be an estimator of ${\displaystyle \theta }$. The bias of ${\displaystyle {\hat {\theta }}}$ relative to ${\displaystyle \theta }$ is defined as

${\displaystyle \operatorname {Bias} _{\theta }[\,{\hat {\theta }}\,]=\operatorname {E} _{\theta }[\,{\hat {\theta }}\,]-\theta =\operatorname {E} _{\theta }[\,{\hat {\theta }}-\theta \,],}$

where ${\displaystyle \operatorname {E} _{\theta }}$ denotes expectation over the distribution indexed by ${\displaystyle \theta }$. An estimator whose bias is zero for every value of ${\displaystyle \theta }$ is called unbiased.
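As an illustration of the definition, the bias of an estimator can be approximated by Monte Carlo simulation. The sketch below uses only Python's standard library; the particular estimator and distribution (the sample maximum as an estimator of the upper endpoint of a uniform distribution) are illustrative choices, not part of the article.

```python
import random

def estimate_bias(estimator, sample_size, true_theta, draw, trials=100_000):
    """Approximate Bias[theta_hat] = E[theta_hat] - theta by averaging
    the estimator over many simulated samples."""
    total = 0.0
    for _ in range(trials):
        sample = [draw() for _ in range(sample_size)]
        total += estimator(sample)
    return total / trials - true_theta

# The sample maximum systematically underestimates the upper endpoint of
# U(0, 1): E[max of n uniforms] = n / (n + 1), so the bias is -1/6 for n = 5.
random.seed(0)
bias = estimate_bias(max, sample_size=5, true_theta=1.0, draw=random.random)
```

With 100,000 trials the estimate settles close to the closed-form bias of −1/6.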

## Examples

### Sample variance

Suppose X1, ..., Xn are independent and identically distributed (i.i.d.) random variables with expectation μ and variance σ². If the sample mean and the uncorrected sample variance are defined as

${\displaystyle {\overline {X}}={\frac {1}{n}}\sum _{i=1}^{n}X_{i},\qquad S^{2}={\frac {1}{n}}\sum _{i=1}^{n}\left(X_{i}-{\overline {X}}\,\right)^{2},}$

then ${\displaystyle S^{2}}$ is a biased estimator of ${\displaystyle \sigma ^{2}}$, because

${\displaystyle {\begin{aligned}\operatorname {E} [S^{2}]&=\operatorname {E} \left[{\frac {1}{n}}\sum _{i=1}^{n}{\big (}X_{i}-{\overline {X}}{\big )}^{2}\right]=\operatorname {E} {\bigg [}{\frac {1}{n}}\sum _{i=1}^{n}{\bigg (}(X_{i}-\mu )-({\overline {X}}-\mu ){\bigg )}^{2}{\bigg ]}\\[8pt]&=\operatorname {E} {\bigg [}{\frac {1}{n}}\sum _{i=1}^{n}{\bigg (}(X_{i}-\mu )^{2}-2({\overline {X}}-\mu )(X_{i}-\mu )+({\overline {X}}-\mu )^{2}{\bigg )}{\bigg ]}\\[8pt]&=\operatorname {E} {\bigg [}{\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-\mu )^{2}-{\frac {2}{n}}({\overline {X}}-\mu )\sum _{i=1}^{n}(X_{i}-\mu )+{\frac {1}{n}}({\overline {X}}-\mu )^{2}\cdot n{\bigg ]}\\[8pt]&=\operatorname {E} {\bigg [}{\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-\mu )^{2}-2({\overline {X}}-\mu )^{2}+({\overline {X}}-\mu )^{2}{\bigg ]}\\[8pt]&=\operatorname {E} {\bigg [}{\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-\mu )^{2}{\bigg ]}-\operatorname {E} {\big [}({\overline {X}}-\mu )^{2}{\big ]}=\sigma ^{2}-{\frac {1}{n}}\sigma ^{2}={\frac {n-1}{n}}\sigma ^{2}<\sigma ^{2},\end{aligned}}}$

using ${\displaystyle \sum _{i=1}^{n}(X_{i}-\mu )=n({\overline {X}}-\mu )}$ to collapse the middle term, and ${\displaystyle \operatorname {E} {\big [}({\overline {X}}-\mu )^{2}{\big ]}=\sigma ^{2}/n}$ (derived below) for the last equality.
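The conclusion ${\displaystyle \operatorname {E} [S^{2}]<\sigma ^{2}}$ can be checked numerically. The sketch below uses only Python's standard library; the normal distribution and the values of n and σ are arbitrary choices for illustration.

```python
import random

def uncorrected_var(xs):
    """Sample variance with divisor n (the biased estimator S^2)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(1)
n, mu, sigma = 5, 0.0, 2.0   # sigma^2 = 4
trials = 200_000
mean_s2 = sum(
    uncorrected_var([random.gauss(mu, sigma) for _ in range(n)])
    for _ in range(trials)
) / trials
# Theory predicts E[S^2] = (n-1)/n * sigma^2 = 3.2, noticeably below 4.
```

The average of ${\displaystyle S^{2}}$ over many samples settles near ${\displaystyle {\tfrac {n-1}{n}}\sigma ^{2}}$, not ${\displaystyle \sigma ^{2}}$.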

The reason ${\displaystyle S^{2}}$ is biased stems from the fact that the sample mean is the ordinary least squares (OLS) estimator of μ: ${\displaystyle {\overline {X}}}$ is the number that makes the sum ${\displaystyle \sum _{i=1}^{n}(X_{i}-{\overline {X}})^{2}}$ as small as possible. That is, substituting any other number into this sum can only make the sum larger. In particular, choosing ${\displaystyle \mu \neq {\overline {X}}}$ gives

${\displaystyle {\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-{\overline {X}})^{2}<{\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-\mu )^{2},}$

and hence, taking expectations,

${\displaystyle {\begin{aligned}\operatorname {E} [S^{2}]&=\operatorname {E} {\bigg [}{\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-{\overline {X}})^{2}{\bigg ]}<\operatorname {E} {\bigg [}{\frac {1}{n}}\sum _{i=1}^{n}(X_{i}-\mu )^{2}{\bigg ]}=\sigma ^{2}.\end{aligned}}}$
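The minimization property invoked above is easy to verify on concrete numbers; the data below are an arbitrary example, not taken from the article.

```python
def sum_sq_dev(xs, c):
    """Sum of squared deviations of xs around the point c."""
    return sum((x - c) ** 2 for x in xs)

xs = [1.0, 2.0, 4.0, 9.0]
xbar = sum(xs) / len(xs)  # sample mean = 4.0

# Any c other than the sample mean strictly increases the sum,
# even points very close to xbar such as 3.9 and 4.1.
assert all(
    sum_sq_dev(xs, xbar) < sum_sq_dev(xs, c)
    for c in (0.0, 1.0, 3.9, 4.1, 10.0)
)
```

In fact ${\displaystyle \sum _{i}(X_{i}-c)^{2}}$ is a parabola in ${\displaystyle c}$ with its vertex at ${\displaystyle c={\overline {X}}}$, which is why moving away from the mean in either direction increases the sum.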

Dividing by ${\displaystyle n-1}$ instead of ${\displaystyle n}$ yields the corrected (unbiased) sample variance

${\displaystyle s^{2}={\frac {1}{n-1}}\sum _{i=1}^{n}(X_{i}-{\overline {X}}\,)^{2},}$

which satisfies ${\displaystyle \operatorname {E} [s^{2}]=\sigma ^{2}}$.

These derivations rely on the variance of the sample mean:

${\displaystyle \operatorname {E} {\big [}({\overline {X}}-\mu )^{2}{\big ]}={\frac {1}{n}}\sigma ^{2}.}$
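Both facts, the unbiasedness of ${\displaystyle s^{2}}$ and ${\displaystyle \operatorname {E} [({\overline {X}}-\mu )^{2}]=\sigma ^{2}/n}$, can be confirmed with the same kind of simulation. As before, only the standard library is used and the parameters are illustrative.

```python
import random

def corrected_var(xs):
    """Sample variance with divisor n - 1 (the unbiased estimator s^2)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(2)
n, mu, sigma = 5, 0.0, 2.0   # sigma^2 = 4, sigma^2 / n = 0.8
trials = 200_000
sum_s2 = 0.0
sum_sq_mean_err = 0.0
for _ in range(trials):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    sum_s2 += corrected_var(xs)
    xbar = sum(xs) / n
    sum_sq_mean_err += (xbar - mu) ** 2

mean_s2 = sum_s2 / trials               # near sigma^2 = 4 (unbiased)
mean_sq_err = sum_sq_mean_err / trials  # near sigma^2 / n = 0.8
```

This mirrors the convention in numerical libraries: for example, NumPy's `var` uses divisor n by default (`ddof=0`, the biased ${\displaystyle S^{2}}$) and divisor n − 1 with `ddof=1` (the unbiased ${\displaystyle s^{2}}$).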
