
Standard error and confidence interval

T. Yamato

July 27, 2021

The mean and standard error of $N$ measurements of a quantity $A_i\ (i=1, \dots, N)$ are \(\left<A\right> = \frac{1}{N} \sum_{i=1}^{N}A_i\) and \(\sigma_e=\frac{\sigma}{\sqrt{N}}\), respectively, where $\sigma$ is the standard deviation of these measurements: \(\sigma^2 = \frac{1}{N} \sum_{i=1}^{N}(A_i - \left<A\right>)^2.\) For instance, the confidence interval (CI) at the 95 % confidence level for the true mean $\overline{A}$ is \(\left[\left<A\right>-1.96s, \left<A\right>+1.96s \right]\), where $s$ is an estimate of $\sigma_e$ (in practice, computed from the sample standard deviation, which uses an $N-1$ denominator). In other words, we can expect the true mean to lie between \(\left<A\right>-1.96s\) and \(\left<A\right>+1.96s\) with probability 95 %.
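The formulas above can be sketched in a few lines of Python. This is a minimal illustration, not part of the original note: it uses the standard library's `statistics.stdev` (which applies the $N-1$ denominator) as the estimate of $\sigma$, and the 1.96 normal-approximation factor for the 95 % level. The sample data are made up for demonstration.

```python
import math
from statistics import mean, stdev

def confidence_interval_95(samples):
    """Return (lower, upper) of the 95% CI for the true mean,
    using the normal approximation <A> +/- 1.96 * s."""
    n = len(samples)
    m = mean(samples)                 # <A> = (1/N) * sum(A_i)
    s = stdev(samples) / math.sqrt(n) # estimate of sigma_e = sigma / sqrt(N)
    return (m - 1.96 * s, m + 1.96 * s)

# Hypothetical measurements of some quantity A
samples = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3]
low, high = confidence_interval_95(samples)
print(f"mean = {mean(samples):.3f}, 95% CI = [{low:.3f}, {high:.3f}]")
```

For large $N$ the 1.96 factor is adequate; for small samples one would instead take the critical value from Student's t-distribution.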