Answers:
Because Alan Turing was born after Ronald Fisher.
In the old days, before computers, all of this had to be done by hand or, at best, with what we would now call calculators. Tests for comparing means can be done that way - it is tedious, but possible. Tests for quantiles (such as the median) are pretty much impossible that way.
For example, quantile regression relies on minimizing a relatively complicated function, which is not feasible by hand but is possible with programming. See, e.g., Koenker or Wikipedia.
Quantile regression makes fewer assumptions than OLS regression and provides more information.
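To make the "relatively complicated function" concrete, here is a minimal sketch (my own illustration, not Koenker's algorithm) of median regression as minimization of the check (pinball) loss; the simulated data, the `check_loss` helper, and the choice of a Nelder-Mead optimizer are all illustrative assumptions.

```python
# Minimal sketch: median (tau = 0.5) regression as minimization of the check
# (pinball) loss. The objective is piecewise linear and not differentiable at
# zero, so unlike OLS it has no closed-form solution - a job for a computer.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.standard_t(df=3, size=n)   # heavy-tailed noise
X = np.column_stack([np.ones(n), x])

def check_loss(beta, X, y, tau):
    """Sum of pinball losses of the residuals; tau = 0.5 gives median regression."""
    u = y - X @ beta
    return np.sum(u * (tau - (u < 0)))

fit = minimize(check_loss, x0=np.zeros(2), args=(X, y, 0.5), method="Nelder-Mead")
print("median-regression intercept and slope:", fit.x)
```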
I would like to add a third reason to the correct reasons given by Harrell and Flom. The reason is that we use Euclidean distance (or L2) rather than Manhattan distance (or L1) as our standard measure of closeness or error. If one has a number of data points $x_1, \dots, x_n$ and wants a single number to summarize them, an obvious notion is to find the number that minimizes the 'error' it creates, i.e. the smallest total difference between the chosen number and the numbers that constitute the data. In mathematical notation, for a given error function $E$, one wants to find $\arg\min_m \sum_{i=1}^n E(m, x_i)$. If one takes for $E(x,y)$ the L2 norm or distance, that is $E(x,y) = (x-y)^2$ (the squared Euclidean distance), then the minimizer over all $m$ is the mean. If one takes the L1 or Manhattan distance, $E(x,y) = |x-y|$, the minimizer over all $m$ is the median. Thus the mean is the natural mathematical choice - if one is using L2 distance!
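The L2 claim is a one-line calculus check: setting the derivative of the total squared error to zero recovers the sample mean (the L1 case follows from a sign-counting argument, since the derivative of $\sum_i |x_i - m|$ changes sign exactly where half the data lie on each side of $m$).

$$\frac{d}{dm}\sum_{i=1}^{n}(x_i - m)^2 = -2\sum_{i=1}^{n}(x_i - m) = 0 \quad\Longrightarrow\quad m = \frac{1}{n}\sum_{i=1}^{n} x_i = \bar{x}.$$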
Often the mean is chosen over the median not because it's more representative, robust, or meaningful but because people confuse estimator with estimand. Put another way, some choose the population mean as the quantity of interest because with a normal distribution the sample mean is more precise than the sample median. Instead they should think more, as you have done, about the true quantity of interest.
One sidebar: we have a nonparametric confidence interval for the population median but there is no nonparametric method (other than perhaps the numerically intensive empirical likelihood method) to get a confidence interval for the population mean. If you want to stay distribution-free you might concentrate on the median.
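As a sketch of the distribution-free interval for the median mentioned above, one can invert the sign test using order statistics and the Binomial(n, 1/2) distribution; the function name `median_ci` and the implementation details below are my own illustration, not any particular package's API.

```python
# Sketch of a distribution-free confidence interval for the population median,
# obtained by inverting the sign test: the interval runs between two order
# statistics chosen via the Binomial(n, 1/2) distribution.
import numpy as np
from scipy.stats import binom

def median_ci(x, conf=0.95):
    """Order-statistic CI for the median with coverage at least `conf`."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    alpha = 1.0 - conf
    # largest k (1-based order-statistic index) with P(Binom(n, 1/2) <= k - 1) <= alpha/2
    k = 0
    while binom.cdf(k, n, 0.5) <= alpha / 2:
        k += 1
    if k == 0:
        raise ValueError("sample too small for the requested confidence level")
    return x[k - 1], x[n - k]   # x_(k) and x_(n-k+1) in 1-based notation

# Example: for n = 10 and conf = 0.95 this returns the 2nd and 9th order statistics.
print(median_ci(np.random.default_rng(2).normal(size=10)))
```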
Note that the central limit theorem is far less useful than it seems, as has been discussed elsewhere on this site. It effectively assumes that the variance is known, or that the distribution is symmetric and has a shape such that the sample variance is a competitive estimator of dispersion.
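To illustrate that point, here is a rough simulation sketch (the lognormal distribution, n = 30, and the replication count are arbitrary choices of mine): the nominal 95% t-interval for the mean of a markedly skewed distribution typically covers the true mean less often than advertised.

```python
# Rough illustration of the CLT caveat: empirical coverage of the nominal 95%
# t-interval for the mean when the data are skewed (lognormal). Distribution,
# sample size, and replication count are arbitrary illustrative choices.
import numpy as np
from scipy.stats import t

rng = np.random.default_rng(1)
n, reps = 30, 20_000
true_mean = np.exp(0.5)                  # mean of a lognormal(0, 1)
crit = t.ppf(0.975, df=n - 1)

hits = 0
for _ in range(reps):
    x = rng.lognormal(mean=0.0, sigma=1.0, size=n)
    half_width = crit * x.std(ddof=1) / np.sqrt(n)
    hits += abs(x.mean() - true_mean) <= half_width
print("empirical coverage of the nominal 95% t-interval:", hits / reps)
```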