March 12: Mark Tygert, CIMS
Obvious yet overlooked means for testing statistical theories: an informal
presentation
There are at least two obvious ways to check whether measured data
disagrees with a specified statistical theory. For definiteness, suppose
that an experiment produces independent and identically distributed
(i.i.d.) draws, and that we would like to test whether the draws fail to
arise from the specific probability density specified by the proposed
statistical theory.
One test is to estimate the cumulative distribution function (the
indefinite integral of the probability density function) using the
empirical data, and then to consider the discrepancy between the empirical
distribution and the actual distribution predicted by the theory.
Kolmogorov and Smirnov introduced such a test in the 1930s, along with a
profound analysis of its significance levels, and by now there are many
interesting variants.
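For concreteness, here is a minimal Python sketch of such a comparison,
assuming (purely for illustration) that the theory specifies a standard
normal density and using SciPy's implementation of the Kolmogorov-Smirnov
test; the data below are simulated stand-ins, not from the talk:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    draws = rng.standard_normal(1000)  # stand-in for the measured i.i.d. draws
    # Compare the empirical distribution of the draws against the cumulative
    # distribution function predicted by the theory (here, a standard normal).
    statistic, p_value = stats.kstest(draws, stats.norm.cdf)
    print(f"KS statistic = {statistic:.4f}, p-value = {p_value:.4f}")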
A different, seemingly obvious test is to check whether any of the given
i.i.d. draws has a probability that is substantially smaller than expected
under the specified probability density function. Such testing for
generalized outliers is far from standard practice, even though it can be
much more powerful in many circumstances (indeed, the indefinite integral
in the definition of the cumulative distribution function for the
Kolmogorov-Smirnov test can smooth over variations in the probability
density function). We will discuss two variations on this alternative,
complementary test, and note when they are more effective than the
classical approaches.
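One simple Monte Carlo instance of this idea (an illustrative sketch only,
not necessarily either of the variants to be discussed) compares the
smallest hypothesized density value attained by any observed draw against
the distribution of that minimum when the draws truly come from the
hypothesized density:

    import numpy as np
    from scipy import stats

    def min_density_test(draws, density, sampler, n_sim=10_000):
        """Monte Carlo test for 'generalized outliers': is the smallest
        hypothesized density value attained by any draw smaller than
        expected under the hypothesized density itself?
        `density(x)` evaluates the hypothesized pdf; `sampler(n)` draws
        n i.i.d. points from it."""
        n = len(draws)
        observed_min = density(np.asarray(draws)).min()
        # Null distribution of the minimum density value, by simulation.
        simulated_mins = np.array(
            [density(sampler(n)).min() for _ in range(n_sim)])
        # P-value: how often a null sample attains an even smaller minimum.
        return (1 + np.sum(simulated_mins <= observed_min)) / (1 + n_sim)

    rng = np.random.default_rng(1)
    data = rng.standard_normal(200)  # stand-in for the measured i.i.d. draws
    p = min_density_test(data, stats.norm.pdf, lambda n: rng.standard_normal(n))
    print(f"p-value for the minimum-density (generalized outlier) test: {p:.4f}")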
Further information is available at
http://cims.nyu.edu/~tygert/stats.ps
and
http://cims.nyu.edu/~tygert/stats.pdf