The problem is to find a test that is in some way optimal. There are several approaches to finding such a test. The subject is covered in detail in many books on statistics, for example, see [72, 51, 80, 83].
In the Bayesian approach we assign costs to our decisions; in particular, we introduce positive numbers $C_{ij}$, $i, j = 0, 1$, where $C_{ij}$ is the cost incurred by choosing hypothesis $H_i$ when hypothesis $H_j$ is true. We define the conditional risk $R_j$ of a decision rule $\delta$ for each hypothesis as

\[
R_j(\delta) = C_{0j}\, P_j(\mathcal{R}_0) + C_{1j}\, P_j(\mathcal{R}_1), \qquad j = 0, 1,
\]

where $P_j$ is the probability distribution of the data when hypothesis $H_j$ is true and $\mathcal{R}_i$ is the region of the observation space in which we choose hypothesis $H_i$. Next we assign probabilities $\pi_0$ and $\pi_1 = 1 - \pi_0$, called priors, to the occurrence of hypotheses $H_0$ and $H_1$, respectively, and we define the Bayes risk as the overall average cost $r(\delta) = \pi_0 R_0(\delta) + \pi_1 R_1(\delta)$. The Bayes rule is the decision rule that minimizes the Bayes risk $r(\delta)$.
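As a minimal numerical sketch (not from the article), consider the toy problem of deciding between $H_0$: $x \sim N(0,1)$ (signal absent) and $H_1$: $x \sim N(\mu,1)$ (signal present) with the threshold rule "choose $H_1$ when $x > \eta$"; all names, costs, and parameter values below are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Toy example (names and values are illustrative, not from the article):
#   H0: x ~ N(0, 1) (signal absent),  H1: x ~ N(mu, 1) (signal present),
# with the decision rule delta: "choose H1 when x > eta".
mu = 1.0   # assumed signal amplitude
eta = 0.5  # assumed decision threshold

# C[i, j] = cost of choosing H_i when H_j is true (zero cost when correct).
C = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# P_j(R_i): probability of landing in decision region R_i under hypothesis H_j.
P0_R1 = norm.sf(eta, loc=0.0)  # false alarm probability
P1_R1 = norm.sf(eta, loc=mu)   # detection probability
P0_R0, P1_R0 = 1.0 - P0_R1, 1.0 - P1_R1

# Conditional risks R_j(delta) = C_0j * P_j(R_0) + C_1j * P_j(R_1).
R0 = C[0, 0] * P0_R0 + C[1, 0] * P0_R1
R1 = C[0, 1] * P1_R0 + C[1, 1] * P1_R1

# Bayes risk r(delta) = pi_0 * R_0 + pi_1 * R_1 for assumed priors.
pi0 = 0.5
r = pi0 * R0 + (1.0 - pi0) * R1
print(f"R0 = {R0:.4f}, R1 = {R1:.4f}, Bayes risk r = {r:.4f}")
# Minimizing r over eta would recover the Bayes rule; for these 0-1 costs and
# equal priors the optimal threshold is eta = mu / 2.
```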
Very often in practice we do not have control over, or access to, the mechanism generating the state of nature, and we are not able to assign priors to the various hypotheses. In such a case one criterion is to seek a decision rule that minimizes, over all $\delta$, the maximum of the conditional risks, $R_0(\delta)$ and $R_1(\delta)$. A decision rule that fulfills this criterion is called a minimax rule.
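Continuing the same toy Gaussian setup, a crude illustrative sketch (not a procedure from the article) is to scan thresholds and keep the one whose larger conditional risk is smallest; for symmetric 0-1 costs this equalizes the two error probabilities.

```python
import numpy as np
from scipy.stats import norm

mu = 1.0  # assumed signal amplitude (illustrative)

def conditional_risks(eta, mu=mu):
    """Conditional risks of the rule 'choose H1 when x > eta' with 0-1 costs:
    R0 = false alarm probability, R1 = missed detection probability."""
    R0 = norm.sf(eta, loc=0.0)   # P0(R1): choose H1 when H0 is true
    R1 = norm.cdf(eta, loc=mu)   # P1(R0): choose H0 when H1 is true
    return R0, R1

# Scan thresholds and keep the one minimizing the worse of the two risks.
etas = np.linspace(-3.0, 4.0, 7001)
worst = np.array([max(conditional_risks(e)) for e in etas])
eta_minimax = etas[np.argmin(worst)]
R0, R1 = conditional_risks(eta_minimax)
print(f"minimax threshold ~ {eta_minimax:.3f}, R0 = {R0:.4f}, R1 = {R1:.4f}")
# By the symmetry of this Gaussian model the minimax threshold is ~ mu / 2,
# where the two conditional risks are equal.
```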
In many problems of practical interest the imposition of a specific cost structure on the decisions made is not possible or desirable. The Neyman–Pearson approach involves a trade-off between the two types of errors that one can make in choosing a particular hypothesis. The Neyman–Pearson design criterion is to maximize the power of the test (probability of detection) subject to a chosen significance of the test (false alarm probability).
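In the toy Gaussian setup used above, the Neyman–Pearson construction is explicit: fix the false alarm probability and place the threshold at the corresponding upper quantile under $H_0$ (a sketch with illustrative values, not taken from the article).

```python
from scipy.stats import norm

# Toy Gaussian setup as above (names illustrative). Fix the significance
# (false alarm probability) alpha, then maximize the power (detection
# probability) over threshold rules 'choose H1 when x > eta'.
alpha = 0.01  # chosen false alarm probability
mu = 1.0      # assumed signal amplitude

# The false alarm probability is P0(x > eta) = alpha, so the threshold is
# the upper alpha-quantile of the noise-only distribution.
eta = norm.isf(alpha, loc=0.0)

# Power of the test: probability of detection under H1.
power = norm.sf(eta, loc=mu)
print(f"threshold eta = {eta:.3f}, power = {power:.4f}")
```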
It is remarkable that all three very different approaches – Bayesian, minimax, and Neyman–Pearson – lead to the same test, called the likelihood ratio test [44]. The likelihood ratio $\Lambda$ is the ratio of the pdf when the signal is present to the pdf when it is absent:

\[
\Lambda(x) = \frac{p_1(x)}{p_0(x)}.
\]
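For the same toy Gaussian model, a short sketch (illustrative, with hypothetical names) makes the equivalence plausible: $\log \Lambda(x)$ is monotonic in $x$, so thresholding $\Lambda$ is the same as thresholding the data, and the three criteria differ only in where the threshold is placed.

```python
import numpy as np
from scipy.stats import norm

# Likelihood ratio for the toy Gaussian model (illustrative):
#   p0: x ~ N(0, 1) (signal absent),  p1: x ~ N(mu, 1) (signal present).
mu = 1.0

def likelihood_ratio(x, mu=mu):
    """Lambda(x) = p1(x) / p0(x)."""
    return norm.pdf(x, loc=mu) / norm.pdf(x, loc=0.0)

# Here log Lambda(x) = mu * x - mu**2 / 2 is increasing in x, so the test
# 'Lambda(x) > lambda0' is equivalent to 'x > eta'; the Bayesian, minimax,
# and Neyman-Pearson criteria pick different thresholds for the same test.
for xi in np.linspace(-2.0, 3.0, 6):
    L = likelihood_ratio(xi)
    print(f"x = {xi:5.2f}  Lambda = {L:8.4f}  log Lambda = {np.log(L):7.4f}")
```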