
4.2 Maximum a posteriori probability estimation

Suppose that in a given estimation problem we are not able to assign a particular cost function $C(h',h)$. A natural choice is then a uniform cost function, equal to 0 over a certain interval $I_h$ of the parameter $h$. From Bayes' theorem [17] we have
\[
p(h|x) = \frac{p(x,h)\,p(h)}{p(x)}, \qquad (27)
\]
where $p(x)$ is the probability distribution of the data $x$. Then from Equation (24) one can deduce that for each data set $x$ the Bayes estimate is any value of $h$ that maximizes the conditional probability $p(h|x)$. The density $p(h|x)$ is also called the a posteriori probability density of the parameter $h$, and the estimator that maximizes $p(h|x)$ is called the maximum a posteriori (MAP) estimator. It is denoted by $\hat{h}_{\mathrm{MAP}}$. Since $p(x)$ does not depend on $h$, maximizing $p(h|x)$ is equivalent to maximizing the product $p(x,h)\,p(h)$; setting the derivative of its logarithm with respect to $h$ to zero, we find that the MAP estimators are solutions of the following equation
\[
\frac{\partial \log p(x,h)}{\partial h} = -\frac{\partial \log p(h)}{\partial h}, \qquad (28)
\]
which is called the MAP equation.
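As a concrete illustration (not part of the original text), consider a single Gaussian datum $x$ with likelihood $p(x,h) = N(x; h, \sigma^2)$ and a Gaussian prior $p(h) = N(h; \mu_0, \tau^2)$; the parameters $\sigma$, $\mu_0$, and $\tau$ are introduced here only for this example. The MAP equation (28) then reads
\[
\frac{x-h}{\sigma^2} = \frac{h-\mu_0}{\tau^2},
\]
with the solution
\[
\hat{h}_{\mathrm{MAP}} = \frac{\tau^2 x + \sigma^2 \mu_0}{\sigma^2 + \tau^2},
\]
a weighted average of the datum and the prior mean. As the prior becomes uninformative ($\tau \to \infty$), the MAP estimate reduces to the maximum likelihood estimate $\hat{h} = x$.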
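For a numerical check, the following minimal Python sketch (an illustration under assumed values, not from the source) maximizes the posterior for the Gaussian example above by minimizing its negative logarithm, and compares the result with the closed-form solution:

    from scipy.optimize import minimize_scalar

    # Arbitrary illustrative values (assumptions, not from the text).
    x = 1.2               # observed datum
    sigma = 0.5           # likelihood width: p(x, h) = N(x; h, sigma^2)
    mu0, tau = 0.0, 1.0   # Gaussian prior: p(h) = N(h; mu0, tau^2)

    def neg_log_posterior(h):
        # -log[p(x, h) p(h)] up to an additive constant; p(x) does not
        # depend on h, so it drops out of the maximization.
        return (x - h)**2 / (2 * sigma**2) + (h - mu0)**2 / (2 * tau**2)

    # Maximize the posterior by minimizing its negative logarithm.
    h_map_numeric = minimize_scalar(neg_log_posterior).x

    # Closed-form solution of the MAP equation for this Gaussian case.
    h_map_exact = (tau**2 * x + sigma**2 * mu0) / (sigma**2 + tau**2)

    print(h_map_numeric, h_map_exact)  # both approximately 0.96

The numerical maximizer and the analytic solution of the MAP equation agree, which is a convenient sanity check when no closed form is available and the MAP equation must be solved numerically.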