why MLE, MAP?
Oct. 13th, 2007 12:19 pm
Why do we care about finding MLE or MAP parameter settings? The focus on a maximum reminds me of the mode (the MAP estimate is exactly the mode of the posterior), a statistic that one doesn't care about most of the time. (Instead, one tends to focus on the mean and median.)
Take the graph below:

[likelihood graph]

If your posterior looks like this, is the red circle really your best guess about theta?
Why don't we work with the full uncertainty about theta? Because it tends to be computationally expensive to do so?
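For concreteness, here is a minimal sketch (my example, not from the post's figure) using a skewed Beta(2, 8) posterior: the MAP estimate is the posterior mode, and for an asymmetric posterior it sits noticeably away from the mean and median, which is exactly the worry about the red circle above.

```python
# Sketch (assumptions mine): compare the mode (MAP), mean, and median
# of a skewed Beta(2, 8) posterior over theta in [0, 1].
from scipy import stats

a, b = 2, 8
posterior = stats.beta(a, b)

map_estimate = (a - 1) / (a + b - 2)   # mode of Beta(a, b) when a, b > 1
posterior_mean = posterior.mean()      # a / (a + b) = 0.200
posterior_median = posterior.ppf(0.5)  # ~0.18

print(f"MAP (mode):       {map_estimate:.3f}")      # 0.125
print(f"posterior mean:   {posterior_mean:.3f}")    # 0.200
print(f"posterior median: {posterior_median:.3f}")  # ~0.179
```

The more skewed the posterior, the further apart these three summaries drift, so the point where the posterior peaks need not be a representative value of theta.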
---
Suppose you have a very expensive Bayesian update. The standard answer is to use MCMC to sample from the posterior. But suppose you're not interested in the whole posterior, just a small region of it (or even a particular point, which is a region of measure zero). Are there ways to prune away points that are going to end up outside your region of interest, or to estimate your point directly?
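To make the setup concrete, here is a sketch (assumptions mine: the unnormalized log-posterior, the step size, and the region [0.5, 1.5] are all made up) of a plain random-walk Metropolis sampler that explores the whole posterior and only afterwards discards the samples outside the region of interest. The question above is whether that discarded work could be avoided up front instead.

```python
# Sketch: random-walk Metropolis over a hypothetical 1-D posterior, then keep
# only the samples that fall inside a region of interest.
import math
import random

def log_posterior(theta):
    # Hypothetical unnormalized log-posterior: a skewed, lumpy bump.
    return -0.5 * (theta - 1.0) ** 2 + math.log(1.0 + 0.5 * math.exp(-0.5 * (theta - 3.0) ** 2))

def metropolis(log_p, theta0=0.0, n_samples=10_000, step=0.5, seed=0):
    rng = random.Random(seed)
    theta, lp = theta0, log_p(theta0)
    samples = []
    for _ in range(n_samples):
        proposal = theta + rng.gauss(0.0, step)
        lp_prop = log_p(proposal)
        # Accept with probability min(1, p(proposal) / p(theta)).
        if lp_prop >= lp or rng.random() < math.exp(lp_prop - lp):
            theta, lp = proposal, lp_prop
        samples.append(theta)
    return samples

samples = metropolis(log_posterior)
region = [t for t in samples if 0.5 <= t <= 1.5]  # region of interest (arbitrary here)
print(f"kept {len(region)} of {len(samples)} samples "
      f"({100.0 * len(region) / len(samples):.1f}%) inside [0.5, 1.5]")
```

Everything outside the region is simulated at full cost and then thrown away, which is the inefficiency the question is poking at.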