[personal profile] gusl
Why do we care about finding MLE or MAP parameter settings? The focus on the maximum reminds me of the mode, a statistic that one doesn't care about most of the time (instead, one tends to focus on the mean and median).
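To make the mode-vs-mean/median contrast concrete, here is a minimal sketch (the log-normal distribution and the crude histogram mode estimate are my own choices, not from the post): for a skewed distribution, the three summaries can disagree substantially.

```python
# Illustration: for a right-skewed distribution, mode < median < mean.
import numpy as np

rng = np.random.default_rng(0)
# Log-normal samples: heavily right-skewed.
samples = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

mean = samples.mean()        # theoretically exp(0.5) ~ 1.65
median = np.median(samples)  # theoretically exp(0) = 1.0
# Crude mode estimate: center of the tallest histogram bin.
counts, edges = np.histogram(samples, bins=200)
i = np.argmax(counts)
mode = 0.5 * (edges[i] + edges[i + 1])  # theoretically near exp(-1) ~ 0.37

print(mode, median, mean)
```

Reporting only the mode (as MLE/MAP effectively does for the posterior) throws away exactly the asymmetry that separates these three numbers.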

Take the graph below:


[image: likelihood graph]

If your posterior looks like this, is the red circle really your best guess about theta?

Why don't we work with the full uncertainty about theta? Is it because doing so tends to be computationally expensive?
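A small sketch of what "working with the full uncertainty" would look like in a case where it is cheap (the two-mode toy posterior here is invented for illustration, not the one in the graph above): on a grid, the posterior mean and the MAP point can be computed side by side, and for a bimodal density they disagree badly.

```python
# Toy bimodal posterior on a grid: MAP sits on the taller mode,
# while the posterior mean lies between the modes.
import numpy as np

theta = np.linspace(-5.0, 5.0, 2001)
dx = theta[1] - theta[0]

# Hypothetical unnormalized posterior: mixture of two unit-variance Gaussians.
post = 0.6 * np.exp(-0.5 * (theta + 2.0) ** 2) \
     + 0.4 * np.exp(-0.5 * (theta - 2.0) ** 2)
post /= post.sum() * dx                 # normalize on the grid

map_theta = theta[np.argmax(post)]      # the maximum: -2.0 (the taller mode)
mean_theta = (theta * post).sum() * dx  # mixture mean: 0.6*(-2) + 0.4*2 = -0.4

print(map_theta, mean_theta)
```

In one dimension this grid approach is trivial; the expense the post alludes to comes from the fact that a grid over a d-dimensional theta grows exponentially in d.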

---

Suppose you have a very expensive Bayesian update. The standard answer is to use MCMC to sample from the posterior. But suppose you're not interested in the whole posterior, just a small region of it (or even a particular point, which is a region of measure zero). Are there ways to prune away points that are going to end up outside your region of interest, or to estimate that point directly?
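As a baseline for comparison, here is what the naive approach looks like: run a plain Metropolis sampler over the whole posterior and then just count the fraction of draws landing in the region of interest. (The standard-normal target, the region [1, 2], and all tuning choices are assumptions made for this sketch; the question of pruning away out-of-region proposals *before* paying for them is exactly what this baseline does not do.)

```python
# Baseline: estimate posterior mass in a region of interest by
# whole-posterior Metropolis sampling, then counting.
import math
import random

random.seed(0)

def log_post(theta):
    # Toy unnormalized log-posterior: standard normal.
    return -0.5 * theta * theta

def metropolis(n, step=1.0):
    theta, lp = 0.0, log_post(0.0)
    draws = []
    for _ in range(n):
        prop = theta + random.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            theta, lp = prop, lp_prop
        draws.append(theta)
    return draws

draws = metropolis(50_000)
# Posterior mass in the region of interest [1, 2]:
mass = sum(1.0 <= t <= 2.0 for t in draws) / len(draws)
print(mass)  # should be near Phi(2) - Phi(1) ~ 0.136
```

Every draw here costs a full posterior evaluation, even the large majority that fall outside [1, 2]; the post's question is whether that waste can be avoided.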

(no subject)

Date: 2007-10-14 08:19 pm (UTC)
From: [identity profile] trufflesniffer.livejournal.com
Isn't the usual response to this something like: "Ah, but we're assuming the property of asymptotic normality, and so we won't get that sort of graph"; and, "If we're not confident the size of the sample is sufficient for the above property to be assumed, we'll bootstrap"?

(Sorry, I'm not completely sure about this; it's just roughly my understanding of the theory underlying ML estimation.)
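A rough sketch of the commenter's second point (the data-generating model, sample size, and seed here are invented for illustration): when asymptotic normality of the MLE is in doubt at a given sample size, bootstrap resampling gives an empirical picture of the estimator's sampling distribution instead.

```python
# Percentile bootstrap for the MLE of an exponential rate on a small sample.
import numpy as np

rng = np.random.default_rng(1)
data = rng.exponential(scale=2.0, size=40)  # small sample; true rate = 0.5

# MLE of the rate is 1/mean; recompute it on each bootstrap resample.
boot = []
for _ in range(2000):
    resample = rng.choice(data, size=data.size, replace=True)
    boot.append(1.0 / resample.mean())

lo, hi = np.percentile(boot, [2.5, 97.5])   # percentile bootstrap interval
print(lo, hi)
```

If the bootstrap distribution of the MLE came out visibly skewed or multimodal, that would be a warning against trusting the normal approximation, which is the commenter's hedge.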

