gusl: (Default)
I am wondering if, instead of running MCMC and hoping that it has "mixed" ("achieved stationarity"), there are approaches based on computing (or approximating) the principal left eigenvector of the transition matrix.

Of course, in continuous spaces, this "matrix" has one entry for every pair of points in S × S, where S is the space our parameters live in... so our "principal eigenvector" becomes a "principal eigenfunction". Functional analysts, how do you compute this?
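To spell out what the eigenfunction problem is: writing k(x, y) for the density of the transition kernel (assuming it has one; a Metropolis-Hastings kernel also puts an atom at x from rejected moves), the stationary density \pi is the eigenvalue-1 solution of

\pi(y) = \int_S \pi(x) \, k(x, y) \, dx,

the continuous analogue of \pi = \pi P.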

If it helps, we might want to choose a sparse proposal (such as the one corresponding to Gibbs sampling, in which all transitions that change more than one parameter have probability density zero).
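For a finite state space, at least, the computation is straightforward. Here is a minimal sketch (my own illustration, not a method from any particular source) that approximates the principal left eigenvector of a row-stochastic transition matrix by power iteration:

    import numpy as np

    def stationary_distribution(P, tol=1e-12, max_iter=100_000):
        """Principal left eigenvector (stationary distribution) of a
        row-stochastic matrix P, by power iteration: pi <- pi P."""
        pi = np.full(P.shape[0], 1.0 / P.shape[0])   # uniform starting point
        for _ in range(max_iter):
            new = pi @ P
            new /= new.sum()                         # keep it a probability vector
            if np.abs(new - pi).max() < tol:
                return new
            pi = new
        return pi

    # Toy 3-state chain; the output satisfies pi ≈ pi @ P.
    P = np.array([[0.9, 0.1, 0.0],
                  [0.2, 0.7, 0.1],
                  [0.0, 0.3, 0.7]])
    print(stationary_distribution(P))

The catch for the sampling use case is that the state space is typically huge or continuous, so P cannot be written down explicitly; a sparse proposal like the Gibbs one keeps P sparse, but the number of states still grows exponentially with the number of parameters, which is presumably why one samples in the first place.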
gusl: (Default)
Ken Binmore - Making Decisions in Large Worlds is a very interesting philosophical paper that attacks the "school" of "Bayesianites", by which I think he means those who subscribe to some type of extreme Bayesianism.
gusl: (Default)
I'm very interested in modeling how people draw maps from memory. They use many kinds of information: distances between cities, angles along coastlines and borders, shapes and sizes of areas, etc., all of which must somehow come together coherently (and, hopefully, approximately correctly).

Since the piece of paper enforces coherence for free, it would be a waste of memory resources for humans to try to enforce it in their heads.

It's interesting to ponder how people integrate conflicting information. This is just like... probability elicitation!
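A toy version of that integration problem, as a sketch of my own (the cities, distances, and stress function below are all made up for illustration): treat remembered pairwise distances as noisy, mutually inconsistent constraints, and find the planar layout that best reconciles them by least squares, supplying the coherence the sheet of paper would otherwise enforce for free.

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical remembered distances between four cities (indices 0-3);
    # they are noisy and not exactly realizable in the plane.
    pairs = [(0, 1, 1.0), (1, 2, 1.1), (0, 2, 2.3), (2, 3, 0.9), (0, 3, 2.9)]

    def stress(flat):
        # Sum of squared gaps between the layout's distances and memory.
        xy = flat.reshape(-1, 2)
        return sum((np.linalg.norm(xy[i] - xy[j]) - d) ** 2
                   for i, j, d in pairs)

    layout = minimize(stress, np.random.default_rng(0).normal(size=8)).x
    print(layout.reshape(-1, 2))   # a coherent map, up to rotation/translation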
