gusl:
Being fond of non-parametrics, I have an instinctive dislike of Gaussian models (maybe because they are overused by people who don't know statistics).

Q: So why might one be satisfied with the Gaussian model while knowing that it is very wrong (e.g. when you know the truth should be a skewed, fat-tailed distribution, such as a power law)?

A: Because it will estimate the mean and variance "correctly". For any prior, if you Gaussianize it (i.e. replace it with the Gaussian having the same mean and variance) and do the usual Bayesian update, the posterior obtained will have the same (μ, σ²) as the posterior obtained by using the original prior... and from those two moments you can get conservative bounds on the quantiles of the posterior, e.g. through Chebyshev's inequality. It's not giving you maximum power, but it's simple.
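A minimal sketch of the kind of bound this buys you, assuming only that (μ, σ²) are correct (function name is mine):

```python
import math

def chebyshev_interval(mu, sigma2, alpha):
    """Conservative (1 - alpha) interval from the posterior's mean and
    variance alone, via Chebyshev's inequality:
        P(|X - mu| >= k*sigma) <= 1/k**2,
    so choosing k = 1/sqrt(alpha) guarantees the interval mu +/- k*sigma
    captures at least 1 - alpha of the mass, whatever the true shape."""
    k = 1.0 / math.sqrt(alpha)
    sigma = math.sqrt(sigma2)
    return mu - k * sigma, mu + k * sigma

# e.g. a posterior with mu = 0, sigma2 = 1: a 95% Chebyshev interval
lo, hi = chebyshev_interval(0.0, 1.0, 0.05)
# roughly (-4.47, 4.47) -- much wider than the Gaussian +/-1.96,
# which is the price of making no shape assumption.
```

The loss of power is visible directly: the Chebyshev interval is about 2.3× wider than the exact Gaussian one at the 95% level.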

I think this also extends to the multivariate case, i.e. all the means and covariances are preserved when you Gaussianize the prior. And you can still use a multivariate analogue of Chebyshev's inequality based on the Mahalanobis distance from the mean.
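A sketch of that multivariate analogue, using the standard dimension-dependent Chebyshev bound P(D_M(X, μ) ≥ t) ≤ d/t² for any distribution on R^d with mean μ and covariance Σ (function names are mine):

```python
import numpy as np

def mahalanobis_distance(x, mu, Sigma):
    """Mahalanobis distance of x from mean mu under covariance Sigma."""
    diff = np.asarray(x, dtype=float) - np.asarray(mu, dtype=float)
    return float(np.sqrt(diff @ np.linalg.solve(Sigma, diff)))

def mahalanobis_chebyshev_bound(t, d):
    """Multivariate Chebyshev: P(D_M(X, mu) >= t) <= d / t**2.
    Follows from E[(X-mu)^T Sigma^{-1} (X-mu)] = d plus Markov's
    inequality, so it needs only the first two moments."""
    return min(1.0, d / t**2)

# e.g. in d = 2, the chance of landing 5 or more Mahalanobis units
# from the mean is at most 2/25 = 0.08, regardless of the shape.
print(mahalanobis_chebyshev_bound(5.0, 2))
```

So the same trade applies in higher dimensions: correct means and covariances buy conservative, shape-free tail bounds.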

February 2020
