[personal profile] gusl
Evil Genie: Is it possible to come up with an infinite sequence of Gaussians X_1, X_2, ... centered around 0 such that adding each new one never changes the distribution of the total Sigma_{k<=i} X_k (i.e., the total s.d. stays at 1 forever)? By how much will the s.d. of each new Gaussian need to increase each time? What kind of progression is this?


Limit Distribution for Products:
Is there a limit distribution for products, i.e., an analog of the Central Limit Theorem?

In this new case, adding a constant term, i.e., pushing the mean of the distribution to the right or left, will affect the shape of the total distribution dramatically. Also, it will not be scale-invariant: multiplying distributions falling mostly within [-1,1] will make the s.d. smaller, whereas distributions falling mostly outside of that range will make the s.d. even larger. A natural question is: at what s.d. is multiplication stable, i.e., for what value of sd(F) is it the case that sd(F*F) = sd(F)?
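
A rough sketch of that check in Python (assuming "multiplying distributions" means multiplying independent samples drawn from them):

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# For each candidate s.d., multiply two independent zero-mean Gaussian
# samples pointwise and compare sd(F*F) to sd(F).
for sd in [0.5, 1.0, 2.0]:
    x = rng.normal(0.0, sd, n)
    y = rng.normal(0.0, sd, n)
    print(f"sd(F) = {sd}, sd(F*F) ~= {np.std(x * y):.3f}")

For independent zero-mean variables, sd(X*Y) = sd(X)*sd(Y), so under that reading the stable point is sd(F) = 1.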

I don't know what the product of 2 Gaussians looks like, or how to find out, other than by programming a simulation.
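
One such simulation might look like this (a minimal sketch, taking the product of independent draws from two standard normals):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 1_000_000

# Product of independent draws from two standard normals centered at 0.
z = rng.normal(size=n) * rng.normal(size=n)

plt.hist(z, bins=200, range=(-4, 4), density=True)
plt.title("Product of two independent standard normals")
plt.show()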

Claim: Either this distribution is symmetric around 0, or it will be concentrated on the positive values (i.e., the density will be 0 for all negative values).
Argument: if there is more density in the positive values than in the negative values, or vice-versa, then F^2 will have even more density in the positives, F^4 even more, and so forth.

My intuition says that multiplying two Gaussians centered around 0 will give you a shape that looks like a McDonald's M. I don't know why.

(no subject)

Date: 2007-01-20 05:55 am (UTC)
From: [identity profile] bhudson.livejournal.com
I thought Kalman filters were what you got when you multiplied two gaussians, but my memory is fuzzy.

(no subject)

Date: 2007-01-20 02:18 pm (UTC)
From: [identity profile] altamira16.livejournal.com
That doesn't sound right. Kalman filters are recursive algorithms used to make predictive models. I know that they are used in target tracking.


The product of two gaussians is a gaussian according to the gaussian product rule.
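
For what it's worth, here is a quick numerical check of that rule (a sketch; "product" here meaning the pointwise product of the two density functions, the sense used in Kalman filtering):

import numpy as np

def gauss(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

mu1, var1 = 0.0, 1.0
mu2, var2 = 2.0, 0.5

# Closed-form parameters of the (unnormalized) product density.
var12 = 1.0 / (1.0 / var1 + 1.0 / var2)
mu12 = var12 * (mu1 / var1 + mu2 / var2)

x = np.linspace(-5, 5, 1001)
ratio = gauss(x, mu1, var1) * gauss(x, mu2, var2) / gauss(x, mu12, var12)

# If the rule holds, the ratio is a constant (the normalizing factor).
print("ratio is constant:", np.allclose(ratio, ratio[0]))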

(no subject)

Date: 2007-01-20 06:47 pm (UTC)
From: [identity profile] bhudson.livejournal.com
OK, then it's probably in learning what KFs were that I learned the gaussian product rule. My knowledge in this field is that I read one paper years ago just so I could understand WTF my coworkers were saying at lunch.

(no subject)

Date: 2007-01-20 11:28 pm (UTC)
From: [identity profile] altamira16.livejournal.com
I learned about Kalman filters for work. I was trying to clarify what I thought they were due to the fact that I didn't really get to apply them. What do you do?

(no subject)

Date: 2007-01-21 01:06 am (UTC)
From: [identity profile] bhudson.livejournal.com
I am piled high and deep in triangles. But before I did that, I worked at NASA Ames, where various people around me were trying to figure out how to automatically detect a failure using something better than "engineer says a reading of 0.7 is bad."

(no subject)

Date: 2007-01-20 09:50 pm (UTC)
From: [identity profile] easwaran.livejournal.com
When adding independent variables, the variance of the sum equals the sum of the variances. Since the standard deviation is just the square root of the variance, there is no series satisfying your Evil Genie constraints.
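
For concreteness, a minimal numerical illustration of the variance-addition fact:

import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Var(X + Y) = Var(X) + Var(Y) for independent X and Y, so the s.d. of the
# sum is sqrt(sd(X)^2 + sd(Y)^2), which exceeds sd(X) whenever sd(Y) > 0.
x = rng.normal(0.0, 1.0, n)
y = rng.normal(0.0, 2.0, n)
print(np.var(x + y), np.var(x) + np.var(y))  # both ~= 5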

(no subject)

Date: 2007-01-21 06:06 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I disagree.

Because of the CLT, the standard deviation keeps getting smaller, unless the s.d. of the new distribution being added gets increasingly larger. But there is always an s.d. for the last distribution that will make the total variance just big enough to keep the total s.d. the same as before.

(no subject)

Date: 2007-01-22 01:19 am (UTC)
From: [identity profile] easwaran.livejournal.com
Oh, were you talking about the sum of n random variables divided by n? If you do the division, then the standard deviation gets smaller, because dividing a variable by n divides the variance by n^2. So if you take the average of n random variables, each of which has variance equal to the next odd number (1, 3, 5, ...), then the result will have variance 1, just as the first does. And if they all have the same mean, then so will the result. This still leaves an open question as to whether the actual distributions will be the same.
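
A quick numerical check of this construction (a sketch, assuming zero-mean Gaussians with variances 1, 3, 5, ...):

import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

# The variances 1 + 3 + ... + (2n-1) sum to n^2, and dividing the sum by n
# divides the variance by n^2, so the average always has variance 1.
for n in [1, 2, 5, 10]:
    variances = 2 * np.arange(1, n + 1) - 1
    samples = rng.normal(0.0, np.sqrt(variances), size=(n_samples, n))
    print(f"n = {n}: sd of the average ~= {samples.mean(axis=1).std():.3f}")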

(no subject)

Date: 2007-01-22 05:55 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Oh yes, that's what I meant.
