[personal profile] gusl
Given a square matrix B, how does one test whether the sequence defined by B^p converges to the zero matrix as p -> infinity?

(no subject)

Date: 2007-11-11 11:41 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I am reminded that this kind of convergence also tells you the steady state of a Markov chain: which states are accessible from which states, and the limiting probability of being in any given state. Of course, you'd never get the zero matrix in that case, since the rows always sum to 1.
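For illustration, here is a pure-Python sketch of that remark; the two-state chain and its transition probabilities are an assumed example, not from the thread. Powering a row-stochastic matrix makes every row converge to the stationary distribution, never to zero.

```python
# Sketch (assumed two-state chain): rows of P^k converge to the
# stationary distribution of the Markov chain, not to the zero matrix.

def matmul(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(m, p):
    """Compute m^p by repeated multiplication (p >= 1)."""
    out = m
    for _ in range(p - 1):
        out = matmul(out, m)
    return out

# From state 0, stay with probability 0.9; from state 1, move to
# state 0 with probability 0.5.
P = [[0.9, 0.1],
     [0.5, 0.5]]

limit = matpow(P, 50)
# Each row approaches the stationary distribution (5/6, 1/6), and each
# row of P^k still sums to 1 at every step.
```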

(no subject)

Date: 2007-11-11 11:52 pm (UTC)
From: [personal profile] neelk
Off the top of my head:

Compute the eigenvalues of the matrix. If they're all less than zero, then you know that the matrix will go to zero.

(no subject)

Date: 2007-11-11 11:53 pm (UTC)
From: [personal profile] neelk
Argh. I meant, "if the absolute values of the eigenvalues are all less than one, then the limit is the zero matrix."

(no subject)

Date: 2007-11-12 12:03 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
Proof? Proof by authority would suffice here.

(no subject)

Date: 2007-11-12 12:45 am (UTC)
From: [personal profile] neelk
Handwavy proof:
Every nonsingular square matrix A can be written as A = P * D * P^(-1), where P is the matrix of unit eigenvectors
and D is the diagonal matrix of eigenvalues. This is by the eigendecomposition theorem.

Next, we show by induction that A^k = P * D^k * P^(-1)

Case: k = 0 

  A^0 = I 
      = P * P^(-1)
      = P * I * P^(-1)
      = P * D^0 * P^(-1)

Case: k = n+1
  
  A^(n+1) = A^(n) * A
          = P * D^n * P^(-1) * A                by induction hypothesis
          = P * D^n * P^(-1) * P * D * P^(-1)   by def of A
          = P * D^n * I * D * P^(-1)
          = P * D^n * D * P^(-1)
          = P * D^(n+1) * P^(-1)

Now, note that if you have a diagonal matrix D, then D^k is also a diagonal matrix whose diagonal elements 
are the k-th powers of the diagonal elements of D. 

So if all of those diagonal elements have absolute values less than 1, then in the limit D^k will go to zero. 
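The whole argument is easy to sanity-check numerically. A minimal pure-Python sketch, where the 2x2 example matrix and the quadratic-formula eigenvalue helper are assumed for illustration:

```python
# Check: if both eigenvalues of A have modulus below 1, high powers of A
# vanish.  eig2 solves the 2x2 characteristic polynomial directly.
import cmath

def eig2(m):
    """Eigenvalues of a 2x2 matrix: roots of lambda^2 - trace*lambda + det = 0."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(m, p):
    out = m
    for _ in range(p - 1):
        out = matmul(out, m)
    return out

A = [[0.5, 0.4],
     [0.1, 0.3]]

# Both eigenvalues have modulus below 1 ...
assert all(abs(lam) < 1 for lam in eig2(A))

# ... so A^100 is numerically zero in every entry.
Ak = matpow(A, 100)
assert all(abs(x) < 1e-10 for row in Ak for x in row)
```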

(no subject)

Date: 2007-11-12 12:45 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
I have a counter-example:

The matrix:

 0  2
-1  0

only has 0 as an eigenvalue. (Any matrix that rotates and enlarges the magnitude of the vector will do too; e.g. kI * rotation matrix)


If we take powers:

B^11 =
  0  -64
 32    0

B^21 =
     0  2048
 -1024     0

etc.
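The quoted powers check out numerically; a pure-Python sketch (the eig2 helper, using the 2x2 quadratic formula, is an assumed illustration):

```python
# Verify the quoted powers of B, and compute its eigenvalues directly.
import cmath

def eig2(m):
    """Eigenvalues of a 2x2 matrix: roots of lambda^2 - trace*lambda + det = 0."""
    tr = m[0][0] + m[1][1]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return (tr + disc) / 2, (tr - disc) / 2

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matpow(m, p):
    out = m
    for _ in range(p - 1):
        out = matmul(out, m)
    return out

B = [[0, 2],
     [-1, 0]]

# The eigenvalues are +/- i*sqrt(2): nonzero, with modulus sqrt(2) > 1,
# which is why the powers blow up.
l1, l2 = eig2(B)
assert abs(abs(l1) - 2 ** 0.5) < 1e-12

assert matpow(B, 11) == [[0, -64], [32, 0]]
assert matpow(B, 21) == [[0, 2048], [-1024, 0]]
```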

(no subject)

Date: 2007-11-12 12:49 am (UTC)
From: [personal profile] neelk
Your example works because it's a singular matrix (i.e., it has a zero determinant). What I suggest still seems to work for the others. I guess you'll have to figure out something else to do for noninvertible matrices.

(no subject)

Date: 2007-11-12 12:53 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
No. The determinant of

 0  2
-1  0

is 2.

(no subject)

Date: 2007-11-12 12:58 am (UTC)
From: [identity profile] rdore.livejournal.com
Um, your matrix's eigenvalues are definitely not zero. They have norm bigger than one, too.

(no subject)

Date: 2007-11-12 02:24 am (UTC)
From: [identity profile] gustavolacerda.livejournal.com
You are correct. They are +/- sqrt(-2), i.e. +/- i*sqrt(2). I didn't catch them because I assumed they were real.

(no subject)

Date: 2007-11-12 01:22 am (UTC)
From: [identity profile] rdore.livejournal.com
For some reason, I can't reply to your proof below. But in any case, you definitely can't assume that all matrices are diagonalizable. That's false.

(no subject)

Date: 2007-11-12 04:16 pm (UTC)
From: [personal profile] neelk
Aren't all nonsingular matrices diagonalizable? Or did I screw that up somehow?

(no subject)

Date: 2007-11-12 09:37 pm (UTC)
From: [identity profile] rdore.livejournal.com
For example:

1 1
0 1

is nonsingular but not diagonalizable. The only eigenvalue is 1, and the eigenspace for 1 is only 1-dimensional.
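A quick pure-Python check of this example:

```python
# J = [[1,1],[0,1]] is nonsingular but not diagonalizable.
J = [[1, 1],
     [0, 1]]

# Nonsingular: the determinant is 1.
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
assert det == 1

# Not diagonalizable: its only eigenvalue is 1, so if J were
# diagonalizable it would equal P * I * P^(-1) = I, which it is not.
# The powers show the extra off-diagonal growth: J^p = [[1, p], [0, 1]].
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Jp = J
for _ in range(9):
    Jp = matmul(Jp, J)

assert Jp == [[1, 10], [0, 1]]   # J^10
```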

(no subject)

Date: 2007-11-12 02:47 pm (UTC)
From: [identity profile] gustavolacerda.livejournal.com
You were correct. I think "moduli" is a better term though, since the eigenvalues can be complex.

(no subject)

Date: 2007-11-12 01:21 am (UTC)
From: [identity profile] rdore.livejournal.com
So, if B is diagonalizable (over the complex numbers), then B^p -> 0 is equivalent to all the eigenvalues having norm less than one.

If B isn't diagonalizable, you'd have to look at the Jordan normal form. I think if all the eigenvalues are norm less than 1, it still goes to zero. But I'm having trouble working out the details in my head, and I'm too lazy right now to actually get a piece of paper and do the calculation out.

(no subject)

Date: 2007-11-12 09:52 pm (UTC)
From: [identity profile] rdore.livejournal.com
Consider a Jordan block with eigenvalue lambda, where norm(lambda) < 1.

If lambda = 0, then powers of the block will keep becoming more upper triangular with each power, so after n steps it will just be 0.

If lambda != 0, the entry k places above the diagonal of the p-th power is binomial(p, k) * lambda^(p-k). (This can be proven by induction on p.) The binomial coefficient grows at most polynomially in p, while |lambda|^(p-k) decays geometrically since norm(lambda) < 1, so the product, and hence every entry, goes to zero.
Edited Date: 2007-11-12 09:53 pm (UTC)
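This can be checked numerically; a minimal pure-Python sketch, assuming a 3x3 Jordan block with lambda = 0.9:

```python
# Even a non-diagonalizable Jordan block has powers going to zero when
# norm(lambda) < 1: each entry is binomial(p, k) * lambda^(p-k),
# polynomial growth beaten by geometric decay.
lam = 0.9
J = [[lam, 1.0, 0.0],
     [0.0, lam, 1.0],
     [0.0, 0.0, lam]]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

Jp = J
for _ in range(499):        # Jp ends up as J^500
    Jp = matmul(Jp, J)

# Every entry, including the off-diagonal ones, is now tiny.
assert all(abs(x) < 1e-15 for row in Jp for x in row)
```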
