I am reminded that this kind of convergence also tells you the steady-state of a Markov chain: which states are accessible from which states, and the limit probability of being at any given state. Of course, you'd never get the zero matrix in that case, since the rows always add up to 1.
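To illustrate the Markov-chain remark, here's a quick sketch in Python (the 2-state transition matrix is made up for the example):

```python
# Repeated powers of a transition matrix: every row converges to the
# stationary distribution of the chain (made-up 2-state example).

def matmul(a, b):
    """Multiply two square matrices given as lists of rows."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Rows sum to 1: from state 0, stay with prob 0.9; from state 1, leave with 0.5.
P = [[0.9, 0.1],
     [0.5, 0.5]]

Pk = P
for _ in range(100):
    Pk = matmul(Pk, P)

# Both rows approach the stationary distribution (5/6, 1/6).
print(Pk[0], Pk[1])
```

Both rows of the high power agree because the limiting probability of each state doesn't depend on where the chain starts, as long as every state is accessible from every other.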
Every nonsingular square matrix A can be written as A = P * D * P^(-1), where P is the matrix of unit eigenvectors,
and D has the eigenvalues on the diagonal. This is by the eigendecomposition theorem.
Next, we show by induction that A^k = P * D^k * P^(-1)
Case: k = 0
A^0 = I
= P * P^(-1)
= P * I * P^(-1)
= P * D^0 * P^(-1)
Case: k = n+1
A^(n+1) = A^(n) * A
= P * D^n * P^(-1) * A by induction hypothesis
= P * D^n * P^(-1) * P * D * P^(-1) by def of A
= P * D^n * I * D * P^(-1)
= P * D^n * D * P^(-1)
= P * D^(n+1) * P^(-1)
Now, note that if you have a diagonal matrix D, then D^k is also a diagonal matrix whose diagonal elements
are the k-th powers of the diagonal elements of D.
So if all of those diagonal elements have absolute values less than 1, then in the limit D^k will go to zero.
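A numerical sanity check of this argument, as a Python sketch (the matrix, its eigenvectors, and the eigenvalues 0.5 and 0.2 are made up for the example):

```python
# Check A^k = P * D^k * P^(-1) for a diagonalizable matrix whose
# eigenvalues (0.5 and 0.2, both of absolute value < 1) force A^k -> 0.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0.35, 0.15],
     [0.15, 0.35]]
P = [[1.0, 1.0],        # columns are the eigenvectors (1,1) and (1,-1)
     [1.0, -1.0]]
Pinv = [[0.5, 0.5],
        [0.5, -0.5]]

k = 30
Ak = A                  # A^k by repeated multiplication
for _ in range(k - 1):
    Ak = matmul(Ak, A)

# P * D^k * P^(-1): D^k just raises each eigenvalue to the k.
Dk = [[0.5 ** k, 0.0],
      [0.0, 0.2 ** k]]
Ak_eig = matmul(matmul(P, Dk), Pinv)

# The two computations agree, and the entries are shrinking toward zero.
assert all(abs(Ak[i][j] - Ak_eig[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert all(abs(Ak[i][j]) < 1e-8 for i in range(2) for j in range(2))
```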
Your example works because it's a singular matrix (i.e., it has a zero determinant). What I suggest still seems to work for the others. I guess you'll have to figure out something else to do for noninvertible matrices.
So, if B is diagonalizable (over the complex numbers), then B^p -> 0 is equivalent to all the eigenvalues having norm less than one.
If B isn't diagonalizable, you'd have to look at the Jordan normal form. I think if all the eigenvalues are norm less than 1, it still goes to zero. But I'm having trouble working out the details in my head, and I'm too lazy right now to actually get a piece of paper and do the calculation out.
Consider a Jordan block with eigenvalue lambda, where norm(lambda) < 1.
If lambda = 0, the block is nilpotent: each power pushes its nonzero entries one diagonal further above the main diagonal, so the n-th power of an n x n block is just 0.
If lambda != 0, the entry k places above the diagonal in the m-th power of the block is C(m,k) * lambda^(m-k), where C(m,k) is the binomial coefficient. (This can be proven by induction on m.) C(m,k) grows only polynomially in m, while lambda^(m-k) shrinks exponentially since norm(lambda) < 1, so each such entry goes to zero; and since k is at most n-1, every entry of the block's powers goes to zero.
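The claim is easy to check numerically; a Python sketch with a made-up block size and eigenvalue:

```python
# Powers of a single n x n Jordan block with eigenvalue lam, |lam| < 1,
# go to zero even though the block isn't diagonalizable.
# Block size and eigenvalue are made up for the illustration.

n, lam = 4, 0.9

def matmul(a, b):
    m = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(m)]
            for i in range(m)]

# Jordan block: lam on the diagonal, 1 just above it.
J = [[lam if i == j else (1.0 if j == i + 1 else 0.0) for j in range(n)]
     for i in range(n)]

Jm = J
for _ in range(499):
    Jm = matmul(Jm, J)   # Jm = J^500

# Entry k above the diagonal of J^m is C(m, k) * lam^(m - k):
# polynomial growth times exponential decay, so everything shrinks.
assert max(abs(x) for row in Jm for x in row) < 1e-12
```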
Compute the eigenvalues of the matrix. If they're all less than one, then you know that the matrix will go to zero.
The matrix
B =
 0.0000  2.0000
-1.0000  0.0000
has no real eigenvalues; its complex eigenvalues are +/- i*sqrt(2), of norm sqrt(2) > 1. (Any matrix that rotates and enlarges the magnitude of the vector will do too; e.g. kI * a rotation matrix, with k > 1.)
If we take powers:
B^11 = 2 x 2 matrix
0 -64
32 0
B^21 = 2 x 2 matrix
0 2048
-1024 0
etc.
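Those powers are easy to reproduce; a Python sketch:

```python
# Reproduce the powers quoted above. B^2 = -2I, so odd powers of B are
# (+/-) powers of 2 times B, and the entries blow up instead of shrinking.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def matpow(a, p):
    """a^p by repeated multiplication (p >= 1)."""
    out = a
    for _ in range(p - 1):
        out = matmul(out, a)
    return out

B = [[0, 2],
     [-1, 0]]

assert matpow(B, 11) == [[0, -64], [32, 0]]
assert matpow(B, 21) == [[0, 2048], [-1024, 0]]
```

The growth is consistent with the eigenvalue criterion: both complex eigenvalues have norm sqrt(2) > 1.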
The determinant of
 0.0000  2.0000
-1.0000  0.0000
is 2.
The matrix
1 1
0 1
is nonsingular but not diagonalizable. The only eigenvalue is 1, and the eigenspace for 1 is only 1-dimensional.
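A quick check of what powers of this matrix actually do, as a Python sketch:

```python
# The k-th power of the shear [[1, 1], [0, 1]] is [[1, k], [0, 1]]:
# even though the only eigenvalue is 1, the powers grow without bound,
# so at the boundary |lambda| = 1 the Jordan structure matters.

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S = [[1, 1],
     [0, 1]]

Sk = S
for k in range(2, 11):
    Sk = matmul(Sk, S)
    assert Sk == [[1, k], [0, 1]]
```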