Things to think about when solving eigenvalue problems

Yesterday I talked about numerical problems with matrix inversion. Today I will talk about problems in eigenvalue computation, both numerical and analytical. These are all painful lessons I learned recently.

1. Is your matrix very sparse? Look at this example:
In [1]: from numpy import *

In [2]: a=array([[1,0,2],[0,0,3],[4,0,0]])

In [3]: a
Out[3]: 
array([[1, 0, 2],
       [0, 0, 3],
       [4, 0, 0]])

In [4]: linalg.inv(a)
---------------------------------------------------------------------------
LinAlgError                               Traceback (most recent call last)

/share/mindboggle/ in ()

/usr/lib/pymodules/python2.7/numpy/linalg/linalg.pyc in inv(a)
    443     """
    444     a, wrap = _makearray(a)
--> 445     return wrap(solve(a, identity(a.shape[0], dtype=a.dtype)))
    446 
    447 

/usr/lib/pymodules/python2.7/numpy/linalg/linalg.pyc in solve(a, b)
    326     results = lapack_routine(n_eq, n_rhs, a, n_eq, pivots, b, n_eq, 0)
    327     if results['info'] > 0:
--> 328         raise LinAlgError, 'Singular matrix'
    329     if one_eq:
    330         return wrap(b.ravel().astype(result_t))

LinAlgError: Singular matrix

A matrix is singular if it is not invertible (see MathWorld's "Singular Matrix" and Wikipedia's "Invertible matrix" entries).

If your matrix is highly sparse, it is likely that its determinant is \( 0 \). In that case, it may be difficult to solve its eigenvalue problem. (The eigenvalues of a matrix \( A \) are the solutions of the equation $$ \begin{equation} \det(A-\lambda I) = 0 \label{eigeneq} \end{equation} $$ though this equation may degenerate to \( 0=0 \) if the sparsity is high "enough".)
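A quick numerical check on the matrix from the session above (using NumPy's `linalg.det`; the variable name `a` is just the one from the session):

```python
import numpy as np

# The matrix from the session above: its second column is all zeros,
# so the determinant vanishes and the matrix is singular.
a = np.array([[1, 0, 2],
              [0, 0, 3],
              [4, 0, 0]], dtype=float)

det = np.linalg.det(a)
print(det)  # essentially 0, which is why inv() raised LinAlgError
```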

To avoid this, one idea is to add something to the matrix to "disrupt" its sparsity. For example, if the matrix is non-symmetric, you can add it to its transpose. (In my case, this is the approach of Levy [1].)
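I read this as adding the matrix to its transpose, \( A + A^T \), which both breaks the zero pattern and makes the matrix symmetric. A minimal sketch on the example above (my own reading, not necessarily Levy's exact construction):

```python
import numpy as np

a = np.array([[1, 0, 2],
              [0, 0, 3],
              [4, 0, 0]], dtype=float)

# Symmetrizing fills in the all-zero column, so the result
# is no longer singular.
b = a + a.T
print(np.linalg.det(b))  # about -18: nonzero, so b is invertible
```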

2. Do you need all the eigenvalues?
Sometimes you don't need all the solutions of Eq. \eqref{eigeneq}. If you consider eigenvalues as the fingerprints or DNA of a matrix, all you need are the few most important ones. Yes, like in SVD, PCA or ICA. So you probably just need the first few largest or smallest eigenvalues (I believe you can switch from one extreme to the other by changing the signs of all elements).
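In SciPy, for example, `scipy.sparse.linalg.eigsh` computes only `k` eigenpairs of a symmetric matrix instead of all of them (the matrix below is random made-up data, purely for illustration):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# A random symmetric matrix (made-up data for illustration only).
rng = np.random.default_rng(0)
m = rng.standard_normal((200, 200))
m = m + m.T

# Ask for the 5 largest-magnitude eigenvalues instead of all 200;
# which='SM' would request the smallest-magnitude ones instead.
vals, vecs = eigsh(m, k=5, which='LM')
print(vals.shape, vecs.shape)  # (5,) (200, 5)
```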

To the best of my understanding, numerical solvers do not solve Eq. \eqref{eigeneq} the way we do by hand. They use iterative algorithms like power iteration (a.k.a. Von Mises iteration) or its improved versions, such as the Lanczos algorithm for symmetric matrices and the Arnoldi algorithm for non-symmetric ones.
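As a rough illustration of the idea, here is a minimal power-iteration sketch of my own (assuming the matrix has a dominant eigenvalue and the start vector is not orthogonal to its eigenvector):

```python
import numpy as np

def power_iteration(a, n_iter=500):
    """Minimal power iteration: estimate the dominant eigenpair
    of a square matrix by repeatedly applying it to a vector."""
    v = np.ones(a.shape[0]) / np.sqrt(a.shape[0])
    for _ in range(n_iter):
        w = a @ v
        v = w / np.linalg.norm(w)  # renormalize each step
    # The Rayleigh quotient estimates the dominant eigenvalue.
    return v @ a @ v, v

a = np.array([[2.0, 1.0],
              [1.0, 3.0]])
lam, v = power_iteration(a)
print(lam)  # close to (5 + sqrt(5)) / 2 ~ 3.618, the largest eigenvalue
```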

I am not a math guy. Critiques and comments are welcome.

Acknowledgment: Dr. Eric You Xu @ Google, a bachelor of math and PhD of computer science, who helped explain to me why an author who implemented the LBO in C (?) did not recommend that I solve for all eigenvalues.

References:
[1] B. Levy, "Laplace-Beltrami eigenfunctions: towards an algorithm that understands geometry," in Proc. of Shape Modeling and Applications, p. 13, 2006.
