2013-01-23

Things to think about when inverting a matrix

Recently I have been working on an interesting topic: the Laplace-Beltrami Operator (LBO). Along the way, I learned something about numerical computation that I had never thought about before. These issues are studied in computational mathematics, but I think any scientific programmer should know them. Here is the first thing I learned.

In theory, the product of a matrix and its inverse is an identity matrix. However, that is not always the case in the world of numerical computing. Take a look at this (in IPython).

In [1]: from numpy import *

In [2]: a=array([[6,9],[4,100]])

In [3]: dot(a, linalg.inv(a))
Out[3]: 
array([[  1.00000000e+00,   5.20417043e-18],
       [ -6.93889390e-17,   1.00000000e+00]])

Note that the result is not an identity matrix. This is because computers compute the matrix inverse numerically, in finite-precision floating-point arithmetic, so small rounding errors are inevitable. The effect usually shows up when some elements of the matrix are substantially larger or smaller than the rest.

If you are doing very precise computation involving the inverse, you may end up with an error you cannot ignore. If you are feeding the result into machine learning, the error may cause misclassification.
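One quick way to see how large the error actually is: measure the largest deviation of the computed product from the exact identity. A minimal sketch (plain Python, same matrix as above):

from numpy import *

a = array([[6, 9], [4, 100]])
# largest absolute deviation of a * inv(a) from the exact identity
print(abs(dot(a, linalg.inv(a)) - eye(2)).max())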

To estimate how big the error can be, people have invented various measures, one of which is known as the condition number (see Wikipedia, MathWorld, or the MATLAB documentation).
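NumPy computes the condition number directly via linalg.cond. A quick sketch for the matrix above; the larger the condition number, the more the inversion can amplify rounding error:

from numpy import *

a = array([[6, 9], [4, 100]])
# condition number in the 2-norm (the default); values near 1 mean
# well-conditioned, large values mean inversion amplifies error
print(linalg.cond(a))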

Here is a quick sanity check for a computed matrix inverse: multiply the determinant of the original matrix by the determinant of its computed inverse. Since the determinant of a product equals the product of the determinants, det(a) * det(inv(a)) should equal det(identity) = 1 exactly. Below, you will see that the result is closer to 1 when the elements of the matrix vary over a smaller range.

In [4]: linalg.det(a)*linalg.det(linalg.inv(a))
Out[4]: 0.99999999999999289

In [5]: a=array([[6,9],[4,10]])

In [6]: dot(a, linalg.inv(a))
Out[6]: 
array([[  1.00000000e+00,   0.00000000e+00],
       [ -5.55111512e-17,   1.00000000e+00]])

In [7]: linalg.det(a)*linalg.det(linalg.inv(a))
Out[7]: 0.99999999999999911

Therefore, it might be a good idea to avoid the matrix inverse if you have a workaround. If you have to use it, reporting the error alongside the result may be a good idea.
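For example, if what you actually need is inv(a) applied to some vector b, linalg.solve gives you the solution of a x = b directly, without ever forming the inverse, and is generally both faster and more accurate. A minimal sketch (the right-hand side b here is made up for illustration):

from numpy import *

a = array([[6, 9], [4, 100]])
b = array([1.0, 2.0])  # an arbitrary right-hand side, for illustration

x_inv = dot(linalg.inv(a), b)  # via the explicit inverse
x_solve = linalg.solve(a, b)   # solving a x = b directly (preferred)

print(allclose(x_inv, x_solve))  # True: the two agree to within rounding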

Okay, that's all for today. Now I need to revise my code to use an approach that avoids the matrix inverse. Tomorrow, we will discuss eigenvalue calculation.

Acknowledgment: Dr. Eric You Xu @ Google, who holds a bachelor's degree in math and a PhD in computer science, and who helped explain to me why an author who implemented the LBO in C (?) recommended against my using an approach that requires a matrix inverse.

2 comments:

Peter Zhou said...

Nice to see another NumPy user.
How do you like it compared to R or MATLAB?
I use it mainly because of its parallelization capability.

Forrest Bao said...

What do you mean by R or MATLAB's parallelization capability? Do you mean they automatically parallelize jobs onto multiple cores?