In theory, the product of a matrix and its inverse is the identity matrix. In the world of numerical computing, however, that is not always the case. Take a look at this example (in IPython).
In : from numpy import *
In : a = array([[6, 9], [4, 100]])
In : dot(a, linalg.inv(a))
Out: array([[  1.00000000e+00,   5.20417043e-18],
            [ -6.93889390e-17,   1.00000000e+00]])
Note that the result is not exactly an identity matrix: the off-diagonal entries are tiny but nonzero. This is due to how computers compute the matrix inverse numerically, using finite-precision floating-point arithmetic. The error tends to be larger when some elements of the matrix are substantially larger or smaller than the rest.
If you are doing precision-sensitive computation with the inverse, you may end up with an error you cannot ignore. If you feed the result into a machine learning pipeline, the error may cause misclassification.
To estimate how big the error can be, people have invented several measures, the best known of which is the condition number (see Wikipedia, MathWorld, and the MATLAB documentation).
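NumPy exposes the condition number directly as `numpy.linalg.cond`. As a sketch, here is how you might compare the two matrices used in this post; a larger condition number suggests the inverse is less trustworthy:

```python
import numpy as np

# The matrix from the example above, with one entry (100) much larger
# than the rest.
a = np.array([[6.0, 9.0], [4.0, 100.0]])

# A better-scaled variant of the same matrix.
b = np.array([[6.0, 9.0], [4.0, 10.0]])

# np.linalg.cond returns the 2-norm condition number by default:
# the ratio of the largest to the smallest singular value.
print(np.linalg.cond(a))  # larger: inversion loses more precision
print(np.linalg.cond(b))  # smaller: inversion is better behaved
```

A perfectly conditioned matrix (e.g. the identity) has condition number 1; the larger the number, the more digits of accuracy you can expect to lose when inverting.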
Here is a quick sanity check for a matrix inverse: multiply the determinant of the original matrix by the determinant of its inverse. Since det(A) * det(inv(A)) = det(A * inv(A)) = det(I), you should ideally get exactly 1. Below, you will see that the result is closer to 1 when the elements of the matrix vary over a smaller range.
In : linalg.det(a)*linalg.det(linalg.inv(a))
Out: 0.99999999999999289
In : a = array([[6, 9], [4, 10]])
In : dot(a, linalg.inv(a))
Out: array([[  1.00000000e+00,   0.00000000e+00],
            [ -5.55111512e-17,   1.00000000e+00]])
In : linalg.det(a)*linalg.det(linalg.inv(a))
Out: 0.99999999999999911
Therefore, it is a good idea to avoid the matrix inverse if you have a workaround. If you must use it, reporting an error estimate along with the result may be a good idea.
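One common workaround: if you only need the inverse to solve a linear system A x = b, call `numpy.linalg.solve` instead of computing `inv(A)` and multiplying. A minimal sketch, reusing the matrix from the example above:

```python
import numpy as np

A = np.array([[6.0, 9.0], [4.0, 100.0]])
b = np.array([1.0, 2.0])

# The inverse-based way: compute inv(A) explicitly, then multiply.
x_inv = np.dot(np.linalg.inv(A), b)

# The workaround: solve the system directly via an LU factorization.
# This skips forming inv(A) and is generally faster and more
# numerically stable.
x_solve = np.linalg.solve(A, b)

print(np.allclose(x_inv, x_solve))  # the two answers agree here
```

For this well-behaved 2x2 matrix both routes give the same answer, but as the condition number grows, the direct solve typically retains more accuracy than the explicit inverse.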
Okay, that's all for today. I now need to revise my code to use an approach that avoids the matrix inverse. Tomorrow, we will discuss eigenvalue calculation.
Acknowledgment: Dr. Eric You Xu @ Google, who holds a bachelor's degree in math and a PhD in computer science, helped explain to me why an author who implemented LBO in C (?) advised me against using an approach that requires the matrix inverse.