Suppose you have a matrix M and, using Gaussian elimination, you reduce it to a row echelon form M^. Elementary row operations preserve the rank. The determinant is preserved too, as long as you only add multiples of one row to another (a row swap flips its sign, and scaling a row scales it by the same factor). The inverses of the two matrices, however, differ.
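To make those invariants concrete, here's a small sketch (pure Python with exact `Fraction` arithmetic; the 2x2 matrix and the helper name `det2` are just for illustration):

```python
from fractions import Fraction

def det2(m):
    """Determinant of a 2x2 matrix [[a, b], [c, d]] = a*d - b*c."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

M = [[Fraction(2), Fraction(1)],
     [Fraction(4), Fraction(3)]]

# Adding a multiple of one row to another: R2 <- R2 - 2*R1.
M_added = [M[0], [M[1][j] - 2 * M[0][j] for j in range(2)]]

# Swapping two rows: R1 <-> R2.
M_swapped = [M[1], M[0]]

print(det2(M))          # 2
print(det2(M_added))    # 2  -- unchanged by the row addition
print(det2(M_swapped))  # -2 -- sign flipped by the swap
```

Both operations leave the rank at 2, but only the row addition leaves the determinant alone.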
I didn't get this from my book; I worked it out myself and have just proven it.
I was originally thinking about how to build the fastest matrix-inversion algorithm.
Basically, I'm seeing a pattern between the two matrices.
If my suspicion holds true, then I can devise a much faster way of calculating matrix inverses than is currently available.
I will continue working through my proofs until I find something I can take advantage of.
The fastest known algorithms for computing inverses run in about O(n^2.37), the same exponent as fast matrix multiplication.
How come no one has bested that yet?
Guys: can someone recommend a good site where I can blog about maths?
Here's the deal:
1. Convert M into row echelon form (call it M^).
2. Find the determinant of M^; since M^ is triangular, it's the product of the main diagonal. Call it d.
3. Multiply the original matrix by 1/d.
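The first two steps can be sketched like this (a minimal pure-Python sketch; `echelon_and_det` and the example matrix are my own, and I track a sign for row swaps so the diagonal product actually matches det(M)):

```python
from fractions import Fraction

def echelon_and_det(M):
    """Reduce M to row echelon form and return (M_hat, d), where d is the
    product of the main diagonal of M_hat, sign-corrected for row swaps
    so that d equals det(M)."""
    A = [[Fraction(x) for x in row] for row in M]  # exact arithmetic, no float error
    n = len(A)
    sign = 1
    for col in range(n):
        # Find a usable pivot in this column; swap it up if needed.
        pivot = next((r for r in range(col, n) if A[r][col] != 0), None)
        if pivot is None:
            continue  # no pivot: matrix is singular in this column
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            sign = -sign  # each swap flips the determinant's sign
        # Eliminate below the pivot using only row additions (determinant-safe).
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            A[r] = [A[r][j] - factor * A[col][j] for j in range(n)]
    d = Fraction(1)
    for i in range(n):
        d *= A[i][i]
    return A, sign * d

M = [[2, 1, 1],
     [4, 3, 3],
     [8, 7, 9]]
M_hat, d = echelon_and_det(M)
print(d)  # 4, which is det(M)
```

Step 3 is then just scaling every entry of the original M by 1/d.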