I just worked out something interesting about ranks and determinants

K.I.L.E.R

If you have a matrix M and, using Gaussian elimination, you bring it down to M^:
the rank remains the same under elementary row operations, and with that so does the determinant; however, the inverses of the two matrices have some differences.
I didn't actually get this from my book; I worked it out myself and I've just proven it.
I was originally thinking about how to make the fastest matrix inverse algorithm.
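For concreteness, here is a minimal Python sketch of the reduction I mean (the function name and the pivoting tolerance are my own choices): it brings a matrix to row echelon form using only row swaps and row additions, and reports the rank along with the number of swaps, since each swap flips the sign of the determinant.

def row_echelon(M):
    """Reduce a copy of M to row echelon form with partial pivoting.

    Returns (echelon form, rank, number of row swaps). Row additions
    leave the determinant alone; each swap flips its sign.
    """
    A = [row[:] for row in M]
    n, m = len(A), len(A[0])
    r, swaps = 0, 0                 # r = next pivot row = rank found so far
    for c in range(m):
        if r == n:
            break
        p = max(range(r, n), key=lambda i: abs(A[i][c]))
        if abs(A[p][c]) < 1e-12:    # no usable pivot in this column
            continue
        if p != r:
            A[r], A[p] = A[p], A[r]
            swaps += 1
        for i in range(r + 1, n):
            f = A[i][c] / A[r][c]
            for j in range(c, m):
                A[i][j] -= f * A[r][j]
        r += 1
    return A, r, swaps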

Basically I'm seeing a pattern between the two matrices.
If my suspicion holds true, then I can devise a much faster way of calculating inverses of matrices than is currently available.

I will continue to work through my proofs until I can find something I can take advantage of.
The fastest algorithm to calculate inverses is O(n^2.5).

How come no one has bested that yet?

Guys: Can someone tell me a good blog site where I can blog about maths?

Here's the deal:
Convert M into row echelon form (M^).
Find the determinant of M^, which is the product of the main diagonal; call it d.
Multiply the original matrix by 1/d.
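The first two steps can be made precise like this, reusing row_echelon() from the sketch above (so this snippet assumes that definition); the (-1)**swaps factor corrects for row interchanges. The last step is the one the replies below push back on.

from math import prod

def det_from_echelon(M):
    """Determinant of a square M read off the echelon form's diagonal."""
    A, rank, swaps = row_echelon(M)
    n = len(M)
    if rank < n:
        return 0.0                  # rank-deficient => singular
    return (-1) ** swaps * prod(A[i][i] for i in range(n))

# Note: M scaled by 1/d is generally NOT the inverse of M. For example,
# M = [[1, 1], [0, 1]] has d = 1, so M * (1/d) is just M again, while
# the actual inverse is [[1, -1], [0, 1]].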
 
K.I.L.E.R said:
Here's the deal:
Convert M into row echelon form (M^).
Find the determinant of M^, which is the product of the main diagonal; call it d.
Multiply the original matrix by 1/d.

Err... You got that backwards.
Let's take a regular matrix A and its inverse Â; then you can deduce that det(A)*det(Â) = det(A*Â) = 1, meaning det(Â) = 1/det(A). Note, however, that it generally doesn't work backwards. Take
1 1
0 1
for example.
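Here's a small numeric check of that identity (the example matrices are mine), plus the point about the converse: a matrix can have determinant 1/det(A) without being A's inverse.

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def inv2(M):
    d = det2(M)
    return [[ M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d,  M[0][0] / d]]

A = [[3.0, 1.0], [1.0, 1.0]]
print(det2(A) * det2(inv2(A)))  # -> 1.0, i.e. det(A)*det(Â) = det(A*Â) = 1

B = [[1.0, 1.0], [0.0, 1.0]]    # the matrix from the post above
I = [[1.0, 0.0], [0.0, 1.0]]
print(det2(B) == 1 / det2(I))   # True: det matches the identity's reciprocal...
# ...yet B is clearly not the inverse of I, so the implication only runs one way.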
 
I did the exercises and got them all right, so I know what I'm doing. :)
If I said it funny then sorry, but when I was doing the exercises I was right. :p
 
Umm, if I didn't understand your original post then sorry; care to correct me? ;)
The basic deal is this: matrix multiplication alone has n^3 complexity in the most generic implementation. Lower-complexity algorithms exist (I think they are down to around n^2.3), and it's speculated that it could go as low as n^2. Those complexities are asymptotic and practically not used unless you have a very big n (and even then the only algorithm frequently used is Strassen's).
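Since Strassen's came up: a minimal sketch of its seven-product recursion in Python (numpy assumed; sizes assumed to be powers of two, and the cutoff is a tuning knob real implementations sweat over). This is the n^log2(7), roughly n^2.81, algorithm; the ~n^2.3 bounds come from the Coppersmith-Winograd family, which is far less practical.

import numpy as np

def strassen(A, B, cutoff=64):
    """Multiply square matrices whose size is a power of two."""
    n = A.shape[0]
    if n <= cutoff:                         # fall back to the O(n^3) product
        return A @ B
    k = n // 2
    A11, A12, A21, A22 = A[:k, :k], A[:k, k:], A[k:, :k], A[k:, k:]
    B11, B12, B21, B22 = B[:k, :k], B[:k, k:], B[k:, :k], B[k:, k:]
    # Seven half-size products instead of eight -- the whole trick.
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:k, :k] = M1 + M4 - M5 + M7
    C[:k, k:] = M3 + M5
    C[k:, :k] = M2 + M4
    C[k:, k:] = M1 - M2 + M3 + M6
    return C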

So what has this to do with getting inverses?
To prove you actually got an inverse, you have to do a matrix multiplication. To satisfy being an inverse, all n^2 terms have to be satisfied. Common sense should tell you that computing the solution is at least as hard, if not harder, than just evaluating the terms.
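A sketch of that verification step (function names are mine): checking a claimed inverse means forming the full product, and the naive product is the triple loop below, with n^3 scalar multiplications.

def naive_matmul(A, B):
    """Plain O(n^3) product of two n x n matrices (lists of lists)."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def looks_like_inverse(A, B, tol=1e-9):
    """All n^2 entries of A*B must match the identity matrix."""
    n = len(A)
    C = naive_matmul(A, B)
    return all(abs(C[i][j] - (1.0 if i == j else 0.0)) <= tol
               for i in range(n) for j in range(n))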
 
Okay, I see where you're going.

A 2x2 matrix has a lot of operations too, unfortunately (see the sketch after this list):
2 divs.
7 muls.
2 adds.
3 movs.
2 sign changes.
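For comparison, a minimal 2x2 inverse via the adjugate; this particular arrangement comes to 1 division, 6 multiplications, 1 subtraction, and 2 sign changes, so the tally above presumably reflects a different code shape.

def inverse_2x2(a, b, c, d):
    """Inverse of [[a, b], [c, d]] via the adjugate formula."""
    det = a * d - b * c            # 2 muls, 1 sub
    if det == 0.0:
        raise ValueError("matrix is singular")
    s = 1.0 / det                  # 1 div
    return [[ d * s, -b * s],      # 4 muls, 2 sign changes:
            [-c * s,  a * s]]      # swap a/d, negate b/c, scale by 1/det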
 
Those complexities are asymptotic and practically not used unless you have a very big n (and even then the only algorithm frequently used is Strassen's).
And even then, algorithms like Strassen's, and I'd imagine Winograd's, are pretty severely unstable. The whole idea with them is really just that what would otherwise have been a series of dot products can also be obtained through linear combinations of sums and products of sums.

The only real hope for stability in large applications is counting on the idea that having lots of numbers means the errors will occasionally cancel each other out. I don't think it's any faster for n < 700 or something like that. Kind of like the way Cramer's rule is pretty darn slow in big-O terms, but still faster in practice than Gaussian elimination for small n... (n < 6 or so?)
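For what it's worth, here is Cramer's rule at n = 3 (a sketch; the crossover points quoted above are anecdotal). Each unknown is a ratio of determinants, so the work blows up quickly with n even though the code stays simple for small systems.

def det3(M):
    """3x3 determinant by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer_solve3(A, rhs):
    """Solve A x = rhs by Cramer's rule: x_j = det(A_j) / det(A),
    where A_j is A with column j replaced by the right-hand side."""
    d = det3(A)
    xs = []
    for j in range(3):
        Aj = [row[:] for row in A]
        for i in range(3):
            Aj[i][j] = rhs[i]
        xs.append(det3(Aj) / d)
    return xs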
 