optimising code

K.I.L.E.R

A few questions along with my own summary about optimising code. Please correct me if I am wrong.

Optimising code is about having your code executed in the fastest possible way.
To do that, would I have to use the least possible code to achieve something?
Though I see a problem: there are many ways of achieving any one thing. What I am trying to say is that even if I use the least possible code to make my program do what it has to, it may still take more CPU cycles to do it.

So, what is the secret to optimising code?
 
There are so many facets that you can't ask for the whole thing at once:

-CPUs with caches want tight loops that stay in the cache, but sometimes you end up executing more code (yet running faster) due to fewer data cache misses.
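To make the data-cache point concrete, here's a minimal sketch (the grid array and its size are made up for illustration). Both functions compute the same sum, but the first walks memory sequentially while the second jumps a whole row's worth of ints per access and misses the cache far more often:

#define N 1024
int grid[N][N];

long sum_rows_first(void)
{
    long total = 0;
    for (int row = 0; row < N; row++)        /* walk each row in order...     */
        for (int col = 0; col < N; col++)    /* ...so accesses are sequential */
            total += grid[row][col];
    return total;
}

long sum_cols_first(void)
{
    long total = 0;
    for (int col = 0; col < N; col++)        /* column-first: each access  */
        for (int row = 0; row < N; row++)    /* strides N*sizeof(int) bytes */
            total += grid[row][col];
    return total;
}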

You can optimize for smaller code, or for faster code.
-Explicit loops (for (i = 0; i < 5; i++)) can be "unrolled" into 5 repetitions of the same code (larger, but faster, since no loop decisions need to be made).
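A minimal sketch of that trade, assuming a plain int array (the function names are mine):

int sum5(const int *data)
{
    int sum = 0;
    for (int i = 0; i < 5; i++)   /* rolled: tests and increments i each pass */
        sum += data[i];
    return sum;
}

int sum5_unrolled(const int *data)
{
    /* unrolled: more code, but no counter, test, or branch at all */
    return data[0] + data[1] + data[2] + data[3] + data[4];
}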

Putting common subexpressions into temporary variables can make things much quicker (particularly if you're doing a lot of dereferencing on a crappy compiler). It's especially useful to keep these dereferences out of tight loops.
e.g.
a = MyArray.MySubArray[3].a;
b = MyArray.MySubArray[3].b;
c = MyArray.MySubArray[3].c;
d = MyArray.MySubArray[3].d;
Could be improved by:
SubArrayType *pSubArray = &MyArray.MySubArray[3];
a = pSubArray->a;
b = pSubArray->b;
c = pSubArray->c;
d = pSubArray->d;
In this case, the address of MyArray.MySubArray[3] is computed only once; after that, each field access is just a small offset from the cached pointer.

If you know a, b, c, and d are in contiguous words, you can help the compiler along even more:
int *pInt = (int*)(&MyArray.MySubArray[3]);
a = *pInt++;
b = *pInt++;
c = *pInt++;
d = *pInt++;
Now the pointer arithmetic is straightforward and unambiguous to the code generator (though it's more difficult to read).

Sometimes a multiply is faster than a shift right/left (or sometimes the other way around).
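For instance, multiplying an unsigned value by a power of two and shifting it left give the same result, so you can write whichever runs faster on your target (many compilers will pick the faster form for you anyway); a tiny sketch:

unsigned times8_shift(unsigned x) { return x << 3; }   /* shift left by 3 = multiply by 8 */
unsigned times8_mul(unsigned x)   { return x * 8; }    /* compiler may emit a shift here  */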

The CPU architecture, the compiler, and even the application type have so much to do with it that somebody who's good at optimizing one setup may do things that are exactly wrong on another.

(P.S., if you're looking for a paper to write, there's a ton of information out there on optimization techniques to use as research. It doesn't compare to the buttered toast/cat investigation, though)
 
Quote:
"Optimising code is about having your code executed in the fastest possible way. To do that, would I have to use the least possible code to achieve something? Though I see a problem: there are many ways of achieving any one thing. What I am trying to say is that even if I use the least possible code to make my program do what it has to, it may still take more CPU cycles to do it."

Optimizing in general means improving your solution based on the context of the situation. Sometimes executing slower is worthwhile: if you're trying to reduce code size, taking out a few instructions might improve things even if they run slower. It's important to understand what you're looking at. It's not always about speed.

But when it is, yes, it's basically trading time vs space. You either chew a few cycles or cache some results.
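As a sketch of the "cache some results" side of that trade (the table size and names are just for illustration): spend some memory precomputing values once, then replace a repeated computation with a cheap table lookup.

#include <math.h>

static double sine_table[360];               /* costs 360 doubles of space */

void init_sine_table(void)
{
    for (int deg = 0; deg < 360; deg++)
        sine_table[deg] = sin(deg * 3.14159265358979 / 180.0);
}

double fast_sin_deg(int deg)                 /* saves a sin() call per use */
{
    return sine_table[deg % 360];            /* assumes deg >= 0           */
}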
 