Wouldn't it be the opposite, alby?
Because if you lower the speed the timings become tighter
I didn't say timings (how the memory is accessed by the northbridge), but I did say latency (how quickly information within the memory can be accessed by the CPU). I think this is where the confusion sets in...
Memory timings are effectively latency on the memory bus. Thus, increasing your timings from, say, 4-4-4-10 to 5-5-5-12 results in higher latency from the northbridge interface to the memory module being accessed. If everything else remains equal (FSB speed, memory speed, chipset timings), then higher memory bus latency means lower memory performance for the rest of the system.
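If it helps to see those timings as actual time instead of clock cycles, here's a quick back-of-the-envelope sketch in Python (just the example numbers from above; for DDR the bus clock is half the quoted data rate):

def timing_ns(cycles, data_rate_mt):
    clock_mhz = data_rate_mt / 2          # DDR bus clock is half the transfer rate
    return cycles * 1000.0 / clock_mhz    # one timing parameter, in nanoseconds

# Same DDR2-667 stick, tighter vs looser CAS (bus clock stays at ~333 MHz):
for cl in (4, 5):
    print("CL", cl, "=", round(timing_ns(cl, 667), 1), "ns")   # ~12.0 ns vs ~15.0 ns
# Looser timings at the same clock = more absolute latency, plain and simple.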
But Intel platforms have a front side bus -- the interface from CPU to northbridge. The latency here is set by the northbridge "performance value" (technically called tRD). While this bus serves as the primary I/O bottleneck for the entire system, you can do a bit of latency-hiding trickery if you get the external buses running fast enough.
So, if you ramp your memory bus from the lock-step 1:1 multiplier (a 333 MHz FSB getting 667 MHz RAM) to, let's say, a 1:2 multiplier (a 333 MHz FSB getting 1333 MHz RAM), then the timings at the memory bus end of things will be at least partially hidden, because the northbridge is acting as the memory controller.
Using my example: 4-4-4-10 timings on DDR2-667 vs 5-5-5-18 timings on DDR3-1333, both operating in dual channel on a 333 MHz FSB... The 1333 MHz kit, even though it has more relaxed timings, still has a solid chance of performing better simply because the memory bus is running at twice the speed, which effectively cuts the latency down to 2.5-2.5-2.5-9 in terms of absolute time.
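Same rough math applied to both kits from that example (purely illustrative numbers):

cycles_to_ns = lambda t, rate: t * 2000.0 / rate    # t cycles at a DDR data rate, in ns
ddr2 = [round(cycles_to_ns(t, 667), 1) for t in (4, 4, 4, 10)]
ddr3 = [round(cycles_to_ns(t, 1333), 1) for t in (5, 5, 5, 18)]
print(ddr2)   # ~[12.0, 12.0, 12.0, 30.0] ns for DDR2-667 at 4-4-4-10
print(ddr3)   # ~[7.5, 7.5, 7.5, 27.0] ns for DDR3-1333 at 5-5-5-18
# 5-5-5-18 at twice the clock really does behave like ~2.5-2.5-2.5-9 in DDR2-667 cycles.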
Obviously, simple math is your friend before assuming that turning up memory speed beyond that of the FSB is actually going to make a difference. Running RAM 20% faster (667 vs 800) at ~25% looser timings (4-4-4-10 vs 5-5-5-12) doesn't do you any good and just means more heat and power draw. And even when the math does work out in your favor, all you're really doing is reducing latency, which in terms of overall bandwidth isn't going to do a whole lot...
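And the 667-vs-800 case in the same terms, to show why it's a wash:

# CAS latency in absolute time: cycles * (2000 / DDR data rate) = ns
print(round(4 * 2000 / 667, 1), "ns")   # DDR2-667 at CAS 4 -> ~12.0 ns
print(round(5 * 2000 / 800, 1), "ns")   # DDR2-800 at CAS 5 -> 12.5 ns
# The ~20% clock bump is eaten by the ~25% looser timings -- if anything you come out slightly behind.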
So, a 100% increase in memory speed (like my last example of DDR3-1333) might net you a ~20-30% reduction in access time compared to DDR2-667 at 4-4-4-10, if both are on a 333 MHz FSB, and it will slightly increase your available bandwidth as well (~10% maybe?). Is that worth the extra heat and stress on your northbridge? I suppose it might be, especially if you're looking for every last ounce of performance.
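Rough sketch of why the whole-trip gain is smaller than the DRAM-side numbers suggest. The 30 ns I'm using for the fixed FSB/northbridge share of a full access is a made-up placeholder, and treating tRCD + CL as the whole DRAM-side cost is a simplification -- this is only to show the shape of the math:

FIXED_NS = 30.0                            # assumed FSB + northbridge (tRD etc.) share -- placeholder only
ddr2_dram = (4 + 4) * 2000 / 667           # tRCD + CL on DDR2-667  -> ~24 ns
ddr3_dram = (5 + 5) * 2000 / 1333          # tRCD + CL on DDR3-1333 -> ~15 ns
saving = 1 - (FIXED_NS + ddr3_dram) / (FIXED_NS + ddr2_dram)
print(round(saving * 100, 1), "%")         # ~17% off the full trip with these assumptions
# The DRAM-side part shrinks ~37%, but the fixed share dilutes it; where you land in the
# 10-30% range depends entirely on how big that fixed share really is on your board.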
If you're more interested in pure bandwidth numbers, you're left with turning the FSB up as high as you can get it stable and setting the memory to whatever speed makes sense alongside it.
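And for the bandwidth side, some rough peak numbers (64-bit quad-pumped FSB vs two 64-bit memory channels, ignoring protocol overhead):

def fsb_peak_gbs(fsb_mhz):
    return fsb_mhz * 1e6 * 4 * 8 / 1e9       # quad-pumped, 8 bytes per transfer

def dual_channel_peak_gbs(data_rate_mt):
    return data_rate_mt * 1e6 * 2 * 8 / 1e9  # two channels, 8 bytes per transfer

print(round(fsb_peak_gbs(333), 1), "GB/s")            # ~10.7 GB/s for a 333 MHz FSB
print(round(dual_channel_peak_gbs(667), 1), "GB/s")   # ~10.7 GB/s for dual-channel DDR2-667
# Dual-channel DDR2-667 already saturates a 333 MHz FSB, so faster memory mostly buys latency,
# not usable bandwidth -- raising the FSB itself is what raises the ceiling.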