Mintmaster
Veteran
Nagorak:
Your example doesn't really make sense. You have to ask yourself "What is 400 MHz?" To a GPU, time doesn't mean anything unless your transistors can't switch fast enough and you get logic errors (which would likely cause artifacts or lockups). When the GPU gets data to render, it takes a fixed number of cycles to render it, regardless of clock speed (I'm assuming a fixed core/mem clock ratio).
There is no magical threshold where suddenly "choking" goes away. If the CPU is fast enough and AGP isn't a factor, then 200 MHz core / 200 MHz mem will be exactly twice the speed of 100 MHz core / 100 MHz mem, assuming the latency settings are the same number of cycles. This is because the number of cycles needed to complete a given frame stays fixed. In reality, though, the CPU will limit things sometimes, you don't keep the mem/core clock ratio constant, and higher clock speeds usually need higher latencies. These make things LESS than ideal.
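Put another way, here's a minimal sketch with made-up numbers (nothing measured): if a frame always costs the same number of GPU cycles, frame rate can only scale exactly linearly with the clock.

```python
def fps(core_mhz, cycles_per_frame):
    # cycles per frame is set by the workload, not by the clock
    return core_mhz * 1e6 / cycles_per_frame

CYCLES_PER_FRAME = 5_000_000  # made-up workload

print(fps(100, CYCLES_PER_FRAME))  # 20.0 fps
print(fps(200, CYCLES_PER_FRAME))  # 40.0 fps -- exactly 2x, never more
```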
demalion:
Better than linear may be possible, but statistically speaking that possibility won't arise in the lifetime of the universe, since a benchmark will undoubtedly average out any such asynchronicities. Benchmarks are quite diverse in their data, especially in terms of how they repeat any particular sequence that would cause the situation you mentioned. Furthermore, graphics cards are full of FIFOs to buffer these things out.
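Here's a toy sketch of that averaging effect (made-up frame times and stall sizes, nothing measured): the run-to-run spread of the total time shrinks as the frame count grows.

```python
import random

def benchmark_ms(frames, nominal_ms=16.7, stall_ms=5.0):
    # each frame pays its nominal cost plus some random stall
    return sum(nominal_ms + random.uniform(0.0, stall_ms) for _ in range(frames))

for n in (10, 2000):
    runs = [benchmark_ms(n) for _ in range(50)]
    mean = sum(runs) / len(runs)
    spread = (max(runs) - min(runs)) / mean
    print(f"{n:5d} frames: total varies by {spread:.2%} across runs")
```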
For a 2000 frame benchmark running at 60fps @ 1600x1200 on a standard 9700 PRO, we're talking about 11.5 billion pixels (assuming 3x overdraw), 10.8 billion cycles, up to 5 trillion bits of data from the memory. Now these numbers aren't on the order of moles in chemistry, but if you've done any statistical mechanics you'll know that even a few percent deviation is absolutely out of the question. It would be very difficult for you to even engineer a situation like you described.
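The back-of-the-envelope arithmetic behind those figures, assuming stock 9700 PRO numbers of roughly 325 MHz core and about 19.8 GB/s of peak memory bandwidth (256-bit bus at 310 MHz DDR):

```python
FRAMES        = 2000
FPS           = 60
WIDTH, HEIGHT = 1600, 1200
OVERDRAW      = 3
CORE_HZ       = 325e6    # assumed stock 9700 PRO core clock
MEM_BYTES_S   = 19.8e9   # assumed peak bandwidth: 256-bit bus @ 310 MHz DDR

seconds = FRAMES / FPS                        # ~33.3 s of rendering
pixels  = FRAMES * WIDTH * HEIGHT * OVERDRAW  # ~1.15e10 pixels
cycles  = seconds * CORE_HZ                   # ~1.08e10 core cycles
bits    = seconds * MEM_BYTES_S * 8           # ~5.3e12 bits at peak bandwidth

print(f"{pixels:.3g} pixels, {cycles:.3g} cycles, {bits:.3g} bits")
```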
Like Humus said, better than linear is simply out of the question. Something fishy is going on at Digit-Life.