Hercules 9700 Extreme Overclocking - BS?

Nagorak:

Your example doesn't really make sense. You have to ask yourself, "What is 400 MHz?" To a GPU, absolute time doesn't mean anything unless the transistors can't switch fast enough and you get logic errors (which would likely cause artifacts or lockups). When the GPU gets data to render, it takes a fixed number of cycles to render it, regardless of clock speed (I'm assuming a fixed core/mem clock ratio).

There is no magical threshold where suddenly "choking" goes away. If the CPU is fast enough, and AGP isn't a factor, then 200 MHz core / 200 MHz mem will be exactly twice the speed of 100 MHz core / 100 MHz mem, assuming the latency settings are the same number of cycles. This is because the number of cycles needed to complete a given frame stays fixed. In reality, though, the CPU will sometimes limit things, the mem/core clock ratio isn't kept constant, and higher clock speeds usually need higher latencies. All of these make scaling LESS than ideal.
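To put the same point in rough numbers, here is a toy sketch, assuming a purely GPU-limited frame that always costs the same fixed number of core cycles (the cycle count is made up purely for illustration):

    # Toy model of the argument above (illustrative numbers only): if a frame
    # always costs a fixed number of GPU cycles, frame rate scales exactly
    # linearly with clock speed.
    CYCLES_PER_FRAME = 5_000_000  # hypothetical fixed cost of one frame

    def fps(core_clock_hz):
        """Frames per second when the GPU alone limits performance."""
        return core_clock_hz / CYCLES_PER_FRAME

    print(fps(100e6))  # 20.0 fps at 100 MHz core/mem
    print(fps(200e6))  # 40.0 fps at 200 MHz core/mem -- exactly twice as fast

Anything that breaks the assumptions (CPU limits, a changed core/mem ratio, looser latencies) can only pull the result below that line, not above it.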


demalion:

Better than linear may be possible, but statistically speaking that possibility won't arise in the lifetime of the universe, since a benchmark will undoubtedly average out any of these asynchronicities. Benchmarks are quite diverse in their data, especially with respect to how often they repeat the particular sequence that would cause the situation you mentioned. Furthermore, graphics cards are full of FIFOs to buffer these things out.

For a 2000 frame benchmark running at 60fps @ 1600x1200 on a standard 9700 PRO, we're talking about 11.5 billion pixels (assuming 3x overdraw), 10.8 billion cycles, up to 5 trillion bits of data from the memory. Now these numbers aren't on the order of moles in chemistry, but if you've done any statistical mechanics you'll know that even a few percent deviation is absolutely out of the question. It would be very difficult for you to even engineer a situation like you described.
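For reference, here is a rough back-of-the-envelope version of that arithmetic, assuming the stock 9700 PRO clocks of 325 MHz core and 310 MHz DDR memory on a 256-bit bus (which is where the bit count comes from):

    # Back-of-the-envelope check of the figures quoted above (assumed specs:
    # 325 MHz core, 310 MHz DDR memory on a 256-bit bus).
    frames = 2000
    fps = 60
    width, height, overdraw = 1600, 1200, 3

    seconds = frames / fps                        # ~33.3 s of rendering
    pixels = frames * width * height * overdraw   # ~11.5 billion pixels
    cycles = seconds * 325e6                      # ~10.8 billion core cycles
    bits = seconds * (310e6 * 2) * 256            # ~5e12 bits at peak memory bandwidth

    print(f"{pixels:.3g} pixels, {cycles:.3g} cycles, {bits:.3g} bits")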

Like Humus said, better than linear is simply out of the question. Something fishy is going on at Digit-Life.
 
Mintmaster:

Well, we aren't talking about random events but a repetitious task determined by an algorithm with some fixed characteristics. Simply counting pixels doesn't tell you the probability. Look again at that list of factors I said could fit into the scenario, and I think it clarifies why your pixel and data counts aren't meaningful for reflecting the probability by themselves. This isn't occurring purely on the CPU or the GPU, but in a system with some possibly fairly complex interactions, where synchronization is a definite issue.

That said, I personally think it's more likely that something is "fishy" with the article, but I simply don't agree that the possibility of validity can be dismissed as easily as it has been.
 