I was reading through the reviews and found this quote very interesting. It's from the Tech Report review.
The Radeon X1900 cards are significantly better performers than the Radeon X1800s that they replace. I am a little bit perplexed, though, by ATI's choices here. They've tied up roughly 60 million transistors and 50 square millimeters of die space on the R580 primarily in order to add pixel shading power, but even in ShaderMark, we didn't see anything approaching three times the performance of the R520. Would this chip have performed any differently in any of our benchmarks with just 32 pixel shader units onboard? Seems like it is limited, in its present form, by other factors, such as pixel fill rate, memory bandwidth, or perhaps even register space. Who knows? Perhaps the R580 will age well compared to its competitors as developers require additional shader power and use flow control more freely. I wouldn't be shocked to see future applications take better advantage of this GPU's pixel shading prowess, but no application that we tested was able to exploit it fully. For current usage models, NVIDIA's G70 architecture appears to be more efficient, clock for clock and transistor for transistor. Here's hoping that ATI's forward-looking design does indeed have a payoff down the road.
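For what it's worth, here's a back-of-the-envelope way to think about it (the workload split below is purely my own assumption, not something from the review): R580 triples the pixel shader array to 48 units versus R520's 16, but it still has the same 16 texture units and 16 ROPs and roughly the same memory bandwidth, so only the ALU portion of each frame actually gets faster. If, say, 40% of a frame's time is pure shader math and the other 60% is texturing, fill, and bandwidth, then tripling the ALUs buys you at most 1 / (0.60 + 0.40/3) ≈ 1.36x. The 40/60 split is made up, but it shows why tripling one part of the chip doesn't come close to tripling frame rates unless a game is almost completely ALU-bound.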
This is something I noticed as well. Where is the huge butt-kicking over the GTX 512 in games and apps that are clearly shader intensive? In past generations you could clearly see the advantages in fill rate and bandwidth over the previous "current crop" of cards when benchmarks were run.
In this case, however, the X1900s are barely eking out 10 FPS wins over the GTX even where shaders are heavy. Sometimes it basically ties.
Why doesn't it get a downright huge gain in 3DMark05 and 06 in the Shader Model 2 and 3 tests? What about ShaderMark? Why doesn't it get a huge gain in Source? Or in HDR rendering?
Any ideas?