When we talk specs, we typically only talk about the specs handed out by the hardware manufacturer, but those specs are theoretical maximums.
For example, a next-gen console might be listed as being able to push 500 million ( random number ) polys per second. But we know full well that at 500 million polys per second we're talking about nothing more than wireframe models. So the question becomes: how many polys can it REALLY push in a real-time gaming application? ( That's the only thing that matters. )
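Just to put that paper figure in perspective, here's a quick back-of-envelope sketch. The numbers are made up ( the same "random" 500 million from above and a 60 fps target ), so treat it as illustration only, not a real spec:

```python
# Hypothetical back-of-envelope check: how a peak polygon figure shrinks
# once you spread it across frames. All numbers are placeholders for
# illustration, not real console specs.

peak_polys_per_sec = 500_000_000   # the "paper" spec (random number from above)
target_fps = 60                    # typical real-time target

polys_per_frame = peak_polys_per_sec / target_fps
print(f"Peak budget: {polys_per_frame:,.0f} polys per frame")
# ~8.3 million polys/frame -- and that's before texturing, lighting,
# shading, AI, physics, etc. start eating into the same hardware budget.
```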
The same scenario applies to the often debated Unified vs Standard shaders topic. * When I refer to Unified/Standard shaders I mean Xenos and RSX. *
The RSX does have higher theoretical maximums, but as we all know, standard shaders are inefficient. HOW inefficient? ( ATI claims 50-70% efficiency. ) Meanwhile, unified shaders have lower theoretical maximums but are ( we are told ) far more efficient ( apparently 95-100% ), meaning they can come much closer to actually hitting their theoretical maximums.
The whole issue becomes one of efficiency. I can't say for sure, but unified shaders COULD wind up being far more powerful in REAL WORLD application despite a lower theoretical maximum.
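Here's a rough sketch of that argument in numbers. The peak figures are invented placeholders just to show the shape of it; only the efficiency ranges ( 50-70% vs 95-100% ) come from the claims quoted above:

```python
# Rough sketch of the "efficiency beats peak" argument. The peak numbers
# below are invented placeholders; only the efficiency ranges come from
# the figures quoted in the post.

def effective_throughput(theoretical_peak, efficiency):
    """Real-world throughput = paper spec scaled by how much of it you can actually use."""
    return theoretical_peak * efficiency

standard_peak = 100.0   # hypothetical higher peak for standard (discrete) shaders
unified_peak = 80.0     # hypothetical lower peak for unified shaders

# Worst/best cases for each, using the claimed efficiency ranges
print("Standard:", effective_throughput(standard_peak, 0.50), "-",
      effective_throughput(standard_peak, 0.70))
print("Unified: ", effective_throughput(unified_peak, 0.95), "-",
      effective_throughput(unified_peak, 1.00))
# Standard: 50.0 - 70.0
# Unified:  76.0 - 80.0
# Despite the lower peak, the unified part can come out ahead in practice.
```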
These scenarios apply to pretty much any spec, so is there anything we can do? We're here at this forum to debate hardware, so being able to accurately gauge real-world performance is extremely important.
Do we simply have to wait for devs to tell us the kind of real-world performance they're getting from the games they're working on?