Can you calculate REAL WORLD Specs with any accuracy?

An nVidia guy (David Kirk?) was saying that WGF2.0 concerns the API, not the hardware implementation, so the same commands are applied to pixels or vertices. The drivers then despatch those instructions to the hardware as needed, so you don't need US hardware to run a US API.
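Just to illustrate the idea (a toy sketch in Python with made-up names, nothing to do with any real driver): the API only defines one instruction set, and it's the driver that decides which units actually execute it.

# Hypothetical sketch - invented names, not real driver code.
# The point: the same instruction stream can be routed to unified
# or dedicated units, so the API doesn't dictate the hardware.
def dispatch(instructions, stage, hardware_has_unified_units):
    """Route one instruction stream to whatever units the hardware has."""
    if hardware_has_unified_units:
        # Unified-shader hardware: any free ALU array runs either stage.
        return f"run {len(instructions)} ops on a shared ALU array ({stage} work)"
    # Traditional hardware: route to the dedicated unit for this stage.
    if stage == "vertex":
        return f"run {len(instructions)} ops on dedicated vertex units"
    return f"run {len(instructions)} ops on dedicated pixel units"

# The same shader goes to very different hardware, and the API never cares.
shader = ["mul", "mad", "mad", "rsq", "mul"]
print(dispatch(shader, "vertex", hardware_has_unified_units=False))
print(dispatch(shader, "pixel", hardware_has_unified_units=True))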
 
Getting back to the original point, I'm not actually sure why there is so much concern over "real world" versus "theoretical" numbers.

I mean, obviously some people are uncomfortable with theoretical numbers, feeling that they're somehow dishonest when you'll never see that performance in reality.

But I find that kind of thinking a little bit pointless. The numbers are not provided so that you can be certain that every game you play will feature x-million polygons; they are provided to give you a rough idea of how powerful some bit of hardware is, relative to its competitors and its predecessors.

Certainly we should treat these numbers with healthy scepticism, and analyse and dissect them so we understand where they come from. But there is nothing inherently wrong with most of them, so long as we understand them in context.

In relative terms, provided the numbers have more or less the same maths behind them (even if that maths is flawed!) then they'll give some kind of valid basis for comparison. It is then up to us individually to decide which (if any) aspects of a platform really make it more interesting than another, and to rate accordingly. Personally that's not a purchasing decision (I'll base that on my usage of the machine, not its specification) but more a case of wanting to know which areas I need to put effort into, in order to make my code look as good as possible in comparison to stuff on other platforms. How do I hide a slight overall deficit in the number of polygons? Or make it look like I'm burning more fillrate than I really am?

The "real" numbers will vary for every game. They'll get better as time goes on, but I'd suggest that most games don't even try to shoot for the maximum possible, but simply try to get reasonable performance for a given set of art-assets. They may be "real" but they have little relationship to the potential of a particular bit of hardware. Games will vary wildly from scene to scene, let alone between titles.

We're also almost certainly now into the territory where the number of polygons or pixels being pushed is well beyond the ability of the average person to count as they're playing something. The key is how well they are used, and no benchmark will tell you that. A title might use its poly budget on lighting effects (rendering more stencils or shadowmaps) rather than higher-res models. That might give it a totally different polygon throughput to another title which does things differently. One may look way better than the other even though they have similar performance, or they may look the same but use very different amounts of power. Again, it's all about understanding the potential rather than predicting the result in great detail.

Knowing that an average title uses n million polygons tells me nothing about what to aim for in my own. Should I just shoot for average? How much more mileage is there if I try to do better? I'd rather shoot for the theoretical, and then develop an understanding for why I'm not achieving it.

If a performance level is doable, even in a contrived tech-demo, then that number is "real" enough to me. If it's marketing speak with no direct relation to the hardware then fair enough, please keep your numbers to yourself and let me read the manual... Otherwise I'd like to know what the hardware is truly capable of, not what someone thinks might be average in a title.

So I'm actually far happier with hearing what the theoretical numbers are, provided I also get to know exactly what calculation was used to generate them (no point comparing apples and oranges...). I know it may not correspond to anything I will see in my own code, but I can use them to judge for myself where the bottlenecks will be and which bits will need more optimisation than others. Or as an end-user I can take numbers which both describe the same thing and use them to decide which hardware has a potential advantage in a given area - though I think that's a poor way to make any kind of consumer-related decision...
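For what it's worth, the kind of calculation I mean is usually nothing fancy - something like this (all the figures below are made up, they don't correspond to any real hardware):

# Assumed, illustrative figures only.
core_clock_hz     = 500e6   # 500 MHz core clock
pixel_pipelines   = 16      # number of pixel pipes
vertex_units      = 8       # number of vertex units
cycles_per_vertex = 4       # assumed cost of a simple transform

# Peak numbers are just best-case per-clock rates multiplied out.
peak_fillrate    = pixel_pipelines * core_clock_hz                    # pixels/sec
peak_vertex_rate = vertex_units * core_clock_hz / cycles_per_vertex   # vertices/sec

print(f"Theoretical fillrate:    {peak_fillrate / 1e9:.1f} Gpixels/s")
print(f"Theoretical vertex rate: {peak_vertex_rate / 1e6:.0f} Mvertices/s")

If one platform's figure assumes one cycle per vertex and another assumes four, the two numbers aren't comparable until you know that - which is exactly why I want the calculation alongside the number.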

What they'll never tell me, whether they're "real" or "theory", is which console is better, which will have better games, or what the lottery numbers will be next week.
 