Shader Operations/s = Meaningful benchmark or not?

Acert93

Artist formerly known as Acert93
Legend
It was mentioned when the X360 MTV specs came out that Shader Operations per Second is a nearly meaningless statistic because of architecture differences and counting methods...

Yet we keep seeing this number thrown about, and it keeps growing from both sides like the darn tribbles from Star Trek! (PR at its best!)

So, those with insight into how shader ops are counted and how relevant they are to performance, please discuss. Give us the lowdown. This is an effort to prevent the Killzone effect (i.e. Killzone has infiltrated almost every thread... and now shader ops, or "shops" as Jaws called them, are doing the same!)

Anyhow, looking forward to some good feedback.

PS - If they ARE a meaningful benchmark, this would be a good place to discuss what the chips really do in this regard. This subject is spread out all over the place right now.
 
To test shader performance you'd have to run both GPUs through, say, at least 10 commonly used shaders and see what each can output, to get some idea.
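
Something like this in spirit, though only as a sketch: the "shaders" below are made-up CPU-side stand-ins (lambert_ish and specular_ish are invented names and workloads), so it just illustrates the shape of such a test. A real comparison would compile and time actual shader programs on each GPU.

Code:
/* Rough sketch of the "run N common shaders and compare output rates" idea,
 * using CPU-side stand-ins so it stays self-contained and runnable. */
#include <stdio.h>
#include <math.h>
#include <time.h>

typedef float (*kernel_fn)(float);

/* Two toy "shaders" standing in for a suite of ~10 common ones. */
static float lambert_ish(float x)  { return fmaxf(0.0f, x * 0.8f + 0.1f); }
static float specular_ish(float x) { return powf(fmaxf(0.0f, x), 16.0f); }

static double time_kernel(kernel_fn k, long iters) {
    volatile float sink = 0.0f;   /* keep the work from being optimized away */
    clock_t start = clock();
    for (long i = 0; i < iters; i++)
        sink += k((float)i / (float)iters);
    (void)sink;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void) {
    const long iters = 10 * 1000 * 1000;
    printf("lambert_ish : %.3f s for %ld evaluations\n", time_kernel(lambert_ish, iters), iters);
    printf("specular_ish: %.3f s for %ld evaluations\n", time_kernel(specular_ish, iters), iters);
    return 0;
}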

We need a 3dmark for consoles! ;) just kidding

It's probably very hard to compare the consoles just by rough numbers.
 
Like everything else, unless the ops are comparable it's a useless measurement.

A 4-element dot product is not equivalent to a 2-way FMAD.

Unless you know what each of the ops is, it's basically the same as zero information.
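
To put rough numbers on that point (the FLOP tallies in the comments are just the usual multiply-and-add bookkeeping, nothing official), here's a throwaway C illustration of two operations that might each be logged as "one shader op" despite doing very different amounts of work:

Code:
#include <stdio.h>

/* 4-element dot product: 4 multiplies + 3 adds = 7 FLOPs
 * (or 8 if you count it as 4 fused multiply-adds). */
static float dot4(const float a[4], const float b[4]) {
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3];
}

/* 2-way FMAD (fused multiply-add on a 2-vector): 2 multiplies + 2 adds = 4 FLOPs. */
static void fmad2(float out[2], const float a[2], const float b[2], const float c[2]) {
    out[0] = a[0]*b[0] + c[0];
    out[1] = a[1]*b[1] + c[1];
}

int main(void) {
    float a[4] = {1,2,3,4}, b[4] = {5,6,7,8};
    float x[2] = {1,2},     y[2] = {3,4},     z[2] = {5,6}, r[2];
    printf("dot4  -> %f       (7-8 FLOPs, typically logged as 1 'shader op')\n", dot4(a, b));
    fmad2(r, x, y, z);
    printf("fmad2 -> %f %f (4 FLOPs, also logged as 1 'shader op')\n", r[0], r[1]);
    return 0;
}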
 
In addition to the above, it's like counting 'FLOPs' without knowing whether they're 16-bit, 32-bit or 64-bit precision flops, etc... but anyway, FWIW:


1 shader op per cycle ~ 1 shader execution unit

1 shader execution unit ~ vector unit or scalar unit

e.g. ALU = 1 scalar unit + 4-way SIMD unit ~ 2 shader ops per cycle

It's simply a 'count' of the number of execution units, more specifically, shader execution units, AFAIK...
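
As a sketch of how that count works out in practice, here's the arithmetic with purely illustrative numbers (the 48 ALUs, 2 units per ALU, and 500 MHz below are assumptions for the example, not a quoted spec):

Code:
/* Counting "shader ops per second" as described above:
 * one op per cycle per shader execution unit, where an ALU made of
 * a scalar unit plus a 4-way vector unit counts as two units.
 * All numbers are hypothetical, for illustration only. */
#include <stdio.h>

int main(void) {
    const double num_alus    = 48.0;   /* hypothetical ALU count       */
    const double ops_per_alu = 2.0;    /* 1 scalar unit + 1 vec4 unit  */
    const double clock_hz    = 500e6;  /* hypothetical 500 MHz clock   */

    double shader_ops_per_sec = num_alus * ops_per_alu * clock_hz;
    printf("%.1f billion shader ops/s\n", shader_ops_per_sec / 1e9);

    /* Note the number says nothing about what each op actually is:
     * the vec4 unit might be doing a full dot product or a single add,
     * and both count the same. */
    return 0;
}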
 
I really do think you can forget shader ops as a method of comparing architectures' performance.

Jawed
 
Shader OPs/second are about as helpful a performance metric as FLOPs, MHz, and maximum memory bandwidth estimates... which is to say THEY MEAN ABSOLUTELY NOTHING IN REAL WORLD APPLICATIONS.

There are a lot of other things that can influence performance, and those are the details we don't have much information on. Nothing is equal here; there are significant differences between the XBox360 and the PS3 in their hardware architectures. System power is a conglomeration of many different elements, both hardware AND software... and a lot of people seem to have forgotten how important hardware efficiency and software tools are.

Theoretically both machines are about the same in power, but there are a lot of details that would indicate otherwise... the XBox360 is FAR more efficient and flexible than the Playstation 3, AND the XBox360 has FAR better software tools than the Playstation 3. If everything else were equal (which it is not), the more efficient machine would be the more powerful one; and if both machines were at about the same level of power (which they are not), then the machine with the better software would be the more powerful one.

I would suggest everyone take their FLOPs, shader OPs, MHz ratings, and memory bandwidth figures and flush them down the nearest toilet facility.

The truth is in the details...

The GameMaster...
 
Ummm... GameMaster... show me the proof that the 360 tools are significantly better than the PS3 tools... especially with Sony going with a lot of open source standards and getting other companies to help out with their tools... Also, once again, prove to me that the 360 is more efficient... oh wait, you can't, because neither is final spec, correct? Please don't make assumptions based on one previous generation of machines... Remember, the PS1 was incredibly easy to program for.
 