I for one would love a benchmark that could tell you how often, and on what workload / shader mix, ATi and NVidia GPUs are able to keep their execution units running at high parallelism.
Imagine a benchmark that reflects either current or possible future game workloads and told you not only overall min and max fps, but at what load the GPU pipelines are operating over time.
So for instance, in Doom 3 on NV30 it might tell you that during timedemo 99 the GPU averaged 80% of its parallel processing units busy, indicating the architecture is well suited to the game and the driver optimisations are good - whereas if ATi only achieved 35% maximum parallel loading of its GPU's internal execution units, that would tell you they have either an inherent architectural problem with the game or a driver optimisation issue that may be correctable.
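In rough terms, all such a benchmark would need from the hardware is a per-frame sample of how busy the execution units were, which it could then average over the run alongside the usual fps numbers. Here's a minimal sketch of that aggregation, assuming a hypothetical driver counter - no such public query actually exists, so the counter and frame timings are simulated here purely so the sketch runs:

[code]
/* Sketch only: query_gpu_busy_percent() stands in for a hypothetical
   vendor-exposed counter reporting what fraction of the GPU's parallel
   execution units were busy since the last query.  Both it and
   render_frame() are simulated with rand() so this compiles and runs. */
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical utilisation counter (simulated: fake 30-90% readings). */
static double query_gpu_busy_percent(void)
{
    return 30.0 + 60.0 * rand() / (double)RAND_MAX;
}

/* Stand-in for rendering one frame; returns frame time in seconds
   (simulated: fake 10-30 ms frames). */
static double render_frame(void)
{
    return 0.010 + 0.020 * rand() / (double)RAND_MAX;
}

int main(void)
{
    const int frames = 2000;            /* e.g. one timedemo run */
    double busy_sum = 0.0, min_fps = 1e9, max_fps = 0.0;

    for (int i = 0; i < frames; i++) {
        double fps = 1.0 / render_frame();
        if (fps < min_fps) min_fps = fps;
        if (fps > max_fps) max_fps = fps;
        busy_sum += query_gpu_busy_percent();  /* one sample per frame */
    }

    printf("min fps %.1f, max fps %.1f, avg execution-unit load %.1f%%\n",
           min_fps, max_fps, busy_sum / frames);
    return 0;
}
[/code]

The hard part is obviously the counter itself, not the averaging - which is why the question below about whether the IHVs would need a chip redesign to expose it matters.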
Would such a thing be 1) possible (maybe only by ATi / NVidia themselves, and maybe only with a chip redesign to monitor the loading of GPU units), and 2) likely to be created outside ATi or NVidia by an independent trusted advisor?
Really, what I want to know much more precisely is how good or poor shader -> API -> driver optimisation is today, and how much potential for future improvement exists.