I just got to wondering about something NVidia said recently. Does anyone know - when you benchmark a GPU using 3DMark 2003 v340, comparing top-end ATi and NVidia cards - do NVidia manage to offload more "traditional" GPU workloads onto the CPU than ATi do? Is there a way of measuring this?
As 3DMark 2003 is GPU heavy, a big CPU might be running at less than full capacity at times. Has NVidia realised this and found a way to migrate some of the intended GPU workload to the CPU? When you run the same test on ATi vs NVidia hardware, does the CPU reach higher utilisation on the NVidia card, or is it impossible to tell?
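For what it's worth, a crude way to check would be to log CPU utilisation once a second while the benchmark runs on each card and compare the traces afterwards. Below is a minimal sketch of that idea, assuming Python with the psutil library is available (the output filename and ten-minute duration are just placeholders); Windows Performance Monitor logging would give you much the same data.

```python
# Rough sketch: sample overall and per-core CPU utilisation once a second
# while the benchmark runs and write it to a CSV, so the ATi and NVidia
# runs can be compared afterwards. psutil and the filename are assumptions.
import csv
import time

import psutil

OUTPUT_FILE = "cpu_utilisation_log.csv"   # hypothetical output file
SAMPLE_INTERVAL_S = 1.0                   # one sample per second


def log_cpu_utilisation(duration_s=600):
    """Log CPU utilisation for duration_s seconds (default ten minutes)."""
    with open(OUTPUT_FILE, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "total_percent", "per_core_percent"])
        start = time.time()
        while time.time() - start < duration_s:
            # cpu_percent blocks for the sample interval and returns the
            # utilisation over that window; percpu=True gives one value per core.
            per_core = psutil.cpu_percent(interval=SAMPLE_INTERVAL_S, percpu=True)
            total = sum(per_core) / len(per_core)
            writer.writerow([round(time.time() - start, 1), round(total, 1), per_core])


if __name__ == "__main__":
    # Start this just before launching the 3DMark run, once per card.
    log_cpu_utilisation()
```

If the NVidia run consistently shows noticeably higher average CPU load at the same settings, that would at least be consistent with work being shifted to the CPU, though it wouldn't prove what that work actually is.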
Do Futuremark have any position on IHVs altering the split of what gets done by the CPU and what gets assigned to the GPU? If you could re-balance 5-10% of the workload whenever the CPU isn't maxed out, that might give you quite a nice edge, I thought.