Synthetic benchmarks like GLBenchmark attempt, in their own way, to predict what future games might look like. No mobile ISV would be insane enough to ship a game as demanding as GLBenchmark 2.5 today and leave the majority of users/devices staring at single-digit framerates.
The real point here is that Adrenos generally do extremely well with highly complex shaders and start to fall back as shader complexity shrinks. That is probably down to their still shaky driver/compiler not letting the GPUs reach the higher potential the hardware should actually have.
If you asked me as a user to measure two competing GPUs of any kind, I would use the most torturous synthetic stress tests along with as many real 3D games as possible, and definitely not just the best-case scenarios in those, and from the entire crop of results I'd attempt to reach a conclusion. Each result has its own merit; it just comes down to how well you're able to interpret them.
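To make that less hand-wavy, here is a minimal sketch of what "reaching a conclusion from the entire crop of results" could look like: normalize each test to a ratio between the two GPUs and combine them with a geometric mean, so one inflated framerate in a light scene can't dominate the verdict. The test names and numbers are hypothetical placeholders, not real measurements.

```c
/* Minimal sketch: aggregating a mixed crop of benchmark results into one
 * relative score via a geometric mean of per-test ratios. All names and
 * numbers below are hypothetical placeholders, not real measurements. */
#include <math.h>
#include <stdio.h>

struct result {
    const char *test;
    double fps_gpu_a;   /* average fps on GPU A */
    double fps_gpu_b;   /* average fps on GPU B */
};

int main(void)
{
    struct result results[] = {
        { "synthetic stress test", 18.0, 24.0 },  /* GPU-limited case */
        { "demanding game scene",  27.0, 31.0 },
        { "light game scene",     112.0, 98.0 },  /* near/above vsync, less telling */
    };
    const int n = (int)(sizeof results / sizeof results[0]);

    double log_sum = 0.0;
    for (int i = 0; i < n; ++i) {
        double ratio = results[i].fps_gpu_b / results[i].fps_gpu_a;
        printf("%-24s  B/A = %.2f\n", results[i].test, ratio);
        log_sum += log(ratio);
    }
    /* Geometric mean keeps a single huge framerate from skewing the result. */
    printf("overall (geomean)         B/A = %.2f\n", exp(log_sum / n));
    return 0;
}
```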
When a mobile device with vsync disabled runs any sort of 3D application way above the vsync limit (typically 60Hz), that performance headroom would be better invested in image-quality features such as multisampling and/or anisotropic filtering.
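As a rough sketch under OpenGL ES 2.0 assumptions, spending that headroom could look like the snippet below: requesting a 4x multisampled EGL surface and cranking anisotropic filtering where the GL_EXT_texture_filter_anisotropic extension is exposed. Display/context setup and error handling are omitted, and extension availability has to be checked on the target device.

```c
/* Rough sketch: asking EGL for a 4x multisampled surface and enabling
 * anisotropic filtering on an OpenGL ES 2.0 device. Context setup and
 * error handling are omitted; the anisotropy extension must be queried
 * at runtime since not every mobile GPU exposes it. */
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <string.h>

EGLConfig choose_msaa_config(EGLDisplay dpy)
{
    const EGLint attribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_RED_SIZE, 8, EGL_GREEN_SIZE, 8, EGL_BLUE_SIZE, 8,
        EGL_DEPTH_SIZE, 16,
        EGL_SAMPLE_BUFFERS, 1,   /* request a multisampled surface */
        EGL_SAMPLES, 4,          /* 4x MSAA */
        EGL_NONE
    };
    EGLConfig config;
    EGLint num_configs = 0;
    eglChooseConfig(dpy, attribs, &config, 1, &num_configs);
    return num_configs > 0 ? config : NULL;
}

void enable_max_anisotropy(GLuint texture)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    if (ext && strstr(ext, "GL_EXT_texture_filter_anisotropic")) {
        GLfloat max_aniso = 1.0f;
        glGetFloatv(GL_MAX_TEXTURE_MAX_ANISOTROPY_EXT, &max_aniso);
        glBindTexture(GL_TEXTURE_2D, texture);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAX_ANISOTROPY_EXT, max_aniso);
    }
}
```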
The unfortunate thing here is that the small-form-factor market, and especially 3D on it, is still too young. Games unfortunately don't ship with any benchmarking functions, even though their results would be far more representative than a handful of synthetic benchmarks. Assuming there were a healthy collection of game benchmarks available, would you rather look at the cases where GPUs average way beyond 100fps, or at something that drives the tested GPUs to their edge with average framerates of 20-30fps or even less? Or better, how would you suggest measuring and comparing different GPUs in such cases?