Originally Posted by Andrew Lauritzen
No argument with any of that, but I don't see how it relates to whether you report frame-latency-based measurements or FPS. I'm saying that those gamers who 'want 90' (and I personally always aim for 100 fps / 10 ms with my stuff) would be just as well served by saying they want 99% of frames faster than 11 ms or similar. This is mostly orthogonal to vsync.
Yes, I concur. What a 99% number doesn't show is uneven frame render times inside that target. If you're aiming for 10 ms, a 40% / 4 ms swing (e.g. a section renders as 10, 10, 10, 6, 6, 10, 6, 8, 10 ms but still reports a 99% time of 10 ms) might not be perceptible, but a comparable 40% swing at 22 ms might be (a section renders as 22, 22, 13, 13, 13, 22, 22, 22, 13, 13, 17, 22, 22, 13, 13, 22, 22 ms, yet the 99% time is still 22 ms). If you were looking for a 45 fps performance baseline, the 99% time of 22 ms would satisfy you, but the experience wouldn't, because of the variation hidden inside that number.
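To make the point concrete, here's a minimal sketch (the percentile method and the standard-deviation check are my assumptions for illustration, not a standard benchmarking tool): the uneven 22 ms trace above reports the same 99% frame time as a perfectly steady one, and only a dispersion measure exposes the difference.

```python
# Two frame-time traces can share the same 99th-percentile time
# while feeling very different to play.
import statistics

def p99(frame_times_ms):
    """99th-percentile frame time: 99% of frames finished at or under this time."""
    ordered = sorted(frame_times_ms)
    # 0-based index of the sample below which 99% of frames fall
    idx = max(0, int(len(ordered) * 0.99) - 1)
    return ordered[idx]

# Steady trace: every frame lands exactly on the 22 ms target.
steady = [22.0] * 17

# Uneven trace from the post: same 99% time, but frames swing between 13 and 22 ms.
uneven = [22, 22, 13, 13, 13, 22, 22, 22, 13, 13, 17, 22, 22, 13, 13, 22, 22]

print(p99(steady), p99(uneven))             # both report 22 ms
print(statistics.pstdev(steady))            # 0.0 -- no judder at all
print(round(statistics.pstdev(uneven), 1))  # 4.3 -- large swings hidden by the 99% number
```

So a single percentile figure would need to be paired with some measure of frame-to-frame variation to capture what the section actually felt like.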
I wonder how power-limiting technology will affect gaming performance. Right now GPUs clock down under full load to stay within TDP (yes, I know the marketing message is that they turbo up when load is light — same difference). Frame-rate limiting and vsync leave TDP headroom on the table; I wonder if the next step is power-aware geometry/AA/compute... though this may be off topic for this discussion.