Is the frame rate/time the one and only true measure of GPU performance?

Below2D

So, I often see people trying to use metrics such as TFLOPs to gauge a GPU's performance. What I frequently hear is that compute is only one metric and doesn't consider factors such as bandwidth, fill rate, ROPs, etc. It's an important metric, but just a single one that isn't representative of the total performance of a GPU.

In this case, is the frame rate or frame time the only way to measure the scalability of a given GPU? Assuming all things equal in a closed system running the same game at the same settings:

GPU A: 30fps or 33.33ms per frame

GPU B: 60fps or 16.66ms per frame

Is it safe to say GPU B is twice as performant/fast as GPU A in this particular scenario?

If yes, how do pixels come into the equation? If GPU A runs a game at 1080p and 60fps while GPU B runs it at 4K and 60fps, is GPU B 2x faster than GPU A? It is pushing 4x as many pixels (100% more per axis), but at the same rate. Obviously, downgrading the resolution to 1080p for GPU B might not make it run at 120fps, i.e. double GPU A's frame rate. If that's the case, do we simply rely on frame rate because resolution scaling is highly dependent on other factors, and thus isn't a reliable way to determine GPU performance?
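Working the numbers helps here. A minimal sketch of that arithmetic (plain Python, using the hypothetical figures from this post rather than any measured data): frame-time ratio at a fixed resolution, and pixels per second at a fixed frame rate.

```python
# Sketch of the arithmetic in the question above. The resolutions and frame
# rates are the hypothetical ones from the post, not benchmark results.

def frame_time_ms(fps: float) -> float:
    """Convert a frame rate into a per-frame time in milliseconds."""
    return 1000.0 / fps

def pixel_rate(width: int, height: int, fps: float) -> float:
    """Pixels produced per second at a given resolution and frame rate."""
    return width * height * fps

# Same workload, same settings: GPU A at 30 fps, GPU B at 60 fps.
print(frame_time_ms(30))                      # ~33.33 ms
print(frame_time_ms(60))                      # ~16.67 ms
print(frame_time_ms(30) / frame_time_ms(60))  # 2.0 -> B finishes frames 2x as fast

# Different resolutions at the same frame rate: 1080p60 vs 4K60.
a = pixel_rate(1920, 1080, 60)
b = pixel_rate(3840, 2160, 60)
print(b / a)                                  # 4.0 -> B pushes 4x the pixels per second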
 
I'm not exactly sure what you're trying to ask here, but frame time (or frame rate) is a specific measure of how long that hardware configuration takes to complete that particular software workload. If you change any variable in that entire mix, then what you are measuring also changes.

What I think you're getting at is that we do infer from those specific results when making determinations of how sub-variables (such as the GPU) might compare to others in a broader sense, but that really is just theory.

The reality is there is no such thing as a generic, objective measure of universal GPU performance. That's something fanboys may want to exist so they can argue over it, but it doesn't actually exist.

I'm not too clear on what you're referring to with the last point, but in terms of inferring from data, what we do know with confidence is that in games the GPU itself is typically not the sole determinant of how long it takes to render a single frame. We also know that doubling the frame rate typically depends more on other hardware (and even software) variables than doubling the resolution does.
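To illustrate that point, here is a toy bottleneck model with made-up numbers (not from any real game): if the CPU and GPU overlap work on successive frames, steady-state frame time is roughly whichever of the two is slower, so halving the GPU's per-frame cost does not halve the frame time once the CPU becomes the limit.

```python
# Toy pipelined CPU/GPU model, not a real profiler. All millisecond figures
# below are hypothetical and exist only to show the shape of the effect.

def frame_time(cpu_ms: float, gpu_ms: float) -> float:
    """Steady-state frame time when CPU and GPU work on successive frames in parallel."""
    return max(cpu_ms, gpu_ms)

cpu_ms = 12.0   # assumed CPU cost per frame
gpu_ms = 16.0   # assumed GPU cost per frame at some resolution

base    = frame_time(cpu_ms, gpu_ms)          # 16 ms -> ~62 fps, GPU-bound
doubled = frame_time(cpu_ms, gpu_ms / 2.0)    # 12 ms -> ~83 fps, now CPU-bound

print(base, doubled)  # a 2x faster GPU only yields ~1.33x the frame rate here
```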
 
The only thing you can say for sure is how long it takes GPU A and B to render the "same" workload. Specs like FLOPS or bandwidth are proxies at best; very often they aren't well correlated with performance, and they definitely don't scale linearly with it.
 
I've found that framerate is a very reliable way to measure the size of my penis
and if you look in the games forum's "What are you playing now" thread you'll understand why I'm currently playing Solitaire ;)
 