Defining framerates and methods of describing framerate fluctuations *spawn

FPS means "frames per second", which is merely a rate, not "number of frames counted during a 1-second sampling interval."

I think this causes quite a bit of confusion. I don't think I've ever seen a real-time frame counter that simply displayed the number of frames that occurred during the last whole (non-rolling) one-second interval (be it DF or anywhere else), but that's probably what a lot of people would be expecting. Moving beyond frame rate for a moment ...

I've been thinking about ways you could express, in a digestible form, information that would otherwise require looking at the complete sequence of frame times and frame count.

I was wondering if you could develop a system of comparing frame time variance in subsequent frames, such that greater variance always lowers a single, easy-to-scan consistency value. You could then calculate it either for an entire clip or as a rolling mean to show if a video recording was in the middle of a rough patch. You could then compare different sections of a game, or even different games, to see how they hold up in terms of consistency.

I think it could also be interesting to look for patterns in inconsistency, to try and indicate how noticeable inconsistency might be; useful perhaps for people with different levels of tolerance for regular hiccups as opposed to unpredictable changes.
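
To make that concrete, here's a rough sketch of the sort of thing I mean, assuming the per-frame times (in milliseconds) have already been captured; all the names are made up:

```python
import statistics

def rolling_consistency(frame_times_ms, window=60):
    """One easy-to-scan value per window: 1.0 = perfectly even pacing,
    and any extra frame-time variance pulls the score down."""
    scores = []
    for i in range(len(frame_times_ms) - window + 1):
        chunk = frame_times_ms[i:i + window]
        spread_ms = statistics.pstdev(chunk)      # spread of frame times in the window
        scores.append(1.0 / (1.0 + spread_ms))    # more variance -> lower score
    return scores

def clip_consistency(frame_times_ms):
    """Single value for a whole capture rather than a rolling one."""
    return 1.0 / (1.0 + statistics.pstdev(frame_times_ms))
```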
 
No. A rate and an average are very different things. A rate can be, as used here, a measure of something (unique frames) against another measure (60 samples). An average is distilling a single value from a list of values. There is no list of values, there is just one: the number of unique frames.
I see where you're coming from, and I agree with the logic of what you're saying, but I'm not sure it's exclusively correct. When taking 60 frames and dividing by the number of seconds (a single second, as you suggest), that's not a (mean) average as it doesn't involve the number of samples. An average would be derived from taking the 'framerate' of each frame (so 60 fps for each 1/60th second frame, 30 fps for each doubled frame, 20 fps for each tripled frame) and then dividing by the number of samples. This would be a Weird Number. For one second of 60 Hz footage with one frame doubled, you'd have:

((58 * 60) + (1 * 30)) / 59,
or 58 lots of 60 fps + one lot of 30 fps / 59 samples

For a final average framerate of 59.49. That's clearly a bit bonkers. However, if you do it with frame intervals:

((1/60 * 58) + (1/30 * 1)) / 59

The mean average interval gives you an interval in seconds (0.0169...) equal to 59 fps.
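
For what it's worth, the two calculations check out (this is just the arithmetic from above, nothing more):

```python
# One second of 60 Hz footage with one frame doubled:
# 58 frames at 1/60 s each, 1 frame at 1/30 s.

avg_of_per_frame_fps = ((58 * 60) + (1 * 30)) / 59        # the "Weird Number"
print(avg_of_per_frame_fps)                                # 59.49...

avg_interval_s = ((58 * (1 / 60)) + (1 * (1 / 30))) / 59   # mean frame interval
print(avg_interval_s)                                      # 0.01694... seconds
print(1 / avg_interval_s)                                  # 59.0 fps
```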

If the number at the end is the same because the maths works out that way, I'm not sure it matters how one thinks about it. The real issue when this all started was that a framerate isn't particularly descriptive of all the qualities of the output, especially when it's a broad calculation of framerate that smooths over fluctuation. Although in the case of DF, they provide a graph of frame times so all the data is there in visual form.
 
What's up with all the mumbo jumbo?

Of course "fps" is not an average. But in gamer jargon, "fps" generally refers to, or implies, a sustained rate, where using an average frame rate is perfectly applicable.
 
I always felt this was a good explanation on FPS benchmarking...

The Methodology of Frame Testing, Distilled

How do you benchmark an onscreen experience? That question has plagued graphics card evaluations for years. While framerates give an accurate measurement of raw performance, there’s a lot more going on behind the scenes which a basic frames per second measurement by FRAPS or a similar application just can’t show. A good example of this is how “stuttering” can occur but may not be picked up by typical min/max/average benchmarking.

Before we go on, a basic explanation of FRAPS’ frames per second benchmarking method is important. FRAPS determines FPS rates by simply logging and averaging out how many frames are rendered within a single second. The average framerate measurement is taken by dividing the total number of rendered frames by the length of the benchmark being run. For example, if a 60 second sequence is used and the GPU renders 4,000 frames over the course of that time, the average result will be 66.67FPS. The minimum and maximum values meanwhile are simply two data points representing single second intervals which took the longest and shortest amount of time to render. Combining these values together gives an accurate, albeit very narrow snapshot of graphics subsystem performance and it isn’t quite representative of what you’ll actually see on the screen.

FCAT on the other hand has the capability to log onscreen average framerates for each second of a benchmark sequence, resulting in the “FPS over time” graphs. It does this by simply logging the reported framerate result once per second. However, in real world applications, a single second is actually a long period of time, meaning the human eye can pick up on onscreen deviations much quicker than this method can actually report them. So what can actually happen within each second of time? A whole lot, since each second of gameplay time can consist of dozens or even hundreds (if your graphics card is fast enough) of frames. This brings us to frame time testing and where the Frame Time Analysis Tool gets factored into this equation.

Frame times simply represent the length of time (in milliseconds) it takes the graphics card to render and display each individual frame. Measuring the interval between frames allows for a detailed millisecond-by-millisecond evaluation of frame times rather than averaging things out over a full second. The larger the frame time, the longer that frame took to render and display. This detailed reporting just isn’t possible with standard benchmark methods.
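
(Not from the article, just my own rough sketch of the two measurements it describes, with made-up names and numbers:)

```python
# 1) FRAPS-style summary: one frame count per elapsed second of a run.
frames_each_second = [67, 66, 70, 65, 58, 72]                     # illustrative numbers
average_fps = sum(frames_each_second) / len(frames_each_second)   # total frames / run length
minimum_fps = min(frames_each_second)                             # slowest single-second interval
maximum_fps = max(frames_each_second)                             # fastest single-second interval

# 2) Frame times: the interval between consecutive frame presentations, in milliseconds.
def frame_times_ms(present_timestamps_s):
    return [(t1 - t0) * 1000.0
            for t0, t1 in zip(present_timestamps_s, present_timestamps_s[1:])]

# A steady 60 fps stream gives ~16.7 ms per frame; a single 50 ms entry is a visible
# hitch that an average over a full second would largely hide.
```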
 
I've been thinking about ways you could express, in a digestible form, information that would otherwise require looking at the complete sequence of frame times and frame count.

I was wondering if you could develop a system of comparing frame time variance in subsequent frames, such that greater variance always lowers a single, easy-to-scan consistency value. You could then calculate it either for an entire clip or as a rolling mean to show if a video recording was in the middle of a rough patch. You could then compare different sections of a game, or even different games, to see how they hold up in terms of consistency.

I think it could also be interesting to look for patterns in inconsistency, to try and indicate how noticeable inconsistency might be; useful perhaps for people with different levels of tolerance for regular hiccups as opposed to unpredictable changes.
Hmm. Variances of all sorts are very easy to capture in the time domain; the hard part for this sort of thing would probably be deciding how to weight and distribute the impacts of various kinds of variance.

For instance, should the filtering note that fixed-interval stutters can be fairly annoying and weight such issues more strongly than stochastic/noisy/whatever issues? If so, it might actually make sense to create a signal with values at the points where frame time changes occur, and then take a Fourier transform to look for clustered frequency components. Of course, massaging that sort of thing into something useful would get complicated, and not just in terms of separating frequency data into readable results: over what intervals do you bother FFT'ing, how do you choose to use the results even if they're manageable by themselves, etc.
There's a big question here of how crazy you might want such an analysis tool to be :D
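
Something like this, perhaps, as a very rough NumPy sketch; the windowing, weighting and interpretation are exactly the complicated parts left open above, and treating the samples as evenly spaced at 60 Hz is an assumption:

```python
import numpy as np

def stutter_spectrum(frame_times_ms, sample_rate_hz=60.0):
    """Magnitude spectrum of frame-time deviations: a strong narrow peak hints at a
    regular, fixed-interval stutter; a flat, noisy spectrum hints at unpredictable hitches."""
    x = np.asarray(frame_times_ms, dtype=float)
    x -= x.mean()                                  # keep only deviations from the mean frame time
    magnitudes = np.abs(np.fft.rfft(x))            # strength of each frequency component
    freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
    return freqs, magnitudes
```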
 
I think the way PS360 (something or another, can't remember the name) did it on his YouTube videos was great. He had some rows of blocks indicating new or repeated frames (as well as torn ones), represented as colours, showing the frametime in relative terms, but visually easier to read than just a frametime graph.
 
I see where you're coming from, and I agree with the logic of what you're saying, but I'm not sure it's exclusively correct. When taking 60 frames and dividing by the number of seconds (a single second, as you suggest), that's not a (mean) average as it doesn't involve the number of samples.

I think we're still at cross purposes because I'm not suggesting "taking 60 frames and dividing by the number of seconds". If you have data that can easily be recorded and easily expressed, there is no excuse for butchering it with unnecessary calculations.

An average would be derived from taking the 'framerate' of each frame (so 60 fps for each 1/60th second frame, 30 fps for each doubled frame, 20 fps for each tripled frame) and then dividing by the number of samples. This would be a Weird Number.

What is "the 'framerate' of each frame" ? This entire sentence make my brain want to implode ;) Then you use that 'D' word again.

Fundamentally we're in different places. I think you're still trying to distill framerate into a single figure, and my position is that this is unnecessary and really isn't representative whether you use mean, mode or median. I want to know how often, either absolutely or relatively, a game renders at 1fps up to 30/60fps. When DF mention things like "we measured a drop to 17fps" but don't tell you how often, that's useless. How about 18fps or 19fps? When you know the exact distribution you're better informed about the actual performance.
 
I think we're still at cross purposes because I'm not suggesting "taking 60 frames and dividing by the number of seconds". If you have data that can easily be recorded and easily expressed, there is no excuse for butchering it with unnecessary calculations.
If you don't divide by the number of seconds, your result is literally not a measure of frames per second. The division by a number with unit "seconds" is what makes it "per second." Otherwise the number you get when you count a bunch of frames is just a number of frames.
 
I think we're still at cross purposes because I'm not suggesting "taking 60 frames and dividing by the number of seconds". If you have data that can easily be recorded and easily expressed, there is no excuse for butchering it with unnecessary calculations.
My vernacular is wrong - I understand your sentiments.

What is "the 'framerate' of each frame" ?
It's an expression of the number of frames you would have displayed in one second given a frame interval for every frame in that second equal to the one frame one's talking about. It might not be a phraseology you like, but there's nothing incorrect about it. A frame displayed for 16.66666 ms is equivalent to 60 fps, 33.3333 ms == 30 fps, 50 ms == 20 fps, etc.

When function said "dropping from 30 fps to 20 fps", he's saying "the interval between frames drops from 33.333 ms to 50 ms" and "the rate at which images are presented for display drops by a third", but of course focussing on a very small time slice. That very small time slice isn't necessarily the best way to think of framerate, but all representations have their pros and cons. The aim of the stats should be to get the most pros and fewest cons with the numbers being used, so they give an accurate (or inaccurate, if one's intending to mislead!) picture.
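
In code terms, the 'framerate' of each frame is just the reciprocal of its interval (a trivial sketch):

```python
def per_frame_fps(frame_interval_ms):
    """The rate implied if every frame in a second took this long to display."""
    return 1000.0 / frame_interval_ms

# per_frame_fps(16.667) ~= 60, per_frame_fps(33.333) ~= 30, per_frame_fps(50.0) == 20
```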

Fundamentally we're in different places. I think you're still trying to distill framerate into a single figure
No, because I'm not advocating a single figure. ;) People are using a single figure and I'm trying to explain why and how it can be used, but a single figure isn't a good way to cover all the detail.

I want to know how often, either absolutely or relatively, a game renders at 1fps up to 30/60fps. When DF mention things like "we measured a drop to 17fps" but don't tell you how often, that's useless. How about 18fps or 19fps? When you know the exact distribution you're better informed about the actual performance.
Right. But you can't do that with one number. I agree with that. However, before we can decide what numbers to present and how, we need to be reading from the same choir sheet as regards where people are getting their numbers from and how they're using them.
 
If you don't divide by the number of seconds, your result is literally not a measure of frames per second. The division by a number with unit "seconds" is what makes it "per second." Otherwise the number you get when you count a bunch of frames is just a number of frames.
Yes, although DSoup's point is more about the number of frames than the rate. Record a number of frames in one second and 'plot' that, not as a plot of rate but a plot of frame count. It's the same thing in the end, but he's wanting to nullify the time aspect.

It's an interesting discussion how different folk are approaching the same info. It's a shame we can't all meet up in a pub and have a fight over it.
 
It's an expression of the number of frames you would have displayed in one second given a frame interval for every frame in that second equal to the one frame one's talking about. It might not be a phraseology you like, but there's nothing incorrect about it. A frame displayed for 16.66666 ms is equivalent to 60 fps, 33.3333 ms == 30 fps, 50 ms == 20 fps, etc.
Ok, I get it, but my thoughts on expressing any form of "average" have been said enough. That said, I just think that the rate and pace at which frames are delivered are generally too variable to express as though the rate/pace trend is constant (or even predictable). I prefer to just accept it as chaotic.

When function said "dropping from 30 fps to 20 fps", he's saying "the interval between frames drops from 33.333 ms to 50 ms" and "the rate at which images are presented for display drops by a third", but of course focussing on a very small time slice. That very small time slice isn't necessarily the best way to think of framerate, but all representations have their pros and cons.

The explanation of function's statement is appreciated because that's been bugging me. And now I no longer have to send him a get well soon card for the brain aneurysm I thought he may have had! I agree with this. Looking at small slices is definitely worthwhile, particularly if you're focussing on an anomaly (a crazy low frame dip) in what is otherwise a largely stable distribution. But as a general representation, it's better to focus on the whole distribution.

Right. But you can't do that with one number. I agree with that. However, before we can decide what numbers to present and how, we need to be reading from the same choir sheet as regards where people are getting their numbers from and how they're using them.

Sampling 60 times a second (60Hz, or 60 frames a second), I'd be content with the first 60 samples comprising second number 1 and the number of unique frames being recorded, then the next 60 samples comprising second number 2 and the number of unique frames being recorded, and so on for however long.

The console and the person really have no concept of a second starting and stopping, so there's no 'right' frame to start with; it's just a stream of frames, and splitting them into 60 samples (one second) is only because frames per second is the accepted norm for a frame rate. At the end you show the distribution: the number of seconds in which the capture recorded 1fps (one unique frame over 60 captures), 2fps (two unique frames over 60 captures) and so on.

I've no bone of contention with mean, median and mode either - these are interesting, just not consistently an accurate representation.
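
As a rough sketch of that bookkeeping (assuming each of the 60 captures per second yields some identifier for the frame on screen, say a hash of the image, so repeats can be spotted; the names are made up):

```python
from collections import Counter

def fps_distribution(frame_ids, samples_per_second=60):
    """frame_ids: one entry per capture sample; a repeated id means a repeated frame.
    Returns e.g. {17: 3, ...} meaning 3 whole seconds showed 17 unique frames."""
    distribution = Counter()
    for start in range(0, len(frame_ids) - samples_per_second + 1, samples_per_second):
        second = frame_ids[start:start + samples_per_second]
        distribution[len(set(second))] += 1      # unique frames counted in this second
    return distribution
```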
 
Samples 60 times a second (60Hz or 60 frames a second), I'd be content with the first 60 samples comprising second number 1 and the number of unique frames being recorded, the next 60 samples comprise second number 2 and the number of unique frames is recorded and repeat for however long.

Nitpicky, but there's no point in taking 60 samples per second. You'd just take one sample over a full second and record the number of unique frames. 120 fps, 80 fps, 90 fps and 1000 fps would all return 60 unique frames per second if you were sampling in such a fashion. Not a problem on consoles, but it would readily show itself with PCs.

You could probably produce the mean, coefficient of variation and standard deviation and define minimum thresholds where you're likely to see stuttering or other issues, but I doubt there exists a "be-all" metric that makes it easy to parse out every frame rate issue, due to the variability in how they are produced.
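
For instance, over a captured list of frame times (just a sketch; where to put any thresholds is the hard, subjective part):

```python
import statistics

def pacing_stats(frame_times_ms):
    mean = statistics.mean(frame_times_ms)
    sd = statistics.pstdev(frame_times_ms)
    cv = sd / mean            # coefficient of variation: spread relative to the mean frame time
    return mean, sd, cv
```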
 
Nitpicky, but there's no point in taking 60 samples per second. You'd just take one sample over a full second and record the number of unique frames.

If you're only taking one sample over a second, how will you accurately record the exact number of unique frames, be it 25, 30, 52 or 60?

But I doubt there exists a "be-all" metric that makes it easy to parse out every frame rate issue, due to the variability in how they are produced.

I posted a way to measure frame pacing earlier in the thread. And it can be done at the same time using the same sampling methodology as I outlined for recording frame rate distribution.
 
If you're only taking one sample over a second, how will you accurately record the exact number of unique frames, be it 25, 30, 52 or 60?
Dobwal's saying the second interval is the sample period, in which you make a frame count not capped to 60 fps. Rather than sampling 60 times a second and having a binary test 'new frame or not' per sample, perform a one-second long sample that accumulates the frames.
 
Dobwal's saying the second interval is the sample period, in which you make a frame count not capped to 60 fps. Rather than sampling 60 times a second and having a binary test 'new frame or not' per sample, perform a one-second long sample that accumulates the frames.

I'm intrigued. How would this work over the course of a second, i.e. what exactly is happening?
 
Consider a scientist wanting to determine buttercup density in a field. He decides upon a one metre measure. He can then either:

1) Subdivide that one metre into 100 smaller squares and mark if there is or isn't a buttercup present in each of those squares. Each small square would be a binary sample, with one hundred samples making a one-metre measurement that'll range from 0 to 100.

2) Count how many buttercups in the whole one-metre square. Each large square is one sample with a range of values from 0 to however many buttercups you can fit.

Now replace the field with a video stream, and the buttercups with frames. The one metre measure is replaced with a one second measure. The scientist can now find frame density by either:

1) Subdivide that one second into 60 smaller intervals and mark if there is or isn't a new frame present in each of those intervals. Each small interval would be a binary sample, with 60 samples making a one-second measurement that'll range from 0 to 60.

2) Count how many unique frames in the whole one-second measure. Each whole second is one sample with a range of values from 0 to however many frames you can fit, which'll be 60 for TV-limited consoles and crazy high for uncapped PC.
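
A rough sketch of both options, assuming a list of frame presentation timestamps in seconds (the names are made up):

```python
def count_binary_samples(timestamps_s, second_start, subdivisions=60):
    """Option 1: 60 yes/no sub-interval samples; can never exceed `subdivisions`."""
    hits = 0
    for i in range(subdivisions):
        lo = second_start + i / subdivisions
        hi = second_start + (i + 1) / subdivisions
        if any(lo <= t < hi for t in timestamps_s):   # was a new frame presented in this slot?
            hits += 1
    return hits

def count_whole_second(timestamps_s, second_start):
    """Option 2: one sample spanning the whole second; counts every frame, uncapped."""
    return sum(1 for t in timestamps_s if second_start <= t < second_start + 1)
```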
 
1) Subdivide that one second into 60 smaller intervals and mark if there is or isn't a new frame present in each of those intervals. Each small interval would be a binary sample, with 60 samples making a one-second measurement that'll range from 0 to 60.

Well done, you're describing exactly the same approach as me. Grabbing 60 samples (frames) a second in order to achieve the granularity required to detect a maximum of 60 unique frames a second o_O
2) Count how many unique frames in the whole one-second measure. Each whole second is one sample with a range of values from 0 to however many frames you can fit, which'll be 60 for TV-limited consoles and crazy high for uncapped PC.

Yes, you're describing exactly what I described only you're changing the terminology I used. I used 60 samples (to describe the process of grabbing each frame for analysis). The result of that analysis is what I would call the measure (of unique frames per second) over those 60 samples. That measure is stored in the distribution table.
 
Well done, you're describing exactly the same approach as me. Grabbing 60 samples (frames) a second in order to achieve the granularity required to detect a maximum of 60 unique frames a second o_O
Yes, that's the one you're using.

Yes, you're describing exactly what I described only you're changing the terminology I used.
No, methods 1 and 2 are different. Option 1, your option, cannot count more than 60 frames in a measure. It consists of lots of binary samples. Option 2 will count however many there are without any limits. Your method has 60 samples per measure. Dobwal's method (used by others here) consists of one 'analogue' sample per measure. Of course, on a console output they both have exactly the same result - dobwal only raised a fault in your specific description because it can't count PC framerates > 60.
 
No, methods 1 and 2 are different. Option 1, your option, cannot count more than 60 frames in a measure.

No, I'm using 60 because function was specifically talking about a "60hz display" (re-read the first post). There's nothing about the capture approach, or counting of unique frames, that precludes it from working at 1,000 samples a second.

You can't just move to a new ballpark and call foul.
 
Hey, thinking about the TV signal restrictions of a 60Hz mode, I arrived at a question: have any devs tested running their game in the "1080p24Hz" mode? That mode actually runs the display at at least 72Hz (3 copies of the input) or 120Hz (5 copies). Maybe a 24Hz input with a higher refresh rate of 72Hz is better than 30fps with dropped frames, because when the game drops from 30 the display does not update. The frame pacing changes, so maybe games that have issues locking perfectly at 30 can have better pacing at 24Hz.
 