> Ok I understand. But if you're that focused on the variations, you probably want to avoid dividing and deriving numbers; instead you just publish the tables of values (e.g. 1fps to 30/60fps) and show how many seconds in the capture period were at 30fps, 29fps, 28fps and so on.

I'm not sure you do; the issue here isn't how we record the distribution of "fps" values (i.e. as a value in the video, in a table, etc.), it's how we measure said "fps" values in the first place. The "dividing and deriving", as you call it, isn't some kind of extraneous filtering imposed upon already-measured "fps" values; it's how "fps" is measured from when frame boundaries occurred. You don't have the "fps" values without, as you call it, "dividing and deriving numbers."
//===============
Let's suppose that over the last second, we had a new frame at 100ms, at 200ms, at 300ms, at 400ms, at 700ms, and at 900ms.
One way to calculate fps is to say that 6 new frames occurred in the last 1 second, so 6/1 = 6fps. This gives us the general rate over the second, but it doesn't clearly spell out that we recently had a big drop.
Let's use a "sharper" filter instead, and measure framerate over half-second intervals. If we take a measurement at 500ms into the second, we note that 4 new frames showed up in that 0.5 seconds, 4/0.5 = 8fps. We can also take a measurement at the end of the second: 2 new frames showed up in that latter 0.5 seconds, 2/0.5 = 4fps. With this "sharper" filter we can see that the framerate was higher in the first half of the second than in the second half, which gives us a clearer view of the performance drop.
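To make the arithmetic concrete, here's a minimal sketch of that windowed measurement. The `fps_in_window` helper is hypothetical (not from any real library): it just counts the frame timestamps that fall inside a window and divides by the window length, which is exactly the "dividing and deriving" described above.

```python
def fps_in_window(frame_times, window_start, window_size):
    """Count frames whose timestamp falls in [window_start, window_start + window_size)
    and divide by the window length to get a frames-per-second figure."""
    count = sum(1 for t in frame_times
                if window_start <= t < window_start + window_size)
    return count / window_size

# Frame boundaries from the example, in seconds: 100ms, 200ms, 300ms, 400ms, 700ms, 900ms
frames = [0.1, 0.2, 0.3, 0.4, 0.7, 0.9]

print(fps_in_window(frames, 0.0, 1.0))  # 6 frames / 1.0 s -> 6.0 fps
print(fps_in_window(frames, 0.0, 0.5))  # 4 frames / 0.5 s -> 8.0 fps
print(fps_in_window(frames, 0.5, 0.5))  # 2 frames / 0.5 s -> 4.0 fps
```

Shrinking `window_size` sharpens the filter: the same six timestamps yield one smooth 6fps reading over the full second, but 8fps and 4fps over the two halves, exposing the drop.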
This is what I'm talking about.