On TechReport's frame latency measurement and why gamers should care

Discussion in '3D Hardware, Software & Output Devices' started by Andrew Lauritzen, Jan 1, 2013.

  1. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    I mean frame rate as in average FPS.

    Let's say your typical frame takes 20ms, but you frequently have 100ms spikes. For the sake of argument, let's assume that AMD can completely solve the problem that causes this. Then each of your 100ms spikes is replaced by five 20ms frames.

    If those spikes happen to be somewhat frequent, this can translate into a measurable increase in average frame rate.

    That's the idea, anyway.
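
    As a rough back-of-the-envelope check of that arithmetic (a minimal sketch with made-up numbers matching the example above):

    Code:
    # Hypothetical trace: mostly 20 ms frames, with occasional 100 ms spikes.
    spiky = [20.0] * 95 + [100.0] * 5     # frame times in ms
    fixed = [20.0] * 95 + [20.0] * 25     # each 100 ms spike replaced by five 20 ms frames

    def avg_fps(frame_times_ms):
        # Average FPS over the run = frames rendered / total wall-clock time in seconds.
        return len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)

    print(avg_fps(spiky))   # ~41.7 FPS
    print(avg_fps(fixed))   # 50.0 FPS - so fixing the spikes does show up in plain average FPS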
     
  2. caveman-jim

    Regular

    Joined:
    Sep 19, 2005
    Messages:
    305
    Likes Received:
    0
    Location:
    Austin, TX
    Right, so you'd decrease the difference between the 99th and 99.9th percentile frame times.

    How do you determine typical frame render time?
     
  3. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,436
    Likes Received:
    264
    Yes, the z buffer.
     
  4. silent_guy

    Veteran Subscriber

    Joined:
    Mar 7, 2006
    Messages:
    3,754
    Likes Received:
    1,379
  5. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Just to add, here's a run with my 7970 CFX in the King of Chinatown level, following the exact same route as TR (start the benchmark > open the door > take the path on the right, continue through the market towards the crane > then walk a little along the left side).

    The first graph is the frame times, the second is the FPS (I see strange things there), and the last is the rank, or 99th percentile.

    1920x1080, all ultra, 4xMSAA, every setting enabled at max level (including FXAA, bloom, etc.).

    (Of course, keep in mind the average frame time is really different, around 7 ms.) Seriously, I can live with frame times like that.

    [Images: frame time, FPS, and 99th percentile graphs]


    Now here's a different result from a different place (Club Vixen). A bit worse.

    [Image: Club Vixen frame time graph]

    Note that the biggest difference between the longest and shortest frame times is only about 4 ms.
     
    #65 lanek, Jan 4, 2013
    Last edited by a moderator: Jan 4, 2013
  6. almighty

    Banned

    Joined:
    Dec 17, 2006
    Messages:
    2,469
    Likes Received:
    5
    You can actually reduce the latency on AMD cards quite a lot with RadeonPro.
     
  7. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Sure, but mostly I was wondering whether solving the spikes problem might also result in measurably improved benchmarks for websites that do not measure frame times, only average FPS.

    I don't know, geometric mean? Or if there's a long "flat" region in the graph with a few spikes, the height of the flat region would be the typical frame render time.
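
    For what it's worth, a minimal sketch of both ideas on a list of frame times (the geometric mean versus the "flat region" height, here crudely approximated by the median):

    Code:
    import math
    import statistics

    # Hypothetical frame times (ms): a flat ~20 ms baseline with a few spikes.
    frame_ms = [20.0] * 95 + [100.0] * 5

    geo_mean = math.exp(sum(math.log(t) for t in frame_ms) / len(frame_ms))
    flat_ish = statistics.median(frame_ms)   # a crude stand-in for the flat region's height

    print(round(geo_mean, 1))   # ~21.7 ms - still pulled up a little by the spikes
    print(flat_ish)             # 20.0 ms - the median ignores a handful of outliers entirely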
     
  8. Gipsel

    Veteran

    Joined:
    Jan 4, 2010
    Messages:
    1,620
    Likes Received:
    264
    Location:
    Hamburg, Germany
    The interesting thing is that in the video he says the testbed was running with two GTX 670s in SLI. It appears to me that the uneven frame partitions they show as examples (like the intermediate frame only 10 pixels high in Sleeping Dogs, or in Unigine Heaven) were captured with that setup. Let's see what comes out of that in the coming months when they compare different cards and setups.
     
  9. caveman-jim

    Regular

    Joined:
    Sep 19, 2005
    Messages:
    305
    Likes Received:
    0
    Location:
    Austin, TX
    By what means, framerate limiting?

    Sure, from a statistics PoV, if you remove the long frame times you'll decrease the average frame time, meaning a higher FPS. The more peaks you remove, the bigger the effect on the average, and the taller the peaks, the bigger still. Thinking of the TR Skyrim case: removing the spikes to smooth it out would increase average FPS, but the two cards were close anyway, so I doubt it would become a clear win unless the smoothness itself were noted; and at that point we're in the post-average-FPS era.

    At some point it'll stop making sense to only test with vsync off; people will want to know how smooth games are with vsync and frame-limiting enhancements enabled. At that point it'll be interesting to see how much a highly overclocked, high-dollar CPU matters, if we're not worried about bottlenecking maximum frame rate anymore.
     
  10. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,122
    Likes Received:
    2,873
    Location:
    Well within 3d
    It could also not have much of an effect once you average things out over time.
    The Borderlands 2 graph had spikes that were just as much longer than average as the successive frames were faster.

    As interesting as the frame latency debate is, it looks more like frame time consistency is one more data point for consideration. We haven't quite moved beyond getting good FPS, just yet.
    Getting a better handle on what counts as a perceptible variation in frame time would be helpful.
    Additionally, we may need an analysis to determine if other factors can contribute to the problem, and whether there is a noise floor of X ms where other random factors can obscure hitches like this.

    For example, would a lower-quality LCD with noticeable ghosting wind up smoothing some of the effect away because even if the frames were on time they wind up being mixed in with prior pixel state anyway?
    What percentage of users actually notice this?
     
  11. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    I agree, but the thresholds will definitely be different for different people, which is why the percentile graph that Scott uses is actually the most interesting, IMHO. A "time spent beyond 50 ms" threshold is actually a pretty generous metric, as is the 99th percentile frame time (at 60 fps, that can still allow a spike of arbitrary magnitude roughly every 2 seconds!).
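
    To make those two metrics concrete, a minimal sketch of how they might be computed from a list of frame times (not TR's actual code, just the obvious interpretation):

    Code:
    # Hypothetical frame times in ms (e.g. differenced FRAPS timestamps):
    # roughly 10 seconds of 60 fps with five big spikes mixed in.
    frame_ms = [16.7] * 595 + [120.0] * 5

    def percentile_99(times):
        # The frame time that 99% of frames come in under.
        s = sorted(times)
        return s[int(0.99 * len(s)) - 1]

    def time_beyond(times, threshold_ms=50.0):
        # Total time spent on the portion of frames exceeding the threshold.
        return sum(t - threshold_ms for t in times if t > threshold_ms)

    print(percentile_99(frame_ms))   # 16.7 ms: five spikes in 600 frames slip under the 1% budget
    print(time_beyond(frame_ms))     # 350.0 ms spent beyond 50 ms - the spikes still show up here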

    Personally, I find jitters fairly obvious - that is to say that when they are frequent enough the game appears to be running at a lower frame rate than benchmarking claims. HardOCP has mentioned this in a pile of their articles (i.e. "the numbers say this, but I can tell you this one feels a lot smoother") and lots of people have noted issues with crossfire/SLI which obviously have another magnitude of this problem.

    So yes, I definitely want to see some research into what are good thresholds and metrics, but the point in my post is really to note that "spikes matter because they affect the simulation/backpressure".

    It's not really possible to generalize to that level... it's all a matter of magnitude. "Most" people wouldn't notice or mind the odd small spike every minute or two, but spikes that come every couple dozen frames start to significantly affect the perception of smooth motion. It'd be easy enough to make a simple program to show the visual effects of various performance profiles in practice - you could even have it pull in a FRAPS csv as the input :) Might be worth doing, but ultimately this is going to end up being more of a psychology study, and those are a bit tough to do properly.
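
    Something like this could be a starting point for such a program (a minimal sketch, assuming a hypothetical frametimes.csv with one frame interval in milliseconds per line, e.g. differenced FRAPS timestamps; pygame is used just for the window):

    Code:
    import csv
    import time
    import pygame

    # Hypothetical input: one frame interval (ms) per line.
    with open("frametimes.csv") as f:
        frame_ms = [float(row[0]) for row in csv.reader(f) if row]

    pygame.init()
    screen = pygame.display.set_mode((800, 200))
    x = 0.0
    for dt in frame_ms:
        for event in pygame.event.get():
            if event.type == pygame.QUIT:
                raise SystemExit
        # Move at a constant speed per millisecond of game time, so any hitch in
        # the recorded intervals shows up as a visible jump of the square.
        x = (x + 0.3 * dt) % 800
        screen.fill((0, 0, 0))
        pygame.draw.rect(screen, (255, 255, 255), (int(x), 80, 40, 40))
        pygame.display.flip()
        time.sleep(dt / 1000.0)   # crude pacing: just replay the recorded cadence
    pygame.quit()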

    Yup, but that's because the performance itself varies just as much! I'd be surprised if any "enthusiast" gamer (i.e. the type that would read these benchmarks anyway) wasn't well aware of the fact that perceived game smoothness in different locations is much more variable than FPS really seems to indicate.

    But certainly I expect this to shine a light on the fact that games are more variable from level to level (and SP vs. MP) than they even might be between games these days, so finding relevant tests for your usage model is going to be more important if you want to *guarantee* a good experience.

    Yes, you are measuring the intervals between frames - that's the point. In fact, that's the entire reason it doesn't really matter where in the frame you measure, as long as it's the same place every frame (and you then difference against that same timestamp from the last frame to get the elapsed time). If you did anything else, you'd drift away from wall-clock time. So yeah, it really doesn't make a huge difference where in the frame you take your timestamp... everything from then until the exact same point in the next frame will be included in the delta.
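
    In code, the measurement amounts to something like this (a minimal sketch with a dummy workload standing in for the real frame):

    Code:
    import random
    import time

    def update_and_render():
        # Stand-in for the real frame work (simulate, submit draw calls, Present()).
        time.sleep(random.uniform(0.010, 0.030))

    prev = time.perf_counter()
    deltas_ms = []
    for _ in range(100):
        update_and_render()
        now = time.perf_counter()                  # sampled at the same place every frame
        deltas_ms.append((now - prev) * 1000.0)
        prev = now

    # The deltas always sum to the total wall-clock time of the run, so no elapsed
    # time can be "lost" by moving the sample point elsewhere in the loop.
    print(sum(deltas_ms))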

    It isn't, which is why some (typically FPS) games force only a single buffered frame. Hurts throughput a bit but helps latency.

    If it's a CPU spike, it shouldn't change the average frame rate. Take a look at the example I posted again... assuming the GPU never goes completely idle during a spike, the average FPS will be measuring the GPU's throughput, and a long CPU frame will be perfectly offset by shorter frames following it. Of course, if the spikes are long enough to starve the GPU entirely then fixing them could improve average FPS, but only by a tiny bit (unless there are *tons* of spikes...).
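
    A toy model of the buffering makes that offsetting visible (a minimal sketch, assuming a fixed 20 ms GPU cost per frame and three frames of CPU run-ahead; it illustrates the idea, not any real driver):

    Code:
    def simulate(cpu_ms, gpu_ms=20.0, max_queue=3):
        # CPU submits frames, the GPU takes gpu_ms for each, and the CPU may only
        # queue up to max_queue frames ahead before backpressure stalls it.
        present, gpu_done = [], []
        clock = 0.0
        for i, c in enumerate(cpu_ms):
            clock += c
            if i >= max_queue:                     # wait for queue space (backpressure)
                clock = max(clock, gpu_done[i - max_queue])
            present.append(clock)                  # the timestamp FRAPS would record
            start = max(clock, gpu_done[-1] if gpu_done else 0.0)
            gpu_done.append(start + gpu_ms)
        return present

    smooth = simulate([5.0] * 20)
    spiky = simulate([5.0] * 9 + [50.0] + [5.0] * 10)    # one 50 ms CPU spike

    print([round(b - a) for a, b in zip(spiky, spiky[1:])])
    # ... 20, 20, 50, 5, 5, 20 ... : the spike is followed by two very short frames
    print(smooth[-1], spiky[-1])    # identical end times, so identical average FPS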
     
  12. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    Unfortunately, the ghosting on LCDs isn't as "nice" as the ghosting on slow-phosphor CRTs. On my older LCD, there was noticeable ghosting on high-contrast edges; the ghosting didn't leave trails of the edge itself, but dark trails, as the pixels transitioned to black and then back to the appropriate color. And this was on a monitor rated at 6 ms gray-to-gray! My new monitor is much better even though it's rated at 8 ms gray-to-gray.
     
  13. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    I think he means enabling dynamic v-sync, which is basically a frame limiter, yes. But be careful: some games need an extra trick. Skyrim, for example, also needs a small .ini edit (delete iPresentInterval=1 from SkyrimPrefs.ini and save the changes, then open Skyrim.ini, add iPresentInterval=0 at the bottom of the [Display] section and save, and leave v-sync on in game).
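
    For reference, the edit described above ends up looking roughly like this in Skyrim.ini (illustrative excerpt only; the matching iPresentInterval=1 line is removed from SkyrimPrefs.ini):

    Code:
    [Display]
    ; ...existing display settings stay as they are...
    iPresentInterval=0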
     
    #73 lanek, Jan 4, 2013
    Last edited by a moderator: Jan 4, 2013
  14. snarfbot

    Regular Newcomer

    Joined:
    Apr 23, 2007
    Messages:
    575
    Likes Received:
    188
    So say, for the sake of argument, the game in question was well optimized and the longest frame took 33 milliseconds to render. If you limited the frame rate to 30 fps, would the GPU drop frames or just idle in between the less complicated frames?

    I ask because, at least in an FPS, I'd rather have my view move a predictable distance per frame at a given mouse velocity than jump around a bit as the frame rate varies.

    It reminds me of the stuff that id Software and others are doing with variable resolution depending on load. Would it be possible to make the engine render the remainder of the frame at half resolution once a specific interval has elapsed, say 16 milliseconds? That would be a killer feature for a graphics driver, really, but they would probably get accused of cheating in benchmarks.
     
  15. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    The GPU would sit idle if the app's CPU thread limited the rate at which frames were drawn to be less than the GPU's rendering rate.
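
    As an illustration, a CPU-side cap is usually just the submitting thread sleeping until the next frame slot comes around (a minimal sketch, not any particular limiter's implementation):

    Code:
    import time

    TARGET = 1.0 / 30.0                # cap at 30 fps (33.3 ms slots)

    def render_frame():
        # Stand-in for the real work: anything that finishes early leaves slack in the slot.
        time.sleep(0.010)

    next_slot = time.perf_counter()
    for _ in range(60):
        render_frame()                 # CPU submits; the GPU finishes well before the slot ends
        next_slot += TARGET
        slack = next_slot - time.perf_counter()
        if slack > 0:
            # Nothing new is submitted during this sleep, so the GPU simply sits
            # idle for the rest of the slot rather than dropping frames.
            time.sleep(slack)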
     
  16. digitalwanderer

    digitalwanderer Dangerously Mirthful
    Legend

    Joined:
    Feb 19, 2002
    Messages:
    17,276
    Likes Received:
    1,788
    Location:
    Winfield, IN USA
    Thanks for posting this Andy, it's explaining a LOT to me about what's been bothering me about gaming on my 7950 lately. Remind me later to pick your brain to explain the hard bits pls. ;)
     
  17. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    Don't forget, whatever the variation in frame rendering, frame times will never be exactly the same... it's impossible to render one frame and then the next with exactly the same timing - unless you just render the same image again and again.
     
  18. XTF

    XTF
    Newcomer

    Joined:
    Jan 3, 2013
    Messages:
    27
    Likes Received:
    0
    Suppose the workload is CPU bottlenecked. It'd never see back-pressure (given enough buffering), so the frametimes it (and Fraps) see depend solely on the CPU part. Now suppose the first frame renders (the part after Present()) 'instantly' (1ms), the next frame renders very slowly (51ms) and this keeps on alternating. The on-display interval between frames obviously isn't smooth, but wouldn't your measurements still be smooth and thus not notice this problem?
    My point is that this total latency would also be interesting to measure. A game with lower latency might feel much better even if frame interval is equal (or worse).

    BTW, where does one report issues with Intel graphics (drivers)?

    Same question for AMD. It seems the normal 'feedback' form is ineffective.
     
  19. XTF

    XTF
    Newcomer

    Joined:
    Jan 3, 2013
    Messages:
    27
    Likes Received:
    0
    If vsync is enabled and rendering is fast enough, there's a fixed display interval (8.3 ms at 120 Hz or 16.7 ms at 60 Hz) and the game can assume this fixed interval; this should result in ultimate smoothness (but not the lowest latency).
     
  20. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    If I'm understanding you correctly, any variation like that will show up in the FRAPS timings. You seem to be presupposing that there's some part of the frame loop (i.e. "after Present") that is not being timed. This is not the case... there's no "begin/end" timer zone; there's a timestamp taken at the same place in the loop every frame and differenced against the last frame's. Thus there's never any time that can be "lost" (else, like I said, the game time could drift from the true wall-clock time) - it will always be represented in one frame or the next, and where you measure it "in the loop" doesn't really matter.

    If instead you're noting that there could be variation downstream of the game engine (say on the GPU) that could cause issues, then yes, I addressed that in the original post. But that's much less likely than spikes on the CPU, and it still doesn't change the fact that spikes shown in FRAPS are a problem.

    Sure, but this is largely orthogonal to the "smoothness" of motion, which is the topic of the thread.
     
    #80 Andrew Lauritzen, Jan 5, 2013
    Last edited by a moderator: Jan 5, 2013