Micro-stuttering and how to solve it

Doesn't vsync fix micro-stuttering?

Unfortunately no. :(

It should, as long as you're hitting a framerate that's consistently above the refresh rate of the monitor. If you're using double buffering, there should also not be any micro-stuttering when you're running below the refresh rate, although the framerate will instead get capped very low, of course. If you're using triple buffering and the framerate is lower than the refresh rate, you'll get micro-stuttering even on single-GPU setups. Interestingly enough, I don't think I've ever heard anyone complain about that, although in that case you'd see it jump between 16ms and 32ms, rather than, say, between 1ms and 31ms.
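
To make that 16ms/32ms jumping concrete, here is a minimal, simplified sketch of my own (not anything from a driver): it models a constant 20ms GPU frame at a 60Hz refresh, with double buffering stalling the GPU until each flip, and triple buffering letting it render ahead.

#include <cmath>
#include <cstdio>

int main()
{
    const double refresh = 1000.0 / 60.0; // ms per vblank at 60Hz
    const double render  = 20.0;          // constant GPU time per frame, ms

    // Double buffering: the GPU can't start the next frame until the previous
    // one has flipped, so every displayed interval is a whole number of refreshes.
    printf("double buffering:\n");
    double prevFlip = 0.0;
    for (int i = 0; i < 6; i++)
    {
        double done = prevFlip + render;                    // GPU finishes the frame
        double flip = std::ceil(done / refresh) * refresh;  // shown at the next vblank
        printf("  shown for %.1f ms\n", flip - prevFlip);   // always 33.3 -> steady 30Hz
        prevFlip = flip;
    }

    // Triple buffering (simplified: the GPU never stalls): frames finish every 20ms
    // and each is shown at the first vblank after it is ready, so the displayed
    // interval jumps between ~16.7ms and ~33.3ms even though the average is 20ms.
    printf("triple buffering:\n");
    double prevShown = 0.0;
    for (int i = 1; i <= 8; i++)
    {
        double done  = i * render;
        double shown = std::ceil(done / refresh) * refresh;
        printf("  shown for %.1f ms\n", shown - prevShown);
        prevShown = shown;
    }
    return 0;
}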
 
Humus, this is a problem I am very concerned about. I have spent hours and hours trying to reduce its effect.

The issue is even more noticeable (if we are discussing the same issue) when VSync is on, as the frame time keeps changing between 1/60 and 1/30, producing horrible results.

The only way I found to reduce this (with vsync on) is to ensure you write to/use the same render target every frame. Others suggest using queries or locking a render target to force the CPU and GPU to sync.

Moreover, one has to set the "pre-rendered frames" setting to 0 (NVidia cards, NVidia control panel).

With this method, the framerate automatically drops to 30Hz instead of varying between 60Hz and 30Hz, producing much smoother results.
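
For reference, here is a minimal sketch of the query-based sync mentioned above, using a Direct3D 9 event query (my own illustration; it assumes a valid device pointer, and error handling is omitted):

#include <d3d9.h>

// Call once after device creation (assumes a valid IDirect3DDevice9* device).
IDirect3DQuery9* CreateFrameFence(IDirect3DDevice9* device)
{
    IDirect3DQuery9* query = NULL;
    device->CreateQuery(D3DQUERYTYPE_EVENT, &query); // error handling omitted
    return query;
}

// Call at the end of every frame, after Present().
void WaitForGPU(IDirect3DQuery9* query)
{
    // Insert a fence into the command stream and spin until the GPU reaches it.
    // This prevents the driver from queuing frames ahead (similar in effect to
    // "pre-rendered frames = 0"), at the cost of some GPU idle time.
    query->Issue(D3DISSUE_END);
    while (query->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
    {
        // busy-wait; a real application could Sleep(0) or do CPU work here
    }
}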

The disadvantage is that you theoretically lose some GPU power, as it sometimes sits idle.

Another issue is that it doesn't work with multi-GPU systems.

I have been in touch with some guys at NVidia and told them that, in my opinion, they should implement such a feature at the driver level. However, they suggested rendering the scene to an offscreen render target and, according to a measured frame time, presenting it once or several times. Computing the frame time is an issue, however, as you don't know whether you are CPU or GPU limited.

It turns out to be real chaos, and customers keep asking: "Why do I see 49 FPS and it looks jerky? It looks smoother at 30Hz with the present interval set to 2?! You're cheating in your FPS counter!!"
 
The issue is even more noticeable (if we are discussing the same issue) when VSync is on, as the frame time keeps changing between 1/60 and 1/30, producing horrible results.

I'd be interested in seeing this scenario where it changes on a frame-by-frame basis. Framerate demultiplication with V-sync on is normal fare. Also, a difference of around 16ms in inter-frame delay would be... difficult for a normal human to perceive.
 
Not sure I understand correctly, my English is not that great ;)

Let's consider a frame that requires 20ms to be computed by the GPU, with vsync on. This means, because the driver/GPU runs 2 or 3 frames ahead, that the frame will be displayed sometimes after 1/60s and sometimes after 2/60s.

The issue is that on the CPU side this is not visible, so the measured frame delta time will be 20ms, and all animations will be computed according to that time. So the delta time is 20ms, but the real frame interval is 16 or 33ms.

Believe me, the effect is noticeable.

In that case, if you sync the GPU at the start of each frame and ensure it doesn't compute any frames ahead (i.e. it doesn't buffer frames, which also loses any benefit of processing asynchronously), the framerate will be 30Hz, and the delta time computed on the CPU for the animation will be 1/30s, matching the real frame delta time => impression of smoothness.
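
To put numbers on the mismatch, a small sketch of my own (the 16.7/33.3 pattern is illustrative): the application advances its animation by the 20ms it measured, but the image actually stays on screen for alternating 16.7ms and 33.3ms, so the apparent speed of a moving object changes by a factor of two from one frame to the next.

#include <cstdio>

int main()
{
    const double cpuDelta = 20.0;  // ms, what the application measures and animates with
    const double shown[]  = { 16.7, 33.3, 16.7, 33.3, 16.7, 33.3 }; // ms the image actually stays on screen

    const double speed = 1.0;      // intended motion, units per ms
    for (int i = 0; i < 6; i++)
    {
        double step          = speed * cpuDelta;  // how far the object moves this frame
        double apparentSpeed = step / shown[i];   // how fast it looks on screen
        printf("frame %d: %.0f units over %.1f ms on screen -> %.2f units/ms apparent\n",
               i, step, shown[i], apparentSpeed);
    }
    return 0;
}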

I guess this will be confusing for some of you; it is actually very hard to explain (and I am quite bad at explaining ;))

As someone said, it seems vendors are more concerned about FPS numbers than about the actual feeling of smoothness :(
 
gjaegy: That's a great explanation of something I've been unable to explain to people who can't see it. Your English is fine :)
 
The issue is even more noticeable (if we are discussing the same issue) when VSync is on, as the frame time keeps changing between 1/60 and 1/30, producing horrible results.

If you use double-buffering instead of triple-buffering it should stick to 1/30 though.

Let's consider a frame that requires 20ms to be computed by the GPU, with vsync on. This means, because the driver/GPU runs 2 or 3 frames ahead, that the frame will be displayed sometimes after 1/60s and sometimes after 2/60s.

The issue is that on the CPU side this is not visible, so the measured frame delta time will be 20ms, and all animations will be computed according to that time. So the delta time is 20ms, but the real frame interval is 16 or 33ms.

For the record, this problem is not limited to multi-GPU systems. The fundamental problem here is that we really want to know ahead of time how long it will take to render the frame. Since we can't really predict this, an app has to make do with the last frame's value. But if it jumps up and down between 16ms and 33ms, this indeed doesn't produce the right results. You could use a smoothed value for the animation though, like the average over the last second. So if the overall framerate is 50fps, you animate at 20ms. You'll still get some micro-stuttering, although less than if you end up using a 16ms frametime for a 33ms frame and vice versa.
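
A minimal sketch of that smoothing idea (the class and its names are just mine): keep the raw frame times of roughly the last second and animate with their average instead of the last raw delta.

#include <deque>

class SmoothedTimer
{
public:
    SmoothedTimer() : m_sum(0.0) {}

    // Feed the raw frame time in each frame (in seconds);
    // returns the smoothed delta to use for animation.
    double Update(double rawDelta)
    {
        m_history.push_back(rawDelta);
        m_sum += rawDelta;

        // Drop old samples so the window covers roughly the last second.
        while (m_history.size() > 1 && m_sum - m_history.front() >= 1.0)
        {
            m_sum -= m_history.front();
            m_history.pop_front();
        }
        return m_sum / m_history.size();
    }

private:
    std::deque<double> m_history;
    double m_sum;
};

With raw deltas alternating between 16.7ms and 33.3ms at an overall 50fps, the window average settles around the 20ms value suggested above.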
 
Thank you, Humus, for your suggestion.

I must ensure this suggestion doesn't distort time. I work on a product that is linked to a simulator, so it must be synchronized with it (I could use two time counters to be safe, a normal one and a "mean" one). As we use several PCs to generate several parts of the environment (imagine several projectors put side by side), I must also ensure proper animation sync between all the PCs.

A scheme where you sum up and average the times of the last X frames should ensure this, however (whereas using the last second's FPS wouldn't).

Will give it a try.
 
You can't *see* micro-stuttering. You only get the impression of lower FPS.

Sure you can - if you perceive it as lower FPS, you see it.

There is a tool that simulates the problem on single-GPU systems. I don't know if it was posted already, but it's available here:

http://www.computerbase.de/downloads/software/systemprogramme/grafikkarten/mikroruckler-tester_single-gpu/

It adds a 15ms delay, which seems to be realistic judging from the frametime logs that have been posted. You can vary the target FPS and the delay time, and switch between single-GPU and the simulated multi-GPU mode by pressing the space key.
 
Hi,

I'm curious, is this the same as "tearing" when the monitor refresh is not locked to the framerate (vsync)?

If so, I have a question: I just installed my new ATI 4850 card and I'm running my game as usual at 75 fps with the LCD display at 75Hz, and there is a noticeable flickering band across the screen.

I can't remember having this problem with my old NVIDIA card, and if I switch to the onboard Intel gfx the tearing is very, very small, just a slight displacement between the upper and lower portions of the frame.

Question is: could it be that the ATI card is producing more tearing than the NVIDIA ones?

Thanks.
 
I'm curious, is this the same as "tearing" when the monitor refresh is not locked to the framerate (vsync)?

No.

Here is how I understand micro-stuttering:

It is about the uneven distribution of time between frames. If you take 3 frames, in the ideal case the time between frames 1 and 2 is equal to the time between frames 2 and 3. In the micro-stuttering case, the time between frames 1 and 2 is typically "N" units and the time between frames 2 and 3 is some multiple of "N" units (such as 3N). The end effect is that you get a fast update, then a slow update, then a fast update, then a slow update.
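
A tiny sketch of my own to illustrate: both sequences below average 50 fps, but the second spaces its frames N / 3N (10ms / 30ms), which is the fast/slow/fast/slow pattern described above.

#include <cstdio>

int main()
{
    const double even[4]   = { 20.0, 20.0, 20.0, 20.0 }; // ms between frames, evenly spaced
    const double uneven[4] = { 10.0, 30.0, 10.0, 30.0 }; // same average, N / 3N spacing

    double evenAvg = 0.0, unevenAvg = 0.0;
    for (int i = 0; i < 4; i++)
    {
        evenAvg   += even[i]   / 4.0;
        unevenAvg += uneven[i] / 4.0;
    }

    // Both report the same average FPS, but the uneven one is perceived as slower,
    // closer to the rate implied by its longest gaps.
    printf("even:   %.0f ms average -> %.0f fps\n", evenAvg, 1000.0 / evenAvg);
    printf("uneven: %.0f ms average -> %.0f fps, worst gap %.0f ms (~%.0f fps)\n",
           unevenAvg, 1000.0 / unevenAvg, 30.0, 1000.0 / 30.0);
    return 0;
}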
 
Beware that at 75Hz your LCD might be dropping frames to actually display only 60Hz, i.e. it's faking higher-than-60Hz support.
(But your problem may not be related to this.)
 