Nvidia G-SYNC

Can someone explain why this is:

  • better than triple buffering
  • better than when a game developer implements their own frame-rate-sensitive rendering to maintain a given frame rate (something some console games do already, as I understand it)
 
Can someone explain why this is:

  • better than triple buffering
  • better than when a game developer implements their own frame-rate-sensitive rendering to maintain a given frame rate (something some console games do already, as I understand it)

There are two issues, I think (vsync is enabled for the sake of discussion; I can't stand tearing). A rough sketch of both follows the list.
  1. Presentation latency: if you have just started transmitting the front buffer and the back buffer has just finished rendering, you have to wait up to 16 ms to present the next frame to the monitor.
  2. Presentation jitter: the variance of the delta between the time the frame represents (game time) and the time the frame is presented to the human (wall-clock time). Again, the maximum is around 16 ms because of the previous point, but the minimum is just above zero. Perceived as jerkiness, I guess.
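To put rough numbers on both (purely illustrative: invented frame times, an assumed 60 Hz panel, and ignoring the back-pressure vsync puts on the renderer):

    import math, statistics

    REFRESH = 1000.0 / 60.0                       # 60 Hz panel: 16.67 ms per refresh
    frame_times = [14.0, 18.0, 15.5, 22.0, 16.0, 25.0, 15.0]    # invented render times, ms

    def present_waits(variable_refresh):
        t, waits = 0.0, []
        for ft in frame_times:
            t += ft                               # wall-clock time the frame finishes rendering
            if variable_refresh:
                present = t                       # panel starts scanout as soon as the frame is ready
            else:
                present = math.ceil(t / REFRESH) * REFRESH   # wait for the next fixed refresh
            waits.append(present - t)             # issue 1: time the finished frame sits waiting
        return waits

    for name, var in (("fixed 60 Hz + vsync", False), ("variable refresh   ", True)):
        w = present_waits(var)
        # the mean wait is the extra presentation latency, its spread the presentation jitter
        print(f"{name}: mean wait {statistics.mean(w):5.2f} ms, "
              f"jitter (stdev) {statistics.pstdev(w):5.2f} ms")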

Triple buffering solves the "sync rendering with an integer fraction (60, 30, 15, ...) of the monitor's Hz if v-synced" problem, but it adds up to a full monitor frame minus epsilon of delay, and it doesn't address the jitter. Of course, if you are not a pro-xxxxtreemm-cyber-athlete in a competition you probably don't care too much about an extra 16 ms of presentation latency, but the jitter reduction might be noticeable. I doubt many games aim to sync render time with presentation time while triple buffered and vsynced.

The jerkiness you address in your second point is generated by sudden changes in render FPS, where you turn around and go from 60 fps to 34 fps and your brain doesn't like that. This is not addressed by G-Sync; you are still free to frame-pace your rendering to the worst case.
 
What I'm getting at is why is this relevant to games when developers already know how to sync to presentation time?

Or is this just for lazy PC developers (or those encumbered by shit in the API) and gamers with cheap cards that can't maintain 30 or 60 fps?

The fact is, PC gaming is still driven by what developers do for consoles. Developers on (previous-gen) consoles already have the tech to solve latency/tearing/jitter problems. How is this relevant in new games/engines?

Are PC games stuck, unable to get at the information they need to adaptively hit presentation time?
 
Can someone explain why this is:

  • better than triple buffering
  • better than when a game developer implements their own frame-rate-sensitive rendering to maintain a given frame rate (something some console games do already, as I understand it)

Currently you have to sacrifice some effects to maintain 60 fps. With this, you can render at 40 (or drop down to 40 occasionally) and it still looks good.
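To make that concrete, a back-of-the-envelope sketch (assuming a 60 Hz panel, a steady 40 fps render rate, and that rendering isn't throttled by the display, e.g. triple buffered):

    import math

    REFRESH = 1000.0 / 60.0          # 16.67 ms per refresh on a 60 Hz panel
    FRAME   = 1000.0 / 40.0          # 25 ms per frame at a steady 40 fps

    # first refresh on which each of six consecutive frames can appear
    flips = [math.ceil(i * FRAME / REFRESH) for i in range(1, 7)]
    # how long each frame stays on screen before the next one replaces it
    on_screen = [(b - a) * REFRESH for a, b in zip(flips, flips[1:])]

    print("on-screen time per frame, 60 Hz + vsync:", [f"{t:.1f} ms" for t in on_screen])
    print("on-screen time per frame, variable refresh: 25.0 ms each")
    # vsync alternates 16.7 / 33.3 ms (judder); variable refresh shows every frame for 25 ms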
 
Look at it from a fundamental level:

Old style: you have a source with a random sample rate going into a sink with a fixed sample frequency. No matter what kind of technology you're talking about (sound, data transmission, spatial sampling, ...), basic signal theory says you're going to have artifacts because of it.

New: sink is synchronous to source. No such artifacts by definition. The beauty of this thing for Nvidia is that they have fundamental signal theory principles backing them up on this.
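A toy illustration of that resampling argument, nothing more (invented frame intervals, a 60 Hz sink):

    import math

    REFRESH = 1000.0 / 60.0                            # fixed-rate sink: 16.67 ms per refresh
    intervals = [15.0, 19.0, 17.0, 24.0, 15.0, 30.0]   # variable-rate source, ms between frames

    t, first_refresh = 0.0, []
    for dt in intervals:
        t += dt                                        # time each new frame becomes available
        first_refresh.append(math.ceil(t / REFRESH))   # first refresh that can show it

    # number of refresh periods each frame occupies before being replaced
    occupancy = [b - a for a, b in zip(first_refresh, first_refresh[1:])]
    print("refreshes per frame on the fixed sink:", occupancy)   # uneven [2, 1, 1, 1, 2]

A sink that follows the source would show every frame for exactly its own duration; the uneven pattern above is the artifact.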
 
What I'm getting at is why is this relevant to games when developers already know how to sync to presentation time?

Or is this just for lazy PC developers (or those encumbered by shit in the API) and gamers with cheap cards that can't maintain 30 or 60 fps?

With a game that is entirely on rails and can predict with 100% accuracy what the player will do, you could solve this problem purely in development. However, I don't believe that is the case most of the time. As Fuboi pointed out, this is to handle the latency hit when something happens that causes the framerate to drop or vary wildly.

I actually see this as something that is going to happen more and more as games move into the online domain. Particle effects from a large number of players can cause hugely varying framerate. To solve this as a developer, you would either need to gimp your graphics to the point that the worst case situation is viable (say 25 players all casting a spell with particle effects at once) or you need to find a way to change the impact particle effects have adaptively.

In single-player games, there are still things that can cause difficult-to-predict framerate drops, for instance a fast-scrolling camera while several explosions are happening and the player is doing something that generates particle effects. These are the moments you can least afford latency spikes, but they are also the moments you are most likely to see them with vsync or triple buffering enabled.
 
New: sink is synchronous to source. No such artifacts by definition. The beauty of this thing for Nvidia is that they have fundamental signal theory principles backing them up on this.
Game developers are already synchronising, so there's nothing new here. Doing it in hardware is an alternative for the subset of games where it makes a difference: if you're running Titan, about 5 games at 1920x1080?
 
Game developers are already synchronising, so there's nothing new here. Doing it in hardware is an alternative for the subset of games where it makes a difference: if you're running Titan, about 5 games at 1920x1080?

You can try to synchronize, but as Carmack said, you have to sacrifice a lot of effects to get there. And yet, as TR benches show, there are all too many frames that stutter.
 
Game developers are already synchronising, so there's nothing new here. Doing it in hardware is an alternative for the subset of games where it makes a difference: if you're running Titan, about 5 games at 1920x1080?
You can't synchronize after the fact.

Scene 1: you're looking over a vast expansive field with waving grass, far away mountains, bear cubs frolicking around their mother. Millions of polygons with different shaders.

Scene 2: a troll jumps in front of the camera. The complexity of the first scene is occluded by the uniformity of the troll's skin shader.

Now swap between the two: how are you going to guarantee that both render in the same amount of time?

You can't. It is inevitable that you will have different render rates at different times: your source has a variable sample rate; your sink does not. Artifacts.

Unless you're willing to waste GPU performance to have a large enough guard band that you'll never ever go below the max refresh rate of your monitor, you're going to have them one way or the other.
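Back-of-the-envelope on what that guard band costs (the worst-case-to-typical ratio is invented):

    BUDGET_MS = 1000.0 / 60.0            # 16.67 ms per refresh at 60 Hz
    WORST_OVER_TYPICAL = 1.8             # assumption: worst-case frame costs 1.8x a typical one

    typical_ms = BUDGET_MS / WORST_OVER_TYPICAL   # what a typical frame may cost if the worst case must fit
    wasted = 1.0 - typical_ms / BUDGET_MS         # GPU headroom left idle on a typical frame
    print(f"typical frame budget: {typical_ms:.1f} ms, "
          f"headroom unused most of the time: {wasted:.0%}")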
 
In single-player games, there are still things that can cause difficult-to-predict framerate drops, for instance a fast-scrolling camera while several explosions are happening and the player is doing something that generates particle effects. These are the moments you can least afford latency spikes, but they are also the moments you are most likely to see them with vsync or triple buffering enabled.
You don't have to predict anything. You can take the opposite approach: increase effects progressively (at 60 fps) until you hit your rendering-time budget.
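A minimal sketch of that progressive approach (hypothetical, not any shipping engine's code):

    BUDGET_MS = 1000.0 / 60.0              # target: one frame per 60 Hz refresh

    def update_effects_scale(scale, last_frame_ms, step_up=0.02, step_down=0.10):
        # nudge the effects/particle density up while comfortably under budget,
        # back it off quickly when a frame runs long
        if last_frame_ms < 0.9 * BUDGET_MS:
            return min(1.0, scale + step_up)
        if last_frame_ms > BUDGET_MS:
            return max(0.1, scale - step_down)
        return scale                       # close to budget: hold

    # toy usage with made-up frame times
    scale = 0.5
    for ms in [12.0, 12.5, 13.0, 18.0, 16.0, 14.0]:
        scale = update_effects_scale(scale, ms)
        print(f"frame {ms:4.1f} ms -> effects scale {scale:.2f}")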

There is nothing surprising to the game about the twin facts of an explosion and the camera moving fast at the same time.

It's just that game developers have got away with being non-adaptive.

No different to game developers getting away with no anti-aliasing.
 
You don't have to predict anything. You can take the opposite approach: increase effects progressively (at 60 fps) until you hit your rendering-time budget.

There is nothing surprising to the game about the twin facts of an explosion and the camera moving fast at the same time.

It's just that game developers have got away with being non-adaptive.

No different to game developers getting away with no anti-aliasing.

So your solution is to use, say, 25% of available graphics resources so that the one event that can happen 1% of the time in level X renders at 60 fps?

And you really think that is viable?
 
(BTW: I hate the "so-and-so are lazy" argument. The visuals of modern games are astounding, the tech behind them is extremely complex, and the number of programmers slaving away on it is in the hundreds, if not thousands. When somebody brings out the laziness accusation, Dunning-Kruger jumps to mind instinctively.)
 
Can someone explain why this is:

  • better than triple buffering
  • better than when a game developer implements their own frame-rate-sensitive rendering to maintain a given frame rate (something some console games do already, as I understand it)

I was disappointed by Nvidia's handling of the triple-buffering question. They just dismissed it outright with a cursory statement about it being "just another buffer to deal with". I believe the crux of it is that triple buffering adds latency and doesn't fully solve the judder issue; it only solves the frame-rate drop when going under 60 fps with vsync.

With respect to developers coding their own thing, I think you answered your own question. A universal hardware solution is far superior to custom-rolled solutions for every game that come with their own problems. Carmack is one of the biggest advocates of frame-rate targets but talked about the limitations of that approach in Rage.
 
So your solution is to use, say, 25% of available graphics resources so that the one event that can happen 1% of the time in level X renders at 60 fps?

And you really think that is viable?
You aren't paying attention. There is no reason to go for broke on the first difficult frame when the effects step up a notch.
 
A universal hardware solution is far superior to custom-rolled solutions for every game that come with their own problems.
I don't see a universal hardware solution here, do you?

Or are console games played on TVs irrelevant? And the games played on anything but a few models of Nvidia GPU are irrelevant, too?
 