Nvidia G-SYNC

If you're gaming with Titan, then which games is this relevant to?
I checked the review benchmarks of a Titan: none of the games tested reached minimum frame rates of 120fps at 1080p with Ultra settings. (In fact, most don't even reach 60fps.)
 
I don't see a universal hardware solution here, do you?

Or are console games played on TVs irrelevant? And the games played on anything but a few models of NVidia GPU are irrelevant, too?

Are we stuck in this moment in time? A universal solution is preferable today and in the future, a future in which it's very possible that this becomes standard tech and releases devs from an unnecessary burden.
 
You aren't paying attention. There is no reason to go for broke on the first difficult frame when the effects step up a notch.

Frame metering is always required. However, it's impossible to meter frames accurately, since you can't reliably predict what's going to be rendered for the next frame, so there's always the possibility that you go over the 1/60 second barrier. If you use double buffering, you'll end up with a frame taking 1/30 second to display, which will be noticeable. Therefore, you have to be conservative with frame metering to reduce the possibility of a frame going over 1/60 second. (Triple buffering makes it less severe but introduces additional latency.)

With G-sync, this is no longer a problem, because you are free to present a frame at 1/59 second or 1/58 second instead of having to drop to 1/30 second whenever you go over 1/60 second, so you'll see much less stutter.
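To make the difference concrete, here's a rough sketch (Python, toy numbers; not how any driver actually schedules frames) of the frame-to-frame interval you end up seeing with a fixed 60Hz refresh versus a variable refresh:

```python
# Rough sketch (toy numbers): the interval you see between one displayed frame
# and the next, given how long the incoming frame took to render, under
# v-synced double buffering on a fixed 60 Hz display vs. a variable-refresh
# display. Assumes the variable-refresh panel can hold a frame for up to ~33 ms.
import math

REFRESH_MS = 1000.0 / 60.0          # fixed 60 Hz refresh interval
VRR_FLOOR_MS = 1000.0 / 30.0        # assumed minimum-refresh floor of the panel

def interval_fixed(render_ms):
    """Double buffering + v-sync: the interval rounds up to a whole refresh."""
    return math.ceil(render_ms / REFRESH_MS) * REFRESH_MS

def interval_variable(render_ms):
    """Variable refresh: the new frame is scanned out as soon as it's ready,
    provided it arrives before the assumed minimum-refresh floor."""
    return render_ms if render_ms <= VRR_FLOOR_MS else interval_fixed(render_ms)

for t in (15.0, 17.0, 20.0, 28.0):
    print(f"render {t:4.1f} ms -> fixed: {interval_fixed(t):4.1f} ms, "
          f"variable: {interval_variable(t):4.1f} ms")
```

A 17ms frame costs a full 33.3ms step on the fixed display but only its own 17ms on the variable one, which is the 1/60-to-1/30 cliff described above.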
 
I don't see a universal hardware solution here, do you?

Or are console games played on TVs irrelevant? And the games played on anything but a few models of NVidia GPU are irrelevant, too?
TVs regularly need new models whenever HDMI adds a new resolution + framerate mode.
Some years ago, being able to natively display 24fps was all the rage: stutter-free Blu-rays! As long as your player and TV understood each other.

I don't see why this travesty has to continue. Add arbitrary frame syncs and be done with it, for all time. If nothing else, Nvidia finally raises awareness that LCDs don't work like CRTs and don't need to be fed like them, and it's all for the better if you stop doing so.

You can blame the whole industry for being incompetent enough not to realize this much earlier.
 
There's no prediction required. e.g. you render your explosion at 1/4 (or 1/16th, etc.) resolution when it starts and if the frame time impact is low, reduce the fraction. Even if the explosion takes 12 frames before it reaches "max IQ" on a high-end system, the low-end system hasn't stuttered at any point in the explosion.

The developer can benchmark the GPU (in game) to identify the starting fraction for this effect (e.g. 1/4 for Titan, 1/32 for some APU).
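For what it's worth, here's a toy sketch of the kind of feedback loop I mean (Python; the class name, thresholds and step sizes are made up for illustration, not taken from any engine):

```python
# Toy sketch of the feedback loop described above: start a new effect at a
# conservative resolution fraction and only raise it while frame times stay
# safely under budget. Class name, thresholds and step sizes are made up.
FRAME_BUDGET_MS = 1000.0 / 60.0      # target: 60 fps
HEADROOM_MS = 2.0                    # assumed safety margin before adding cost

class ScalableEffect:
    def __init__(self, start_fraction):
        # e.g. 1/4 on a Titan-class GPU, 1/32 on a slow APU (from a benchmark)
        self.fraction = start_fraction

    def tune(self, last_frame_ms):
        """Adjust the effect's render-target fraction from the last frame time."""
        if last_frame_ms < FRAME_BUDGET_MS - HEADROOM_MS and self.fraction < 1.0:
            self.fraction *= 2       # cheap frame: step quality up
        elif last_frame_ms > FRAME_BUDGET_MS and self.fraction > 1.0 / 64.0:
            self.fraction /= 2       # over budget: step quality back down

explosion = ScalableEffect(start_fraction=1.0 / 16.0)
for frame_ms in (12.0, 13.0, 12.5, 15.5, 17.2, 14.0):
    explosion.tune(frame_ms)
    print(f"frame {frame_ms:4.1f} ms -> render effect at 1/{int(1 / explosion.fraction)} res")
```

The point is only that the fraction never has to be predicted for a frame that hasn't been rendered yet; it reacts to frames that have already fit (or haven't).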

You'll always be leaving some performance on the table.

This is no different from how a 60Hz locked framerate always leaves performance (IQ) on the table, since there are large portions of any game that can render at more than 60 fps (because it's PC).

The point is, console developers are already using excess framerate to improve IQ, and they are reducing IQ to maintain framerate. They don't need hardware to do this. And, more fundamentally, this hardware makes absolutely no difference to console gamers because they won't have this hardware this gen. At the same time, these techniques will end up in PC games because of their console lineage (well, some PC games have been adapting IQ to maintain performance for more than 10 years).

You can argue the techniques are still crude, but so are low res textures, square heads, blocky/noisy shadows, fade-in vegetation etc.
 
There's no prediction required. e.g. you render your explosion at 1/4 (or 1/16th, etc.) resolution when it starts and if the frame time impact is low, reduce the fraction. Even if the explosion takes 12 frames before it reaches "max IQ" on a high-end system, the low-end system hasn't stuttered at any point in the explosion.

You seem to assume that the rendering times between frames are somewhat "smooth" (e.g. the time needed to render the next frame is in the neighborhood of the time needed to render this frame). Unfortunately, this is not always the case (and in some situations, it's rarely the case).

It can also be difficult to do scaling in small steps. Normally, when you scale the resolution up or down, the rendering time changes greatly, especially when your bottleneck is not fillrate related. It's much harder to scale when you are geometry limited or CPU limited.

Let's just assume that you have perfect frame metering, for the sake of argument. It's still possible that your rendering might "miss" v-sync, because the timing of the monitor may not be aligned perfectly with your GPU. Then you still introduce stuttering, because your frame rate suddenly drops from 60fps to 30fps. Triple buffering is the traditional solution for such a problem, but G-sync is simply a better solution.
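To illustrate the "missing v-sync" point with a toy model (Python, made-up numbers; real swap-chain behaviour is more involved): a frame whose work starts slightly out of phase with the refresh gets held for an extra refresh even though it rendered in under 16.7ms, and that single 33ms frame in a run of 16.7ms frames is the hitch you see. A variable-refresh display would simply have scanned it out about 17ms after the previous frame.

```python
# Toy model: perfectly metered 16.6 ms frames on a fixed 60 Hz display,
# with the first frame's work starting 0.5 ms out of phase with the refresh.
import math

REFRESH_MS = 1000.0 / 60.0
RENDER_MS = 16.6                    # under one refresh interval

def next_refresh(t_ms):
    """First refresh point at or after t_ms (fixed 60 Hz scanout)."""
    return math.ceil(t_ms / REFRESH_MS) * REFRESH_MS

start = 0.5                         # work begins 0.5 ms after a refresh point
for _ in range(3):
    finish = start + RENDER_MS
    shown_at = next_refresh(finish)
    print(f"ready at {finish:6.2f} ms, shown at {shown_at:6.2f} ms "
          f"(held for {shown_at - finish:5.2f} ms)")
    start = shown_at                # next frame's work begins at the swap
```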
 
Jawed, you win, I concede: game developers are lazy. It's really all of them, except maybe Rage and Angry Birds. They should render everything with increasing levels of quality to ensure a fixed frame rate, like consoles, which have a single configuration, irrespective of the thousands of combinations of CPUs, GPUs, memory, etc. So, please, start the crusade! I dare say: start a separate thread on it!

Meanwhile, as the world waits for your triumph in this matter, the pragmatics among us can use short-term fixes, such as buying faster GPUs or buying monitors that solve the same problem in a more fundamental way.
 
I for one find this tech interesting. It's actually surprisingly late (I mean, people have been talking about such things on forums for a while now, I should think... although similar to Mantle and lower-level access to GPUs, I guess :)). However, as for the specific implementation (at least what they showed): is it just me, or in this video http://www.youtube.com/watch?v=NffTOnZFdVs&feature=player_detailpage although there is no tearing on the G-sync monitor, is there a lot more jitter? If you look at the columns specifically, at least to me it seems as if they are kind of vibrating.
 
That's stutter:

60, 60, 60, 25, 26, 27, 25, 25.5, 60, 60, 60

That was in ms/frame, not fps. As long as you don't take more than 30ms/frame, it can sync the monitor and the GPU perfectly. So, to use your example, if the frame timings are 16, 16, 16, 25, 26, 27, 25.5, 16, 16, 16, it still works.
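Worked through with a quick script (Python, assuming a 60Hz panel for the v-synced case): every frame in the 25-27ms stretch becomes a 33.3ms frame under v-sync, while on the variable-refresh display each frame is shown for exactly as long as it took to render.

```python
# Worked through for the frame times quoted above (ms/frame): what a 60 Hz
# v-synced display would show for each frame versus a variable-refresh display.
import math

REFRESH_MS = 1000.0 / 60.0
frame_times = [16, 16, 16, 25, 26, 27, 25.5, 16, 16, 16]

for t in frame_times:
    vsynced = math.ceil(t / REFRESH_MS) * REFRESH_MS   # snaps to 16.7 or 33.3 ms
    print(f"rendered in {t:4.1f} ms -> v-sync shows it for {vsynced:4.1f} ms, "
          f"variable refresh for {t:4.1f} ms")
```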

Damn this thread is full of stupidity.
I would have thought you would understand the pretty elementary sampling rate argument from silent_guy.
 
Jawed, you win, I concede: game developers are lazy.

I'm still not sure what Jawed is proposing to be honest. Whatever it is, it sounds far more complicated, expensive, non-deterministic and app specific than nVidia's straightforward solution. Seems like a pointless argument.
 
Or is this just for lazy PC developers (or those encumbered by shit in the API)

Considering that Carmack seems to be very enthusiastic about this technology, and has been a big proponent of the software solutions you're describing, I'm not sure how you justify this statement. Mark Rein, Johan Andersson, John Carmack: all massively influential PC and console developers are behind this, so it seems a bit absurd to suggest it is merely a crutch for "lazy" PC developers.

and gamers with cheap cards that can't maintain 30 or 60 fps?

Because people that don't want to spend $400+ on a GPU don't matter?

That's stutter:

Putting aside what rpg said about it being frame time rather than frame rate, the general point still stands. Your frame rate can vary all over the place (staying over 30fps) with no noticeable stutter. No complex metering required, no performance left on the table. It's all done for you automatically in hardware. That's obviously a clear advantage.

I don't see a universal hardware solution here, do you?

Or are console games played on TVs irrelevant? And the games played on anything but a few models of NVidia GPU are irrelevant, too?

Why would Nvidia or Nvidia (Kepler) gamers care whether this solution applies to console or any other gamers? The solution is universal in that it applies to every game you currently or ever will own as long as you own the solution. If you don't own the solution, live with your stutter and be happy.

If you're gaming with Titan, then which games is this relevant to?

The vast majority that were released in the last 5 years if you want to game at 4K. Or just max settings, 1080p, good AA and 3D. Or any other combination of image quality/resolution that prevents you maintaining 60fps 100% of the time.
 
Wrt triple buffering, the way I understand it:

in double buffering with vsync, if you can keep up with 60fps, the game will be rendered at 60fps and the game engine's internal simulation timer will be locked to it -> smoothness, but a guaranteed 16ms lag: even if your GPU is done in 1ms, you'll need to wait 15ms for the next refresh to start.

With triple buffering: the lag will be less, but the simulation time stamps can be spread all over the 16ms of the frame -> lower lag, but less smooth, since the delay between internal time stamp and pixel visible on screen is now variable instead of fixed.

At least in theory: according to many reports on the web, triple buffering led to increased lag in many cases, which is something I don't understand.
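The way I picture it, with some toy numbers (Python; a 60Hz display, 4ms of GPU work per frame, and the simulation sampled right before rendering, all of which are assumptions for illustration):

```python
# Toy model of the lag difference described above. Assumptions, not
# measurements: 60 Hz display, 4 ms of GPU work per frame, simulation
# timestamp taken right before rendering each frame.
REFRESH_MS = 1000.0 / 60.0
RENDER_MS = 4.0
refreshes = [REFRESH_MS * n for n in range(1, 5)]

# Double buffering + v-sync at a solid 60 fps: one frame per refresh, and the
# frame shown at each refresh was simulated just after the previous refresh,
# so the sim-to-screen lag is fixed at roughly one full refresh interval.
for r in refreshes:
    sim_time = r - REFRESH_MS
    print(f"double buffered: refresh at {r:5.1f} ms shows sim time {sim_time:5.1f} ms "
          f"(lag {r - sim_time:4.1f} ms)")

# Triple buffering in the "render as fast as possible, show the newest
# complete frame" sense: frames finish every 4 ms, so the lag is lower but
# varies with where the newest frame happened to land relative to the refresh.
finish_times = [RENDER_MS * k for k in range(1, 40)]
for r in refreshes:
    newest = max(f for f in finish_times if f <= r)
    sim_time = newest - RENDER_MS
    print(f"triple buffered: refresh at {r:5.1f} ms shows sim time {sim_time:5.1f} ms "
          f"(lag {r - sim_time:4.1f} ms)")
```

Under those assumptions, the double-buffered lag is a fixed ~16.7ms while the "true" triple-buffered lag is lower but wanders from frame to frame, which matches the fixed-but-smooth vs. lower-but-variable description above. One common explanation for the increased-lag reports is that what games and drivers often call triple buffering is really a three-deep render-ahead queue that presents every frame in order instead of discarding stale ones, though that's outside this toy model.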
 
The beauty is you never have to worry about the topic ever again. Currently, I have normal desktop usage where I want vsync, casual games where I want vsync (depending on performance level), and competitive games where I don't want it.

Never having to concern myself with this and always getting perfect, low-latency frames would be wonderful.

My only concern is cost... hopefully that will come down as soon as it scales.
 
The vast majority that were released in the last 5 years if you want to game at 4K. Or just max settings, 1080p, good AA and 3D. Or any other combination of image quality/resolution that prevents you maintaining 60fps 100% of the time.

Or you want to display more than 60fps. A game can be considered about perfect if you have a 100fps average framerate, with dips into the 60s. A decade ago, that was cheap gaming: low-end/mid-range hardware would give you these rates in the popular multiplayer games (Quake 1/2/3, UT, HL1 mods); low-end displays had an 85Hz refresh rate.

Hell, I would love to have a try at Quake 3 on a 144Hz LCD, run at 1920x1080, 16:9 aspect ratio, with the highest AA/AF forced in the drivers.
The last time I played Quake 3, it was at 800x600 to get the high refresh (120Hz), with 8xS AA / 16x AF. You can actually beat the final boss this way.
 
I eventually ran CS at 100Hz with vsync-on double buffering: if 100fps is maintained at all times, this gives you refresh heaven. Extremely fast/smooth, low latency and tearless.
In fact, if you want a taste of G-sync, all you have to do is run an old game (like the aforementioned Quake 3) that does not go below 100, 120 or 144fps on a 100/120/144Hz LCD.

Now, G-Sync is interesting, but it doesn't have to be a high-end feature. Many people are playing Valve games, League of Legends, Starcraft II, old stuff. I want to see it on a $100 graphics card and a $200 monitor.
 
Well, I'm bemused, people apparently want to spend money on getting one kind of stuttery, sub-60 fps framerate that's different from another kind of stuttery sub-60fps framerate they already have. Go ahead...
 
Well, I'm bemused, people apparently want to spend money on getting one kind of stuttery, sub-60 fps framerate that's different from another kind of stuttery sub-60fps framerate they already have. Go ahead...

Huh? All of the previews have been very clear on this point - stuttering is gone. I don't understand your argument?
 
Well, I'm bemused, people apparently want to spend money on getting one kind of stuttery, sub-60 fps framerate that's different from another kind of stuttery sub-60fps framerate they already have. Go ahead...
I don't know what people want, but I'd love a solution that makes horrible CRT legacy band-aids unnecessary and solves the problem where it can be handled best.
All while providing stutter-free sub-60 and above-60 framerates.

(I suppose something even better would be to leave the TV/monitor to compose different display panes, like combining video feeds and UI with optimal scaling, de-interlacing and post-processing.)
 