V-Sync vs. FPS (revisited)

Chalnoth said:
Er, you get tearing all the time, no matter what your framerate or refresh rate are, if you are running with vsync disabled.
Agreed, and tearing SUCKS!!!!!

I don't think I play any games with v-sync disabled; I always force it on through the control panel if the game doesn't have an option.

More D3D games need to support triple buffering, although Ray Adams' ATI Tray Tools does provide a great work-around. :)
 
Chalnoth said:
I wouldn't say so. Because it wouldn't have been purchased as such.
Well, it's down to semantics, but I personally wouldn't refer to a Voodoo 2 as a high-end card simply because at one time it was cutting edge, but there you go :)
 
Diplo said:
Well, it's down to semantics, but I personally wouldn't refer to a Voodoo 2 as a high-end card simply because at one time it was cutting edge, but there you go :)
No, I'd refer to it as a well-aged high-end card :)
 
Varg Vikernes said:
Now for the tricky part. There is no such thing as vsync in OpenGL. As I don't know much about OGL I found this comment from the lead programmer at Raven Software (SoF, Q4):
<snipped>
I beg to differ! There is VSync in OpenGL. I don't know about your lead programmer friend, how old that quote is, or what context it came from, but I know that there is VSync in OpenGL because I have used it millions of times.
And glFlush/glFinish have nothing at all to do with VSync. They're definitely not usable as a replacement. That suggestion is just insane.
 
I used to think VSync was cool on my 9800XT when I could run everything 1280x960 4xAA 16xAF 100fps..

Nowadays, when I get only 40-50fps with 0xAA 8xAF, enabling VSync REALLY hurts performance.

No D3D Triple Buffering in Catalyst makes my troubles even worse. I just had to get used to tearing.. Hoping for ATI to provide the option to enable triple buffering for D3D as well as OGL so I can enable it once again.
 
There's WGL_EXT_swap_control and GLX_SGI_swap_control. They specify the minimum number of refresh cycles a frame is shown for, and default to 1, which means vsync on. Curiously, the GLX version doesn't let you disable vsync, as it doesn't accept 0.
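For anyone who wants to try it, here's a minimal sketch of the Windows/WGL path. It assumes a current OpenGL context and a driver that actually exposes WGL_EXT_swap_control; wglSwapIntervalEXT is the entry point defined by that extension, while the SetVSync helper is just an illustrative name. The GLX path works the same way through glXSwapIntervalSGI, except that, as noted, it rejects an interval of 0.

```cpp
// Sketch only: toggle vsync at runtime through WGL_EXT_swap_control.
#include <windows.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

bool SetVSync(bool enabled)
{
    // The function pointer must be queried at runtime; it is only valid
    // while an OpenGL context is current and the driver exports the extension.
    PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
        (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
    if (!wglSwapIntervalEXT)
        return false;   // extension not available

    // interval 1 = wait for one vertical retrace per swap (vsync on),
    // interval 0 = swap immediately (vsync off, tearing possible).
    return wglSwapIntervalEXT(enabled ? 1 : 0) != FALSE;
}
```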
 
tahrikmili said:
I used to think VSync was cool on my 9800XT when I could run everything 1280x960 4xAA 16xAF 100fps..

Nowadays, when I get only 40-50fps with 0xAA 8xAF, enabling VSync REALLY hurts performance.

No D3D Triple Buffering in Catalyst makes my troubles even worse. I just had to get used to tearing.. Hoping for ATI to provide the option to enable triple buffering for D3D as well as OGL so I can enable it once again.

To be honest, that probably won't ever happen in the Catalyst or Detonator suites. Unlike OpenGL, I believe the number of back buffers in Direct3D has to be specified by the game or application at startup; in OGL it's the driver's call. This is why ATi - and now nVidia, as of a few releases ago - include triple buffering in OpenGL as an option. There are third-party programs which let you force triple buffering in D3D; they essentially override the startup parameter(s) to allow triple buffering. ATI Tray Tools and DXTweaker, from one of our forum members whose name I've suddenly forgotten (Demirug, I think?), come to mind. I believe natively including this option would prevent WHQL certification for ATi or nV. Don't quote me on the WHQL statement or the author of the DXTweaker tool, though. :)
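To put the "specified at startup" part concretely: in Direct3D 9 the back buffer count is a field of D3DPRESENT_PARAMETERS handed to CreateDevice, which is exactly what the driver can't change after the fact. A rough sketch follows; the helper name and the resolution/format values are just placeholders.

```cpp
// Sketch: the application asks for 2 back buffers (+ front buffer = triple
// buffering) when it creates the D3D9 device. Error handling omitted.
#include <d3d9.h>

IDirect3DDevice9* CreateTripleBufferedDevice(IDirect3D9* d3d, HWND hwnd)
{
    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed             = FALSE;
    pp.hDeviceWindow        = hwnd;
    pp.BackBufferWidth      = 1600;
    pp.BackBufferHeight     = 1200;
    pp.BackBufferFormat     = D3DFMT_X8R8G8B8;
    pp.BackBufferCount      = 2;                        // 2 back buffers + front
    pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;  // vsync on

    IDirect3DDevice9* device = nullptr;
    d3d->CreateDevice(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, hwnd,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &device);
    return device;   // nullptr if creation failed
}
```

Tools like DXTweaker work by intercepting these parameters before the device is created, which is why they can force a buffer count the game never asked for.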
 
SmuvMoney said:
This is why ATi - and now nVidia, as of a few releases ago - include triple buffering in OpenGL as an option.
Just note that nVidia doesn't list triple buffering as an OpenGL-only option, but as a global driver option. I haven't tested it, however. Maybe I'll do it tonight.
 
I have; at least for me it did nothing in D3D games. Which makes sense, as from what I understand the application determines how many back buffers are to be used in D3D, so the driver doesn't get any say in that.
 
Chalnoth said:
Just note that nVidia doesn't list triple buffering as an OpenGL-only option, but as a global driver option. I haven't tested it, however. Maybe I'll do it tonight.

Chalnoth, the thing you said about the extra memory footprint for triple buffering: is this something to consider more on cards with 128MB of memory? How much more memory/bandwidth does it use compared to when you uncheck this feature in the ForceWare drivers? Is it significant?
 
Well, first of all, the memory bandwidth hit is basically zero (provided the hardware can make use of buffer swapping).

The extra storage for the frame that is done rendering, but is waiting for the next refresh cycle, doesn't require any memory bandwidth (it's just sitting there).

So, the only cost is in memory space. And if the hardware does the downsampling at buffer swap (i.e. so that the front and middle buffers are both downsampled), then, for example, a 1600x1200x32 buffer would only cost about 7.5MB. So the extra memory hit isn't that big any longer, either.
 
Chalnoth said:
Well, first of all, the memory bandwidth hit is basically zero (provided the hardware can make use of buffer swapping).

The extra storage for the frame that is done rendering, but is waiting for the next refresh cycle, doesn't require any memory bandwidth (it's just sitting there).

So, the only cost is in memory space. And if the hardware does the downsampling at buffer swap (i.e. so that the front and middle buffers are both downsampled), then, for example, a 1600x1200x32 buffer would only cost about 7.5MB. So the extra memory hit isn't that big any longer, either.

If we take 1024x768 with 4xAA, for example? And is this compared to "normal" double buffering, i.e. 2 vs. 3 buffers?
 
Well, AA makes no difference, provided the downsampling is done at buffer swap. You can do the calculation for 1024x768 yourself. It's just 1024x768x32 / 8 (dividing by 8 to convert bits to bytes).

And yes, this is the difference from double buffering. It's obvious, really: it's just one extra buffer. So it's just the number of pixels (e.g. 1024x768) times the bytes per pixel (4 bytes in the case of 32-bit color).
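Plugging the numbers in (assuming the resolve-at-swap case, so the extra buffer is a plain, non-multisampled colour buffer), here's a quick back-of-the-envelope check in code form:

```cpp
// One extra 32-bit colour buffer at display resolution.
#include <cstdio>

int main()
{
    const double bytesPerPixel = 4.0;           // 32-bit colour
    const double mib = 1024.0 * 1024.0;         // MiB

    std::printf("1024x768:  %.1f MB\n", 1024 * 768  * bytesPerPixel / mib); // ~3.0 MB
    std::printf("1600x1200: %.1f MB\n", 1600 * 1200 * bytesPerPixel / mib); // ~7.3 MB, roughly the figure quoted above
    return 0;
}
```

So on a 128MB card the triple-buffering hit is a few MB of extra video memory, not bandwidth.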
 
Chalnoth said:
Just note that nVidia doesn't list triple buffering as an OpenGL-only option, but as a global driver option. I haven't tested it, however. Maybe I'll do it tonight.

Does the help or mouseover tooltip for that option specify whether or not it is for OpenGL only?
 