AMD demonstrates FreeSync, G-Sync equivalent?

Considering that current implementations of G-Sync require 768MB of DRAM, I'd say G-Sync is as good as dead now even if FreeSync is substantially inferior.
 

There is a VESA standard for variable VBLANK, which AMD is using for Freesync.
Wouldn't Nvidia have been aware of this while conceiving of or developing G-Sync?
If it was aware, what's the catch that would make Nvidia aim for custom hardware and larger on-board memory?
 
This is great; my question now is: what monitors support the VBLANK standard?
Nvidia will need to prove G-Sync's worth now (if FreeSync is comparable).
 

I could be wrong, but VBLANK is still used as a standard for LCDs (it dates back to CRT times, but has been kept on LCD monitors).
 

To be more precise, it needs to support variable VBLANK, not just VBLANK (at least that's how I understood it).
 
Damn... while I really like the idea of variable sync, and something finally being done about this long-standing problem, the G-Sync implementation of it is just unnecessarily expensive and generally wrong in so many ways.

But I could imagine it only works out of the box (i.e. on existing devices) on laptops and other integrated (and power-saving-aware) displays, not through HDMI/DisplayPort.
 
There is a VESA standard for variable VBLANK, which AMD is using for Freesync.
Wouldn't Nvidia have been aware of this while conceiving of or developing G-Sync?
If it was aware, what's the catch that would make Nvidia aim for custom hardware and larger on-board memory?

I think they do. The VESA standard that deals with variable frame refresh is Embedded DisplayPort (eDP), but according to its spec, eDP "was developed to be used specifically in embedded display applications." Another VESA standard, Direct Drive Monitor, is said to be able to work around this eDP limitation. But whether the combination of both standards is actually feasible, or whether it would end up any different than Nvidia's G-Sync, is still a big question at this point. Competition is always welcome, though.

Nvidia's G-Sync is more likely an evolution of genlock technology. They even had a genlock add-in board under the GSync brand at some point, way back in the FX 4000 days.
 
If it was aware, what's the catch that would make Nvidia aim for custom hardware and larger on-board memory?
Either to circumvent some patent bobbing around out there, or else just to lock people into a vendor-specific piece of tech that NV can overcharge for to buff revenue.

Like I have always said with all of NV's proprietary shit, we can do better without that crap. Proprietary only ever leads to headache in the PC space, ever. It's always been universally true. Always.

(Now someone will drag up some successful examples of proprietary to thump me in the head - lol. Windows itself would be a prime example I suppose! :D)
 
Apparently it is not like G-Sync at all, as it requires VSync to be active, which means additional lag and the potential for cutting the frame rate in half.

Also, in AMD's assessment, NVIDIA is doing it with a combination of variable refresh rate and triple buffering, which is not accurate, as that would add even more lag. The AMD representative had another theory later on, but the whole thing doesn't appear to be well studied by AMD yet.



http://techreport.com/news/25867/amd-could-counter-nvidia-g-sync-with-simpler-free-sync-tech
 
Triple buffering doesn't have to add any lag at all if you only use the third buffer when you miss a vsync (and in the event of a miss you actually lower lag compared to double buffering with vsync on :)).
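In code, roughly (a toy sketch of the buffer juggling, not any real driver's swapchain logic; all names are made up):

[code]
/* "Lazy" triple buffering: behave like vsync'd double buffering, but
 * when a finished frame is already queued for the next vsync, swap
 * the queued frame for the newer one instead of stalling the renderer. */
typedef struct {
    int scanned_out;   /* buffer currently driving the panel          */
    int pending_flip;  /* finished frame waiting for vsync; -1 = none */
    int free_buf[2];   /* free list; the third buffer lives here      */
    int n_free;
} swap_chain;

/* Renderer asks for a buffer to draw into; -1 = none free, must wait. */
int acquire(swap_chain *sc)
{
    return sc->n_free > 0 ? sc->free_buf[--sc->n_free] : -1;
}

/* Renderer finished drawing into buf. */
void frame_done(swap_chain *sc, int buf)
{
    if (sc->pending_flip >= 0)
        /* A frame is already queued: retire the stale one and queue
         * the fresh one instead -> lower lag than blocking on vsync. */
        sc->free_buf[sc->n_free++] = sc->pending_flip;
    sc->pending_flip = buf;
}

/* Vsync interrupt: flip if a frame is queued, else rescan the old one. */
void vsync(swap_chain *sc)
{
    if (sc->pending_flip >= 0) {
        sc->free_buf[sc->n_free++] = sc->scanned_out;
        sc->scanned_out  = sc->pending_flip;
        sc->pending_flip = -1;
    }
}
[/code]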
 
Apparently it is not like G-Sync at all, as it requires VSync to be active, which means additional lag and the potential for cutting the frame rate in half.
With G-Sync the display update is of course also synced to the buffer flip. That's the whole point (to sync the display to the buffer flip and to have variable spacing between display updates to do that) ;). So no, that would be no difference.

Edit:
And the "triple buffering" pertains to the external GSYNC board (which has and FPGA and 768MB of memory for some reason). I think his theory was that nV GPUs don't support the variable refresh rate out of the box and need that external board to construct the input data for the panel (for which one or even more frames get stored in that onboard memory). I have actually no better idea why nV needs such an expensive board.
 
Isn't the "third buffer" in nvidias solution just at the monitor end of the display cable? At least from scott's description it sounds like the needed backup buffer for the case where the panel demands a refresh due to fading.
And if you don't want tearing you need to have vsync active one way or another - ie at some point you choose to start a (panel) refresh, and then you have to wait for that to finish before starting the next one, aka vsync. But as long as you can delay the next refresh you're not "cutting the number of frames in half", just because you missed the start-refresh window by a ms.
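To put rough numbers on that (toy arithmetic only; assumes a panel whose scanout runs at 144 Hz, compared against a classic fixed 60 Hz vsync slot):

[code]
#include <stdio.h>

int main(void)
{
    const double scanout_ms = 1000.0 / 144.0; /* one panel refresh, ~6.9 ms       */
    const double slot_ms    = 1000.0 / 60.0;  /* fixed 60 Hz vsync slot, ~16.7 ms */
    const double miss_ms    = 1.0;            /* frame finishes 1 ms late         */

    /* Fixed refresh + vsync: the late frame waits for the next whole slot. */
    printf("fixed vsync wait after a 1 ms miss: %.1f ms\n", slot_ms - miss_ms);

    /* Variable refresh: worst case a panel refresh just started, so the
     * frame waits at most one scanout period, then gets its own refresh. */
    printf("variable refresh worst-case wait:   %.1f ms\n", scanout_ms);
    return 0;
}
[/code]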
 
Isn't the "third buffer" in nvidias solution just at the monitor end of the display cable? At least from scott's description it sounds like the needed backup buffer for the case where the panel demands a refresh due to fading.
In that case you could simply send the same buffer from the GPU again. It's basically just the maximum vblank period the panel supports (which it can report to the GPU/driver). One doesn't need an additional buffer in the monitor for that.
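On the driver side that could look something like this (pure illustration; every name here is hypothetical, and the timeout would be whatever maximum vblank period the panel reports):

[code]
/* Sketch: if no new frame is flipped before the panel's maximum vblank
 * period expires, retransmit the front buffer so the panel doesn't fade.
 * No buffer in the monitor required. All names are hypothetical. */
typedef struct gpu gpu_t;                           /* opaque GPU handle   */
int  wait_for_flip(gpu_t *g, double timeout_ms);    /* 1 = new frame ready */
void transmit_frame(gpu_t *g, int use_back_buffer); /* scan out a buffer   */
void swap_buffers(gpu_t *g);                        /* back <-> front      */

void scanout_loop(gpu_t *gpu, double max_vblank_ms)
{
    for (;;) {
        if (wait_for_flip(gpu, max_vblank_ms)) {
            transmit_frame(gpu, 1);  /* new frame: send the back buffer  */
            swap_buffers(gpu);
        } else {
            transmit_frame(gpu, 0);  /* timeout: resend the front buffer */
        }
    }
}
[/code]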
 
But what is the large buffer in the G-Sync module doing then? Receiving the frame before the panel refresh? That also seems unnecessary (unless the GPU's display output isn't capable of the variable refresh, which would seem strange).

But yes, you're right that two buffers should be enough for tear-free operation and optimal sync. Three buffers would allow starting on the next frame earlier (i.e. without waiting on an ongoing refresh to finish), but unless the minimum refresh rate is close to the maximum (like 30 Hz), it would not be of much benefit.

Any idea of the minimum refresh rate for the panels?
 
In that case you could simply send the same buffer from the GPU again. It's basically just the maximum vblank period the panel supports (which it can report to the GPU/driver). One doesn't need an additional buffer in the monitor for that.

You can't, because flipping can happen at any time during image transfer, and you'd end up with tearing. The idea of G-Sync is that it is the card that should be driving the monitor, not the other way around.


Edit:
Also, what is the point of demoing variable frame rate using a scene that does not change its content at all (thus producing a constant fps)? It does not look like AMD is really proving anything here.

Say the demo runs at a constant 20 fps, with laptop A set to refresh at 30 Hz and laptop B set to refresh at 60 Hz; then the latter will be smoother without any need for variable refresh rate.
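Quick cadence arithmetic to illustrate the point (assumes perfectly even 20 fps frame delivery):

[code]
#include <stdio.h>

int main(void)
{
    const int fps     = 20;          /* the demo's constant frame rate */
    const int rate[2] = { 30, 60 };  /* laptop A and laptop B          */

    for (int i = 0; i < 2; i++) {
        printf("%d Hz: %.1f refreshes per frame -> %s\n",
               rate[i], (double)rate[i] / fps,
               rate[i] % fps == 0 ? "even cadence (smooth)"
                                  : "uneven cadence (judder)");
    }
    return 0;
}
[/code]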
 
Psycho said:
But what is the large buffer in the G-Sync module doing then? Receiving the frame before the panel refresh? That also seems unnecessary (unless the GPU's display output isn't capable of the variable refresh, which would seem strange).
Exactly what I said, and also the theory of that AMD guy as I understood it. I don't grasp what NV does with an expensive FPGA and 768MB of RAM there. It would be completely unnecessary just for G-Sync/variable vblank if the GPU supported a variable vblank output directly.
You can't, because flipping can happen at any time during image transfer, and you'd end up with tearing.
In that case you simply delay the buffer flip, just as with vsync. The same happens with G-Sync above the maximum refresh rate of the panel, btw (this happens when a frametime is shorter than the minimum refresh interval of the panel, so above let's say 120 or 144 Hz; otherwise the transfer of the frame is already complete before the next buffer flip). And if you don't like that minimal additional delay (1/144 s ≈ 7 ms for the retransfer of the frame; it won't cut the framerate in half ;)) when dropping below 30 Hz or 24 Hz framerates (whatever the panel can do as a minimum), just go triple buffer. But you don't want to go into that low-framerate territory anyway. And ask yourself what G-Sync does if the framerate drops below 30 Hz (the stated minimum): it has exactly the same problem, in that it needs to refresh the panel and can't accept a new frame during that time.
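Putting numbers on that below-minimum case (toy arithmetic; assumes a 30 Hz panel minimum and a 144 Hz retransfer, as above):

[code]
#include <stdio.h>

int main(void)
{
    const double min_hz     = 30.0;   /* panel must refresh at least this often  */
    const double scanout_hz = 144.0;  /* retransfer runs at the panel's max rate */

    const double max_vblank_ms = 1000.0 / min_hz;     /* ~33.3 ms */
    const double retransfer_ms = 1000.0 / scanout_hz; /* ~6.9 ms  */

    printf("panel can wait up to %.1f ms between refreshes\n", max_vblank_ms);
    /* Worst case: a new frame arrives just as a forced self-refresh
     * started, so it waits for that retransfer to finish.            */
    printf("worst-case extra delay for the frame: %.1f ms "
           "(a few ms, not a halved framerate)\n", retransfer_ms);
    return 0;
}
[/code]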
 
In that case you simply delay the buffer flip, just as with vsync. [...] And ask yourself what G-Sync does if the framerate drops below 30 Hz (the stated minimum): it has exactly the same problem, in that it needs to refresh the panel and can't accept a new frame during that time.

OK, scrap what I said. Now I think it's actually simpler than that. You can't do that, because in the case of G-Sync the video card only keeps the back buffer. And there can't be any VSync-induced delay, because that is one of the main selling points of G-Sync: to eliminate VSync input lag. And when the frame rate is higher than the max panel refresh rate, the latest frame image is stored in the G-Sync memory module. No delay necessary.

I don't see how zero tearing and zero VSync delay are simultaneously achievable in any other way.
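Sketched in code, that speculation would look something like this on the module side (pure guesswork; all names are hypothetical):

[code]
#include <string.h>

/* Guesswork sketch: the G-Sync module keeps the two most recent frames
 * in its local RAM; the panel is always refreshed from the newest
 * complete copy, so the GPU can push frames at any rate without
 * tearing and without a VSync wait. All names are hypothetical. */
typedef struct panel panel_t;                              /* opaque */
void drive_panel_scanout(panel_t *p, const unsigned char *frame);

typedef struct {
    unsigned char *slot[2];  /* two frame copies in module RAM     */
    int            latest;   /* index of the newest complete frame */
} gsync_module;

/* GPU side: may run faster than the panel's maximum refresh rate. */
void on_frame_received(gsync_module *m, const unsigned char *frame,
                       unsigned size)
{
    int next = 1 - m->latest;           /* write the slot not being scanned */
    memcpy(m->slot[next], frame, size);
    m->latest = next;                   /* the newest frame wins            */
}

/* Panel side: clocked between the panel's min and max refresh rates. */
void refresh_tick(gsync_module *m, panel_t *panel)
{
    drive_panel_scanout(panel, m->slot[m->latest]);
}
[/code]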
 