AMD demonstrates Freesync, G-sync equivalent?

There is a VESA standard for variable VBLANK, which AMD is using for Freesync.
Wouldn't Nvidia have been aware of this while conceiving of or developing G-Sync?
If it was aware, what's the catch that would make Nvidia aim for custom hardware and larger on-board memory?

Indeed. While competition is nice, this seems far from a commercial product at this stage (or even something ready to be released to review sites for testing), so it's far too early to start declaring G-Sync dead in the water.

I'll wait until we have some reliable 3rd party comparisons before getting too excited. Or at least a more detailed technical understanding of how both options work.

That said, even if Freesync is a much inferior solution, if it's free and available to people without Nvidia GPUs then it's certainly eating into the value-add of G-Sync.

Interesting that the consoles may also be able to benefit from it too - at least in theory. I expect we are a long way from that option materializing, though (if it ever does).
 
With GSYNC the display update is of course also synced to the buffer flip. That's the whole point (to sync the display to the buffer flip and to have a variable spacing between display updates to do that) ;). So no, that would be no difference.
Pardon me, but I am not convinced they are similar at all.

And that AMD keeps coming up with different theories doesn't paint them in a good light; they don't seem to have a solid grasp on the situation.

Varying the refresh rate at different intervals creates many problems for current LCD implementations: color fidelity, gamma, etc. So it is not as easy as flipping a switch, which is also the reason G-Sync repeats the previous frame beyond the 33.3 ms interval delay limit.

The triple buffering thing doesn't seem convincing either, mainly because we still don't have a solid implementation in DX.

NVIDIA uses 3 memory modules with a total size of 768MB to maximise memory bandwidth at the display end.
 
I don't see how this zero tearing and zero VSync delay is simultaneously achievable in any other way.

vsync input lag? what is that? ;)

If we render slower than 16.7 ms (and faster than the minimum required by the panel), the GPU will be active all the time, we will delay the refresh for every frame, and thus also present each frame as soon as it's ready. This can't be done any faster or more consistently.

If we render a lot faster, say 6 ms, you could, as you seem to assume, start the rendering when you can do the swap and then delay the presentation of the rendered frame for 10 ms; that would introduce some latency, yes. But you could also delay the rendering by the 10 ms and then be done just about at the right time, and if the frame takes slightly longer, no problem: there isn't a window to miss, we just present the frame half a ms too late.
We could also, as you seem to assume is the case for gsync, just render at full speed, i.e. start a new frame at 0, 6, 12 ms and then use the most recent one (the one from 6 ms!) when the panel is ready. This would be better at catching frame spikes (resulting in the 0 ms frame being used when ready, instead of having waited 10 ms before even starting on it), but worse for the general smooth frame variation (coming from changes in view direction etc.).
But this could just as well be done with a 3rd buffer in video memory; still no need for the gsync module. Actually it's better to keep the buffer on the card, instead of using the comparably slow display link cable.
Also please remember that the consistency of the game simulation time (the time when we start to render the frame to be presented) is just as important as refresh rate consistency.
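To make the "delay the rendering" option above concrete, here is a minimal sketch (toy numbers and names of my own, nothing from an actual driver) of a pacing policy that starts each frame late enough that it completes roughly when the panel is next allowed to refresh:

```python
# Toy model of the "delay the rendering" pacing policy described above.
# All numbers are illustrative assumptions, not measured values.

MIN_REFRESH_INTERVAL_MS = 1000.0 / 144  # assumed panel max refresh rate (144 Hz)
RENDER_TIME_MS = 6.0                    # assumed typical frame render time
SAFETY_MARGIN_MS = 1.0                  # slack in case the frame runs long

def next_render_start(last_scanout_ms: float) -> float:
    """Start rendering so the frame completes right when the panel may refresh again."""
    earliest_scanout = last_scanout_ms + MIN_REFRESH_INTERVAL_MS
    return max(last_scanout_ms, earliest_scanout - RENDER_TIME_MS - SAFETY_MARGIN_MS)

# Simulate a few frames; a frame that runs long is simply presented a bit late,
# since there is no fixed vsync window to miss.
scanout = 0.0
for frame, render_ms in enumerate([6.0, 6.0, 8.5, 6.0]):
    start = next_render_start(scanout)
    done = start + render_ms
    scanout = max(done, scanout + MIN_REFRESH_INTERVAL_MS)
    print(f"frame {frame}: start {start:5.1f} ms, done {done:5.1f} ms, scanout {scanout:5.1f} ms")
```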

NVIDIA uses 3 memory modules with a total size of 768MB to maximise memory bandwidth at the display end.
Maximise it for what? It only needs to write at the speed of DL-DVI or DisplayPort, and read at the speed of the panel refresh (1080p × 144 Hz ≈ 1 GB/s).
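For reference, the arithmetic behind that ~1 GB/s figure (my own restatement, assuming 24-bit colour and ignoring blanking overhead):

```python
# Rough read bandwidth for scanning out a 1080p frame 144 times per second
# (assumes 3 bytes per pixel and ignores blanking/protocol overhead).
width, height, bytes_per_pixel, refresh_hz = 1920, 1080, 3, 144
bytes_per_second = width * height * bytes_per_pixel * refresh_hz
print(f"{bytes_per_second / 1e9:.2f} GB/s")  # ~0.90 GB/s
```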
 
And that AMD keeps coming up with different theories doesn't paint them in a good light; they don't seem to have a solid grasp on the situation.
What has been demoed generates the same effects for the end user. Any theorising is just wonderment at why G-Sync goes to the expense of adding this module when the end result can be produced without it (if implemented in the GPU).
 
We could also, as you seem to assume is the case for gsync, just render at full speed, i.e. start a new frame at 0, 6, 12 ms and then use the most recent one (the one from 6 ms!) when the panel is ready.

This is the source of the difference in our understanding of how G-Sync works. On a fixed refresh rate system, the VBLANK signal only happens when the display begins scanning the front buffer. But I am not aware of any way for the display to tell the video card that it has finished refreshing. So do you know for a fact that this is part of the DP spec, or do you just need us to make this assumption in order to make your theory acceptable?
 
I think the DDR in the gsync module is there to act as an additional framebuffer so that the video card is essentially decoupled from the monitor's refresh rate altogether.
 
Ok, scrap what I said. Now I think it's actually simpler than that. You can't do that because, in the case of G-Sync, the video card only keeps the back buffer.
So double buffering. Fine, no problem.
And there can't be any VSync-induced delay, because that is one of the main selling points of G-Sync: to eliminate VSync input lag.
Above the max refresh rate of the panel, GSYNC behaves the same as a vsynced display. Very plainly, GSYNC just reverses the synchronization: the scanout is not synchronized to the fixed refresh of the panel, but the refresh of the panel is synchronized to the scanout (within the limits of the panel), which starts only after a frame is completed. But it is still always synchronized.
And when the frame rate is higher than max panel refresh rate; the latest frame image is stored in the G-Sync memory module. No delay necessary.
It shouldn't matter too much, where this is stored. ;)
I don't see how this zero tearing and zero VSync delay is simultaneously achievable in any other way.
Very simple: add a variable vblank time (from zero up to a panel-specific maximum defining the minimum refresh rate) after the transfer of a frame, and only start transmitting the next frame after the buffer flip (completion of the next rendered frame) on the GPU. If the maximum vblank time expires, transfer the old frame again (and optionally prevent the buffer flip during the transfer to avoid tearing; this adds a minor delay, but only when dropping below 30 or 24 Hz [30 Hz => 25 Hz, 24 Hz => 20.5 Hz, maybe even less when transferring at the highest supported DP speeds]). Hitting the maximum refresh rate basically enables classic vsync, same as with gsync.
[edit]
I just saw that GSYNC suffers from this exact problem, as noted by pcper in their test of the Asus DIY kit (they see the 30 => 25 Hz jumping).
[/edit]
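A minimal sketch of the scheme described above, with made-up helper names and timing constants (nothing here is taken from an actual DP or panel implementation): transmit a frame as soon as the buffer flip happens and the minimum refresh interval has passed, otherwise retransmit the old frame when the panel's maximum vblank time runs out.

```python
import time

# Illustrative panel limits (assumed 144 Hz max and 30 Hz min refresh rate).
MIN_FRAME_INTERVAL = 1.0 / 144   # the panel can't refresh faster than this
MAX_VBLANK_TIMEOUT = 1.0 / 30    # the panel must be refreshed at least this often

def drive_display(poll_new_frame, transmit_frame):
    """Variable-vblank driving loop (both callables are hypothetical stand-ins)."""
    last_transmit = time.monotonic()
    last_frame = None
    pending = None
    while True:
        now = time.monotonic()
        if pending is None:
            pending = poll_new_frame()  # returns None while the GPU is still rendering
        if pending is not None and now - last_transmit >= MIN_FRAME_INTERVAL:
            # Buffer flip has happened and the panel may refresh again: send it now.
            transmit_frame(pending)
            last_frame, last_transmit, pending = pending, now, None
        elif last_frame is not None and now - last_transmit >= MAX_VBLANK_TIMEOUT:
            # Max vblank expired without a new frame: retransmit the old one so the
            # panel does not drop below its minimum refresh rate.
            transmit_frame(last_frame)
            last_transmit = now
```

At the maximum refresh rate this loop degenerates into classic vsync behaviour, matching the description above.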

==========================

Pardon me, but I am not convinced they are similar at all.
So, what's the difference in your opinion?
And that AMD keeps coming up with different theories doesn't paint them in a good light; they don't seem to have a solid grasp on the situation.
Do you have a solid idea of what nV is doing with that expensive FPGA and 768MB of RAM on a GSYNC module? It shouldn't be needed, in my opinion (I stated that already, right after the GSYNC presentation).
Varying the refresh rate at different intervals creates many problems for current LCD implementations: color fidelity, gamma, etc. So it is not as easy as flipping a switch, which is also the reason G-Sync repeats the previous frame beyond the 33.3 ms interval delay limit.
The panel driver electronics takes care of all of this stuff. You know, most panels support different refresh rates already. One "only" needs to add support (if it is not already there, as with more and more panels in mobile devices) for changing the refresh period with each refresh (instead of setting it for longer periods of time). Depending on the specific panel, this may be extremely simple (or not) and cheap (as it doesn't require additional chips or even FPGAs and large amounts of memory), but in any case it should be the task of the panel manufacturer. They know best what the panel can do and have to adapt the driving electronics to the panel anyway.
NVIDIA uses 3 memory modules with a total size of 768MB to maximise memory bandwidth at the display end.
Do you know what the 768MB is needed for in the first place?

========================

This is the source of the difference in our understanding of how G-Sync works. On a fixed refresh rate system, the VBLANK signal only happens when the display begins scanning the front buffer. But I am not aware of any way for the display to tell the video card that it has finished refreshing. So do you know for a fact that this is part of the DP spec, or do you just need us to make this assumption in order to make your theory acceptable?
The display doesn't need to tell the GPU that it has finished refreshing. It just needs to communicate the minimal and maximal allowed timings. This is basically done already. As long as the GPU adheres to this, it should work.
 
I think the DDR in the gsync module is there to act as an additional framebuffer so that the video card is essentially decoupled from the monitor's refresh rate altogether.
That only makes sense if the GPU doesn't support a variably timed output directly. That is basically what the AMD guy at CES suspected. In any case, it apparently results in quite a large waste of resources (~160mm² FPGA + 768MB RAM, >$100 in cost [FPGAs are relatively expensive; the $100 already assumes a huge rebate]).
 
This is the source of the difference in our understanding of how G-Sync works. On a fixed refresh rate system, the VBLANK signal only happens when the display begins scanning the front buffer. But I am not aware of any way for the display to tell the video card that it has finished refreshing. So do you know for a fact that this is part of the DP spec, or do you just need us to make this assumption in order to make your theory acceptable?
ohh.. by ready I just mean that 16ms (ie max refresh rate) has passed since the last vsync/startrefresh signal was sent, so we can send a new one. Dunno shit about what's in DP spec regarding this :)

Btw, how fast is the panel actually refreshing internally? Is the monitor already, in fact, buffering what's coming over hdmi? (Thinking about where the 30+ ms of input lag on some LCDs comes from...) Because then the point of the gsync module could be to replace this suboptimal (and synced at both ends) buffering.

I got a feeling that gsync is trying to fix both the varying fps issue and lcd input lag, while freesync is only fixing the former (and the gaming-oriented monitors usually do their own thing to minimize the latter).
 
Do you know what the 768MB is needed for in the first place?

I believe his point is that you need at least 3 modules to achieve the required bandwidth. The cheapest/smallest modules available are 256MB, so you end up with 3/4 GB, not because you need that amount of memory, but because you need at least 3 modules.

http://www.anandtech.com/show/7582/nvidia-gsync-review said:
The G-Sync board itself features an FPGA and 768MB of DDR3 memory. NVIDIA claims the on-board DRAM isn’t much greater than what you’d typically find on a scaler inside a display. The added DRAM is partially necessary to allow for more bandwidth to memory (additional physical DRAM devices). NVIDIA uses the memory for a number of things, one of which is to store the previous frame so that it can be compared to the incoming frame for overdrive calculations
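On the overdrive point in that quote, a toy illustration of why the previous frame has to be stored somewhere (the actual per-panel overdrive tables are proprietary; the formula below is invented for the example):

```python
# Toy per-pixel overdrive: the drive value depends on both where the pixel is coming
# from (previous frame) and where it should go (incoming frame), which is why the
# previous frame has to be kept around. The boost formula is made up.
def overdrive(prev_level: int, target_level: int) -> int:
    # Overshoot the target in proportion to the size of the transition so the slow
    # liquid crystal settles near the right value within one refresh.
    boost = (target_level - prev_level) // 4
    return max(0, min(255, target_level + boost))

print(overdrive(0, 128))    # large rise -> driven well above 128
print(overdrive(120, 128))  # small rise -> barely boosted
```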

The modules are "h5tc2g63ffr 11c 334a" 128Mx16 DDR3L, normal consumption, commercial temp range: http://www.skhynix.com/products/consumer/view.jsp?info.ramKind=19&info.serialNo=H5TC2G63FFR

I can't decipher the speed grade from the PDF, but assuming a 6-byte 1600 MHz bus that would be around ~10 GByte/s. You need to potentially read a frame while writing the next frame, so half that bandwidth; the max Hz is around 150, so you have around 30 MBytes/frame to play with, or ~10 Mpixels at 3 bytes/pixel.
If they went for the shittiest modules and ran them at DDR3-667 to save power and 5 cents, you'd cut that to a third, an uncomfortable ~3 Mpixels.
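The back-of-the-envelope arithmetic behind those numbers, restated (using the bus width, data rate and refresh ceiling assumed above):

```python
# Back-of-the-envelope check of the bandwidth estimate above
# (assumes a 6-byte-wide bus at 1600 MT/s, 150 Hz max refresh, 3 bytes per pixel).
bus_bytes, transfers_per_s = 6, 1600e6
peak_bw = bus_bytes * transfers_per_s        # ~9.6 GB/s peak
usable_bw = peak_bw / 2                      # read one frame while writing the next
per_frame = usable_bw / 150                  # bytes available per refresh at 150 Hz
print(f"peak {peak_bw / 1e9:.1f} GB/s, ~{per_frame / 1e6:.0f} MB/frame, "
      f"~{per_frame / 3 / 1e6:.0f} Mpixels at 3 bytes/pixel")
```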
 
The display doesn't need to tell the GPU that it has finished refreshing. It just needs to communicate the minimal and maximal allowed timings. This is basically done already. As long as the GPU adheres to this, it should work.

ohh.. by ready I just mean that 16ms (ie max refresh rate) has passed since the last vsync/startrefresh signal was sent, so we can send a new one. Dunno shit about what's in DP spec regarding this :)

I kind of expected this response :smile: But my problem with this is, for example, that on a 60 Hz rated monitor you would still be stuck at 60 Hz tops even if the monitor is set to half resolution and half color depth. Hence there is no simple correlation between the max rated panel refresh rate and the speed of the actual frame copy operation. Although in this particular case it probably does not matter for the end result, I'm not convinced that this is the actual case. So it is still an assumption to me.
 
You need to potentially read a frame while writing the next frame
What do you need that for? Look at the pcper test I linked before. The jumps between 30 Hz and 25 Hz basically prove the whole thing has no advantage whatsoever compared to a simple retransmit of the current frame over DP.
I kind of expected this response :smile: But my problem with this is, for example, that on a 60 Hz rated monitor you would still be stuck at 60 Hz tops even if the monitor is set to half resolution and half color depth. Hence there is no simple correlation between the max rated panel refresh rate and the speed of the actual frame copy operation. Although in this particular case it probably does not matter for the end result, I'm not convinced that this is the actual case. So it is still an assumption to me.
So what? It is a very reasonable assumption (Occam's razor and such). If you are not convinced, you could tell us your reasons for doubting it ;). I laid out some reasoning why I think the gsync module is largely redundant and unnecessarily costly.
 
What has been demoed generates the same effects for the end user. Any theorising is just wonderment at why G-Sync goes to the expense of adding this module when the end result can be produced without it (if implemented in the GPU).

Any idea how we can find out if our monitors support variable vblank, Dave?
 
Does variable blanking need any support at the protocol layer at all? As far as I can see there is no real mandate on keeping the blanking interval consistent.
 
A few quotes from the comments section in the Anand article, anyone care to comment on this?

I don't think this is equivalent to GSync. GSync works by making the monitor board hold VBLANK until the GPU sends an image. FreeSync uses a VESA standard to change the VBI speculatively, depending on what the driver thinks the next VBLANK should be. There is software overhead first, and it won't work for the most important frames, when the framerate fluctuates, so you will still see tearing and stutter occasionally. If the app runs at a constant framerate, like AMD's demo is doing, then the driver should be able to speculate properly and get the correct VBLANK configured. With the pendulum demo NVIDIA has, however, since the framerate can fluctuate, FreeSync won't work nearly as well.

VBLANK-twiddling, while possible under the VESA specifications, isn't standardised. When you send weird VBLANKs, monitors may do anything from displaying the frames, to displaying garbage, partial or duplicate frames, or just showing "NO SIGNAL". You can bet the freesync implementation used on that Satellite Click only works for one specific panel.

Because it's NOT a VESA standard. The ability to vary VBLANK IS part of the standard, but there's no guidance on what monitors need to do with it other than 'VBLANK may vary sometimes, deal with it'. "NO SIGNAL" is a perfectly valid response to a 'malformed' VBLANK under the standard.
 
As long as the display doesn't freak out if the graphics card asserts a variable-length vertical blanking interval, you don't need to program it to a specific value beforehand. For what purpose? It's not like the transfer of a new frame would be initiated by the display, anyway. After all, the graphics card sends what it wants to send; the display just needs to accept the signal ;). And DP transmits at a higher speed than necessary for the chosen display mode anyway, and usually intersperses short breaks between the packets to spread the transfer out over the (traditionally fixed) refresh interval (to minimize the necessary buffering, under the assumption that the display can't program the pixel matrix faster than at the chosen refresh rate).
What is needed is that a fast link speed is chosen and these breaks are omitted or at least reduced (to hit the maximum speed the display is capable of accepting), and that the link simply sits idle in its vblank period until the rendering of the next frame is finished. If the display communicates its minimum/maximum vblank interval capabilities to the graphics card, it should be okay.
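As an illustration of "DP transmits at a higher speed than necessary" (using nominal DP 1.2 link rates and ignoring packet overhead beyond the 8b/10b line coding): a four-lane HBR2 link pushes a 1080p frame in a small fraction of a 60 Hz refresh period, so the rest of the interval could simply be spent idle in vblank.

```python
# Rough time to push one 24-bit 1080p frame over a 4-lane HBR2 DisplayPort link.
# HBR2 is 5.4 Gbit/s per lane raw; 8b/10b coding leaves ~4.32 Gbit/s effective per
# lane. Protocol overhead beyond the line coding is ignored here.
lanes, effective_gbit_per_lane = 4, 4.32
link_bytes_per_s = lanes * effective_gbit_per_lane * 1e9 / 8   # ~2.16 GB/s
frame_bytes = 1920 * 1080 * 3                                  # 24-bit 1080p frame
transfer_ms = frame_bytes / link_bytes_per_s * 1000
print(f"~{transfer_ms:.1f} ms to transfer, vs 16.7 ms per refresh at 60 Hz")
```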
 
That only makes sense if the GPU doesn't support a variably timed output directly. That is basically what the AMD guy at CES suspected. In any case, it apparently results in quite a large waste of resources (~160mm² FPGA + 768MB RAM, >$100 in cost [FPGAs are relatively expensive; the $100 already assumes a huge rebate]).

I don't recall which review noted this, but the FPGA part is supposedly due to be replaced by an ASIC.
Given the early state of the tech and uncertain adoption, the volumes in question and the risk of a design change would make an FPGA's higher per-unit cost more palatable than designing one or more ASICs and mask sets, then servicing a fraction of a small subset of the display market.

edit: Time to market should be better as well.
 
I don't recall which review noted this, but the FPGA part is supposedly due to be replaced by an ASIC.
Given the early state of the tech and uncertain adoption, the volumes in question and the risk of a design change would make an FPGA's higher per-unit cost more palatable than designing one or more ASICs and mask sets, then servicing a fraction of a small subset of the display market.

edit: Time to market should be better as well.

I was thinking this has to do with R&D and implementation on the monitor manufacturer's side. They don't need to modify their own hardware, and that way they can implement G-Sync much more easily, as it just completely replaces the part that would need to be modified (just look at what it replaces in the Asus panel). In addition, it allows modding existing monitors (so there is no need to wait for manufacturers to do a new lineup of monitors; they can reduce cost by using the current ones, as was the case with Asus, who used the same 27" 144Hz monitor and just installed the board in it).

I'm not saying it adds nothing more; since the card is there, they may have configured the software differently. And I would not be surprised if the add-in board checks that the monitor is connected to an Nvidia GPU.
 
A few quotes from the comments section in the Anand article, anyone care to comment on this?

So that guy posted the same thing, creating a new account, across 4 different forums.
I called him a shill and he took offence, but now I am back to thinking shill.

He also stated he doesn't know or hasn't read anything about VESA standards, eDP and DDM.
 