Ok, scrap what I said. Now I think it's actually simpler than that. You can't do that because, in the case of G-Sync, the video card only keeps the back buffer.
So double buffering. Fine, no problem.
And there can't be any VSync-induced delay, because that is one of the main selling points of G-Sync: to eliminate VSync input lag.
Above the max refresh rate of the panel, GSYNC behaves the same as a vsynced display. Put very plainly, GSYNC just reverses the synchronization: the scanout is not synchronized to the fixed refresh of the panel; instead, the refresh of the panel is synchronized to the scanout (within the limits of the panel), which starts only after a frame is completed. But it is still always synchronized.
And when the frame rate is higher than the max panel refresh rate, the latest frame image is stored in the G-Sync memory module. No delay necessary.
It shouldn't matter much where this is stored.
I don't see how this zero tearing and zero VSync delay is simultaneously achievable in any other way.
Very simple: add a variable vblank time (from zero up to a panel-specific maximum, which defines the minimum refresh rate) after the transfer of a frame, and only start transmitting the next frame after the buffer flip (completion of the next rendered frame) on the GPU. If the maximum vblank time expires, transfer the old frame again (and optionally prevent the buffer flip during the transfer to avoid tearing; that adds a minor delay, but only when dropping below 30 or 24Hz [30Hz => 25Hz, 24Hz => 20.5Hz, maybe even less when transferring at the highest supported DP speeds]). Hitting the maximum refresh rate basically enables classic vsync, same as with gsync.
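To make that concrete, here is a minimal C sketch of that loop under the assumptions above. All the hardware hooks (frame_ready, flip_buffers, scan_out_front_buffer and friends) are invented placeholders, not any real driver API, and the timing constants are just example values:

[code]
#include <stdbool.h>
#include <stdint.h>

/* Panel timing limits as communicated to the GPU (example values). */
#define MIN_PERIOD_US  6944u   /* max panel refresh rate, here 144Hz */
#define MAX_PERIOD_US 33333u   /* min panel refresh rate, here 30Hz */

/* Invented placeholder hooks, not a real driver API. */
extern bool     frame_ready(void);           /* next rendered frame complete? */
extern void     flip_buffers(void);          /* completed frame becomes front buffer */
extern void     scan_out_front_buffer(void); /* transfer front buffer to the panel */
extern uint64_t now_us(void);                /* monotonic clock in microseconds */

void adaptive_scanout_loop(void)
{
    for (;;) {
        uint64_t start    = now_us();
        uint64_t earliest = start + MIN_PERIOD_US; /* max refresh rate cap */
        uint64_t latest   = start + MAX_PERIOD_US; /* maximum vblank expiry */

        scan_out_front_buffer();   /* transfer one frame to the panel */

        /* Variable vblank: start the next transfer as soon as a new frame is
         * complete, but never earlier than the panel's max refresh rate
         * allows (the classic-vsync case above max refresh) and never later
         * than the maximum vblank time. */
        while (now_us() < latest) {
            if (frame_ready() && now_us() >= earliest) {
                flip_buffers();
                break;
            }
        }
        /* Falling out via the timeout re-scans the old frame on the next
         * iteration -- the repeat case described above (30Hz => 25Hz etc.),
         * since a flip arriving mid-transfer has to wait for the transfer
         * to finish to avoid tearing. */
    }
}
[/code]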
[edit]
I just noticed that GSYNC suffers from this exact problem, as noted by pcper in their test of the Asus DIY kit (they see the 30Hz => 25Hz jumping).
[/edit]
==========================
Pardon me, but I am not convinced they are similar at all.
So, what's the difference in your opinion?
And that AMD keeps coming up with different theories doesn't paint them in a good light; they don't seem to have a solid grasp on the situation.
Do you have a solid idea of what nV is doing with that expensive FPGA and 768MB of RAM on a GSYNC module? In my opinion it shouldn't be needed (I already said so right after the GSYNC presentation).
Varying the refresh rate at different intervals creates many problems for current LCD implementations: color fidelity, gamma, etc. So it is not as easy as flipping a switch, which is also the reason for G-Sync repeating the previous frame beyond the 33.3ms interval limit.
The panel driver electronics take care of all of this. You know, most panels already support different refresh rates. One "only" needs to add support (if it is not already there, as with more and more panels in mobile devices) for changing the refresh period with each refresh, instead of setting it for longer time spans. Depending on the specific panel, this may be extremely simple (or not) and cheap (as it doesn't require additional chips, let alone FPGAs and large amounts of memory), but in any case it should be the task of the panel manufacturer. They know best what the panel can do, and they have to adapt the driving electronics to the panel anyway.
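As a sketch of what per-refresh support on the panel side could mean (a guess with invented names; the real TCON interface is panel specific):

[code]
#include <stdint.h>

/* Per-panel limits, known to the panel manufacturer. */
typedef struct {
    uint32_t min_period_us;   /* shortest refresh period the panel tolerates */
    uint32_t max_period_us;   /* longest period before image quality degrades */
} panel_limits_t;

extern void program_tcon_period(uint32_t period_us); /* placeholder hook */

/* Accept a new refresh period for *this* refresh, clamped to the panel's
 * capabilities, instead of fixing one period for a longer time span. */
uint32_t apply_refresh_period(const panel_limits_t *lim, uint32_t requested_us)
{
    uint32_t period = requested_us;
    if (period < lim->min_period_us) period = lim->min_period_us;
    if (period > lim->max_period_us) period = lim->max_period_us;
    program_tcon_period(period);
    return period;
}
[/code]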
NVIDIA uses 3 memory modules with a total size of 768MB to maximise memory bandwidth at the display end.
Do you know what the 768MB is needed for in the first place?
========================
This is the source of the difference in our understanding of how G-Sync works. On a fixed refresh rate system, the VBlank signal only happens when the display begins scanning the front buffer. But I am not aware of any way the display can tell the video card that it has finished refreshing. So do you know for a fact that it is part of the DP spec, or do you just need us to make this assumption in order to make your theory acceptable?
The display doesn't need to tell the GPU that it has finished refreshing. It just needs to communicate the minimal and maximal allowed timings. This is basically done already. As long as the GPU adheres to these limits, it should work.
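Expressed as code, the contract is one-way; a minimal sketch with invented field names:

[code]
#include <stdbool.h>
#include <stdint.h>

/* Limits the display advertises once (e.g. in its ID data); no per-refresh
 * back-channel from the display to the GPU is needed. */
typedef struct {
    uint32_t min_refresh_period_us;
    uint32_t max_refresh_period_us;
} display_limits_t;

/* GPU-side check before starting the next scanout: as long as every chosen
 * period stays inside the advertised window, the display never has to
 * report that it finished refreshing. */
static inline bool period_allowed(const display_limits_t *d, uint32_t period_us)
{
    return period_us >= d->min_refresh_period_us &&
           period_us <= d->max_refresh_period_us;
}
[/code]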