AMD demonstrates Freesync, G-sync equivalent?

I'll say it differently: I never read the comments on Anandtech's site articles... they're often full of jokes (hopefully the forum is a bit different).
 
And they are possibly wrong with the part about the driver needing to set a speculative vblank interval beforehand. It should also work otherwise (just delaying the transfer of the next frame and staying in the vblank state up to the maximally allowed time, at which point a retransmit of the frame needs to occur). G-Sync is probably doing nothing else (as AnandTech claims and PCPer's test implies). Whether AMD can do the same completely without any hardware changes to their GPUs really depends only on how flexible their display engines are (how can a scanout be triggered? Only by a timer [the traditional fixed-framerate approach], or also by the buffer flip itself?).
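To make that concrete, here is a minimal sketch of such a flip-triggered scanout with a maximum vblank timeout. It is not any vendor's actual driver code; the interfaces and timing values are made up for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical interfaces: stand-ins for whatever a display engine actually
 * exposes. None of these names come from a real driver. */
extern bool     flip_pending(void);               /* has a new frame been flipped?   */
extern void     latch_pending_flip(void);         /* make it the new front buffer    */
extern void     start_scanout_of_front_buffer(void);
extern uint64_t time_us(void);

/* Bounds derived from what the display says it tolerates,
 * e.g. 144 Hz maximum and 30 Hz minimum refresh. */
#define MIN_FRAME_INTERVAL_US  6944u   /* 1/144 s: don't send frames faster than this */
#define MAX_FRAME_INTERVAL_US 33333u   /* 1/30 s: the panel needs a refresh by now    */

void scanout_loop(void)
{
    uint64_t last_scanout = time_us();

    for (;;) {
        uint64_t elapsed = time_us() - last_scanout;

        if (flip_pending() && elapsed >= MIN_FRAME_INTERVAL_US) {
            /* New frame ready and the panel can accept it: end the
             * (stretched) vblank and scan it out immediately. */
            latch_pending_flip();
            start_scanout_of_front_buffer();
            last_scanout = time_us();
        } else if (elapsed >= MAX_FRAME_INTERVAL_US) {
            /* No new frame in time: resend the old front buffer so the
             * panel doesn't lose sync (the extra delay PCPer measured
             * when dropping just below 30 Hz). */
            start_scanout_of_front_buffer();
            last_scanout = time_us();
        }
        /* Otherwise: simply stay in the vertical blanking state. */
    }
}
```

Nothing in this loop needs extra memory on the display side; the graphics card simply decides when to end the blanking period, within the limits the panel can tolerate.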
 
And they are possibly wrong with the part about the driver needing to set a speculative vblank interval beforehand. It should also work otherwise (just delaying the transfer of the next frame and staying in the vblank state up to the maximally allowed time, at which point a retransmit of the frame needs to occur). G-Sync is probably doing nothing else (as AnandTech claims and PCPer's test implies). Whether AMD can do the same completely without any hardware changes to their GPUs really depends only on how flexible their display engines are (how can a scanout be triggered? Only by a timer [the traditional fixed-framerate approach], or also by the buffer flip itself?).

With G-Sync the GPU is polling for the vblank state, which takes 1 ms as reported in the AnandTech article (this reduces the framerate by 3-5%, but that's not the question). I don't know if I understand AnandTech's wording correctly; they write that Nvidia wants to eliminate the polling (do they mean reducing the time it takes, or removing it completely)? And if they remove this polling, what would that make possible?
 
With G-Sync the GPU is polling for the vblank state, which takes 1 ms as reported in the AnandTech article (this reduces the framerate by 3-5%, but that's not the question). I don't know if I understand AnandTech's wording correctly; they write that Nvidia wants to eliminate the polling (do they mean reducing the time it takes, or removing it completely)? And if they remove this polling, what would that make possible?
Frankly, this explanation (polling for vblank state, what is that supposed to mean?) doesn't make much sense to me. So who knows what nV meant when they talked to Anand.
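For reference, a rough back-of-envelope on that 1 ms figure, under the (possibly wrong) assumption that the poll is simply added to every frame time:

```c
#include <stdio.h>

/* Assumption (not a confirmed description of G-Sync): the 1 ms poll is paid
 * once per frame, on top of the render time, before the frame can go out. */
int main(void)
{
    const double poll_ms = 1.0;
    const double fps_in[] = { 40.0, 60.0, 120.0 };

    for (int i = 0; i < 3; ++i) {
        double frame_ms = 1000.0 / fps_in[i];
        double fps_out  = 1000.0 / (frame_ms + poll_ms);
        double loss_pct = 100.0 * (1.0 - fps_out / fps_in[i]);
        printf("%6.0f fps -> %6.1f fps (%4.1f%% loss)\n", fps_in[i], fps_out, loss_pct);
    }
    return 0;   /* prints roughly 3.8%, 5.7% and 10.7% for 40, 60 and 120 fps */
}
```

The reported 3-5% only lines up with that at moderate framerates, so whatever the poll actually is, it presumably doesn't add a full millisecond to every frame at high refresh rates.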
 
So what? It is a very reasonable assumption (Occam's razor and such). If you are not convinced, you could tell us why you doubt it ;). I laid out some reasoning for why I think the G-Sync module is largely redundant and unnecessarily costly.

Following Occam's Razor would lead us straight into the hypothesis that the FPGA and 768 MiB DDR3 are necessary, because otherwise they wouldn't be there in the first place. That's the hypothesis with the fewest possible assumptions, Sir. ;)

Now, I am making additional assumptions:
- The three modules are there to (a) increase memory bandwidth, (b) buffer completely different, entire frames* and (c) probably to smooth frames from MGPU-Systems additionally.

*for example one being written by the GPU, one being scanned out to display and one - well maybe for (c).
 
Do you have a solid idea what nV is doing with that expensive FPGA and 768 MB of RAM on the G-Sync module? It shouldn't be needed in my opinion (I stated that already right after the G-Sync presentation).
So the basis of the argument is that all of this is what? a hoax?
What AMD is doing doesn't seem to 1-follow the same technical solutions to the problem, depending merely on V.Sync and software shortcuts. 2-Get the same results as NVIDIA, otherwise they would have released a clearer demo on a better display.
 
Following Occam's Razor would lead us straight into the hypothesis that the FPGA and 768 MiB DDR3 are necessary, because otherwise they wouldn't be there in the first place. That's the hypothesis with the fewest possible assumptions, Sir. ;)
Not if you start from the question of what would be needed in hardware to support it (which I claimed to be very little; I was surprised by the massive amount of hardware nV threw at the problem already at the G-Sync presentation, it never made much sense; it's not a new argument I just came up with, I said so already a few months back).
Edit: I just checked the context of the Occam's razor argument. It was that the display doesn't have to tell the GPU when it has finished its refresh, but that the minimum and maximum allowed timings could be reported by the display (displays do this already; just a few extensions may be helpful so the GPU doesn't have to play it unnecessarily safe).
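As a sketch of what "reported by the display" already amounts to today (the EDID range-limits descriptor carries a minimum and maximum vertical rate; the struct and field names below are invented):

```c
#include <stdint.h>

/* Hypothetical representation of what an EDID "display range limits"
 * descriptor already tells the driver today. The struct and field
 * names are invented for this sketch. */
struct display_limits {
    uint32_t min_vrate_hz;   /* e.g. 30  */
    uint32_t max_vrate_hz;   /* e.g. 144 */
};

struct frame_interval_bounds {
    uint32_t min_us;   /* shortest allowed time between scanouts */
    uint32_t max_us;   /* longest the vblank may be stretched    */
};

/* Convert the reported refresh range into the timing window the display
 * engine has to stay inside when it stretches the vblank interval. */
static struct frame_interval_bounds bounds_from_limits(struct display_limits lim)
{
    struct frame_interval_bounds b;
    b.min_us = 1000000u / lim.max_vrate_hz;   /* 144 Hz -> ~6944 us  */
    b.max_us = 1000000u / lim.min_vrate_hz;   /*  30 Hz -> ~33333 us */
    return b;
}
```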
Now, I am making additional assumptions:
- The three modules are there to (a) increase memory bandwidth,
What would you need more bandwidth for? In PCPer's test it behaves exactly as if the additional frame delay (when dropping just below 30 Hz) were caused by retransferring the old frame over the DP connection. => Not conclusive at all.
(b) buffer completely different, entire frames*
Obviously not done, see pcper's test.
and (c) probably to smooth frames from MGPU-Systems additionally.
What is the advantage over doing it on the host?
It simply doesn't add up.
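A quick back-of-envelope on the "buffer entire frames" hypothesis; the resolution and bit depth are illustrative assumptions, not the specs of any particular G-Sync monitor:

```c
#include <stdio.h>

/* Rough sizing check: how much memory would triple-buffering whole frames
 * on the module actually need? Resolution and bit depth are illustrative
 * assumptions, not the specs of any particular monitor. */
int main(void)
{
    const unsigned width = 2560, height = 1440, bytes_per_pixel = 3;
    const double frame_mib  = (double)width * height * bytes_per_pixel / (1024.0 * 1024.0);
    const double triple_mib = 3.0 * frame_mib;

    printf("one frame   : %5.1f MiB\n", frame_mib);    /* ~10.5 MiB */
    printf("three frames: %5.1f MiB\n", triple_mib);   /* ~31.6 MiB, far below 768 MiB */
    return 0;
}
```

Even three full frames come nowhere near 768 MiB, which is why the amount of memory on the module looks so oversized for plain frame buffering.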

==================

So the basis of the argument is that all of this is what? a hoax?
That can't be the basis. It could be only the conclusion. But I'm not saying this. I'm only saying that it appears to be an incredibly wasteful implementation.
What AMD is doing doesn't seem to
1-follow the same technical solutions to the problem, depending merely on V.Sync and software shortcuts.
How would you know it is a pure software approach? I mentioned the needed flexibility of the display engines. This would make it a hardware approach with much less added hardware (as very little hardware is needed; it's basically a tiny addition to the existing display engines).
2-Get the same results as NVIDIA, otherwise they would have released a clearer demo on a better display.
I guess the purpose of the demo was to show that it could work with available hardware without any modifications. I wouldn't conclude from a suboptimal demo that it isn't capable of more (this simply can't be decided from the demo). AMD isn't famous for their pitch perfect marketing last time I checked.
 
I imagine that AMD's presentation was never meant to be seen by the public. Probably some internal proof of concept that someone from marketing managed to get a whiff of and decided to show off to draw attention away from Nvidia. And if that is what happened, it's worked somewhat. People, as seen in this thread, are even casting some doubt over Nvidia's method.
 
Gipsel said:
As long as the display doesn't freak out if the graphics card asserts a variable-length vertical blanking interval, you don't need to program it to a specific value beforehand. For what purpose? It's not like the transfer of a new frame would be initiated by the display, anyway. After all, the graphics card sends what it wants to send; the display just needs to accept the signal

Technically, the display is indeed the initiator of the frame transfer. The graphics card has no way to tell the display when to start reading the front buffer; all it can do is prepare the data in the front buffer for the display to read. The aforementioned eDP VESA standard does not change anything about that. The graphics card can only tell the display to switch refresh rates, and an eDP-compliant system can do this switching seamlessly (i.e. without the need to reinitialize the display).
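To illustrate that reading, here is a minimal sketch of a driver picking among a panel's supported refresh rates to track the game's frame time; the rate list and function are hypothetical, and this is speculation rather than AMD's documented approach:

```c
#include <stddef.h>

/* Under this reading (the GPU can only ask the panel to switch between its
 * supported refresh rates), a driver would pick whichever rate best matches
 * the recently measured frame time. Purely illustrative; the rate list and
 * function are invented, and this is not how AMD has described FreeSync. */
static const double supported_hz[] = { 30.0, 40.0, 48.0, 50.0, 60.0 };

double pick_refresh_rate(double last_frame_ms)
{
    double target_hz = 1000.0 / last_frame_ms;
    double best      = supported_hz[0];
    double best_err  = 1.0e9;

    for (size_t i = 0; i < sizeof supported_hz / sizeof supported_hz[0]; ++i) {
        double err = target_hz > supported_hz[i] ? target_hz - supported_hz[i]
                                                 : supported_hz[i] - target_hz;
        if (err < best_err) { best_err = err; best = supported_hz[i]; }
    }
    return best;   /* the rate to request via the seamless eDP switch */
}
```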

Gipsel said:
And are possibly wrong with the part about that the driver needs to set a speculative vblank interval beforehand. It should work also otherwise (just delaying the transfer of the next frame and staying in the vblank state up to the maximally allowed time when a retransmit of the frame needs to occur). GSYNC is probably doing nothing else (as anandtech claims and pcper's test implies). If AMD can do the same completely without any hardware changes to their GPU really depends only on how flexible their display engines really are (How can a scanout be triggered? Only by a timer [the traditional fixed framerate approach] or also by the buffer flip itself?).

What you are implying here is that the graphics card has the ability to tell the display when to start reading the front buffer, which is not the case with the eDP standard, as explained above. A buffer flip does not trigger the scanout.

Not if you start from the question of what would be needed in hardware to support it (which I claimed to be very little; I was surprised by the massive amount of hardware nV threw at the problem already at the G-Sync presentation, it never made much sense; it's not a new argument I just came up with, I said so already a few months back).

That question still has to be based on several assumptions. The first is that FreeSync is the same as G-Sync. But from what AMD has demoed, we can only conclude that it produces the same effect if the frame rate is relatively constant. Another assumption is that this eDP feature is applicable to external displays, which I don't think it is, as that VESA standard is specifically designed for embedded displays and thus does not take care of the possibly varying implementations on the external displays' end. And implementing it outside the scope of its specification would no longer make it VESA-standard compliant.
 
Technically, the display is indeed the initiator of the frame transfer. The graphics card has no way to tell the display when to start reading the front buffer; all it can do is prepare the data in the front buffer for the display to read.
Not at all. The display can't read the front buffer on its own, of course. It is all determined by the graphics card (traditionally by some clocks and timers) when to send what data and synchronization signals. The display just locks onto that (within the capabilities it communicates to the graphics card beforehand). It worked like that even in the old analog days.
 
Nvidia responds to AMD's demo:

He first said, of course, that he was excited to see his competitor taking an interest in dynamic refresh rates and thinking that the technology could offer benefits for gamers. In his view, AMD's interest was validation of Nvidia's work in this area.

However, Petersen quickly pointed out an important detail about AMD's "free sync" demo: it was conducted on laptop systems. Laptops, he explained, have a different display architecture than desktops, with a more direct interface between the GPU and the LCD panel, generally based on standards like LVDS or eDP (embedded DisplayPort). Desktop monitors use other interfaces, like HDMI and DisplayPort, and typically have a scaler chip situated in the path between the GPU and the panel. As a result, a feature like variable refresh is nearly impossible to implement on a desktop monitor as things now stand.

That, Petersen explained, is why Nvidia decided to create its G-Sync module, which replaces the scaler ASIC with logic of Nvidia's own creation. To his knowledge, no scaler ASIC with variable refresh capability exists—and if it did, he said, "we would know." Nvidia's intent in building the G-Sync module was to enable this capability and thus to nudge the industry in the right direction.

When asked about a potential VESA standard to enable dynamic refresh rates, Petersen had something very interesting to say: he doesn't think it's necessary, because DisplayPort already supports "everything required" for dynamic refresh rates via the extension of the vblank interval. That's why, he noted, G-Sync works with existing cables without the need for any new standards. Nvidia sees no need and has no plans to approach VESA about a new standard for G-Sync-style functionality—because it already exists.

That said, Nvidia won't enable G-Sync for competing graphics chips because it has invested real time and effort in building a good solution and doesn't intend to "do the work for everyone." If the competition wants to have a similar feature in its products, Petersen said, "They have to do the work. They have to hire the guys to figure it out."

http://techreport.com/news/25878/nvidia-responds-to-amd-free-sync-demo
 
It seems AMD is coming up with new theories as time passes... so now it's all about eDP standards exclusive to laptops and designed for power saving (with questionable quality), and AMD will wait until DP 1.3 becomes a standard and gains manufacturer adoption for its "FreeSync" to be real on desktop displays.

Koduri did admit that NVIDIA deserved credit for seeing this potential use of the variable refresh feature and bringing it to market as quickly as they did. It has raised awareness of the issue and forced AMD and the rest of the display community to take notice. But clearly AMD's goal is to make sure that it remains a proprietary feature for as little time as possible.

As it stands today, the only way to get variable refresh gaming technology on the PC is to use NVIDIA's G-Sync enabled monitors and GeForce graphics cards. It will likely take until the ratification and release of DisplayPort 1.3 monitors before AMD Radeon users will be able to enjoy what I definitely believe is one of the best new technologies for PC gaming in years. AMD is hopeful it will happen in Q3 of 2014 but speed of integration has never been a highlight of the DisplayPort standard. NVIDIA definitely has an availability advantage with G-Sync but the question will be for how many months or quarters it will last.
http://www.pcper.com/reviews/Graphi...h-FreeSync-Could-Be-Alternative-NVIDIA-G-Sync
 
Even if it's currently only possible on notebooks, it's still a good thing. And it's free. Now the question really is: can NV mobile GPUs do the same?
 
So Petersen pretty much confirms that it can be done properly (otherwise he would have denied it) on laptops / eDP - so the "speculative framerate prediction" speculation brought up earlier is wrong.
And the DP protocol should be capable of it too, but the too-intrusive scalers on most displays are the problem. And we should, as suspected, see the G-Sync module as a proof-of-concept replacement scaler/display controller, not as a necessary addition (and very much a prototype in its rough/expensive hardware implementation).

I guess we've already seen most of the G-Sync-enabled monitor models there will be before a proper standard takes over (/ display manufacturers add the necessary support to their own display controllers).
 
It seems AMD is coming up with new theories as time passes... so now it's all about eDP standards exclusive to laptops and designed for power saving (with questionable quality), and AMD will wait until DP 1.3 becomes a standard and gains manufacturer adoption for its "FreeSync" to be real on desktop displays.

http://www.pcper.com/reviews/Graphi...h-FreeSync-Could-Be-Alternative-NVIDIA-G-Sync
Is PCPerspective owned by AMD? There isn't a direct quote in the entire article ...
 
So Petersen pretty much confirms that it can be done properly (otherwise he would have denied it) on laptops / eDP - so the "speculative framerate prediction" speculation brought up earlier is wrong.
And the DP protocol should be capable of it too, but the too-intrusive scalers on most displays are the problem. And we should, as suspected, see the G-Sync module as a proof-of-concept replacement scaler/display controller, not as a necessary addition (and very much a prototype in its rough/expensive hardware implementation).
That sums it up quite nicely in my opinion.

And Petersen also stated pretty clearly that nV's G-Sync is also just fiddling with the vblank interval to achieve its effect, so there's no mystery functionality hidden in that FPGA (probably just used because of the current prototype/proof-of-concept status of the G-Sync board, as you mentioned):
the same techreport article said:
When asked about a potential VESA standard to enable dynamic refresh rates, Petersen had something very interesting to say: he doesn't think it's necessary, because DisplayPort already supports "everything required" for dynamic refresh rates via the extension of the vblank interval. That's why, he noted, G-Sync works with existing cables without the need for any new standards. Nvidia sees no need and has no plans to approach VESA about a new standard for G-Sync-style functionality—because it already exists.
 
That sums it up quite nicely in my opinion.

And Petersen also stated pretty clearly that nV's G-Sync is also just fiddling with the vblank interval to achieve its effect, so there's no mystery functionality hidden in that FPGA (probably just used because of the current prototype/proof-of-concept status of the G-Sync board, as you mentioned):

So basically G-Sync is just a short-term fix that will be dead once monitors start to support variable vblank in their scalers, as they admit that is the only thing preventing it on non-laptop displays.

Regards,
SB
 
Given that G-Sync modules replace the scalers, there is some motivation on the scaler vendors' side.
 