A few quotes from the comments section in the Anand article, anyone care to comment on this?
So TPU thinks 'FreeSync' is inferior to GSync.
http://www.techpowerup.com/196557/amd-responds-to-nvidia-g-sync-with-freesync.html
And they are possibly wrong about the part where the driver supposedly needs to set a speculative vblank interval beforehand. It should also work otherwise (just delaying the transfer of the next frame and staying in the vblank state up to the maximally allowed time, at which point a retransmit of the frame needs to occur). G-Sync is probably doing nothing else (as Anandtech claims and pcper's test implies). Whether AMD can do the same completely without any hardware changes to their GPUs really depends only on how flexible their display engines are (how can a scanout be triggered: only by a timer [the traditional fixed-framerate approach], or also by the buffer flip itself?).
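The mechanism described in that quote (scanout triggered by the buffer flip, with a forced retransmit of the old frame once the maximum allowed vblank time runs out) can be sketched as a toy simulation. This is purely an illustration of the hypothesis, not anyone's actual driver or firmware; the function name and timing values are made up:

```python
def scanout_schedule(flip_times, t_scan, t_vblank_max):
    """Toy model of flip-triggered scanout with a bounded vblank.

    flip_times: increasing times at which the GPU finishes (flips) a frame
    t_scan: time needed to transfer one frame over the link
    t_vblank_max: longest allowed wait in vblank before the previous
                  frame must be retransmitted (the panel's refresh floor)
    Returns a list of (time, kind) scanout events, kind "new" or "repeat".
    """
    events = []
    t = 0.0  # end of the previous scanout
    flips = iter(flip_times)
    pending = next(flips, None)
    while pending is not None:
        deadline = t + t_vblank_max
        if pending <= deadline:
            # The next frame arrived in time: scan it out at the flip.
            start = max(pending, t)
            events.append((start, "new"))
            t = start + t_scan
            pending = next(flips, None)
        else:
            # Timeout: retransmit the old frame to keep the panel refreshed.
            # A new frame flipped during that retransmit gets delayed, which
            # matches what pcper observed just below the panel's minimum rate.
            events.append((deadline, "repeat"))
            t = deadline + t_scan
    return events
```

For example, with flips at t = 0, 10 and 37 ms, a 5 ms scanout and a 20 ms vblank cap, the third frame just misses the t = 35 deadline, so the old frame is repeated at t = 35 and the new one is delayed to t = 40 while the retransmit finishes.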
With G-Sync the GPU is polling for the vblank state, which takes 1 ms as reported in the Anandtech article (this reduces the framerate by 3-5%, but that's not the question). I don't know if I understood the Anandtech phrasing correctly, which says that Nvidia wants to eliminate the polling (does it mean reducing the time it takes, or removing it completely?). And if they remove this polling, what would that make possible?

Frankly, this explanation (polling for vblank state, what is that supposed to mean?) doesn't make much sense to me. So who knows what nV meant when they talked to Anand.
So what? It is a very reasonable assumption (Occam's razor and such). If you are not convinced, you could tell us the reasons why you are doubting it. I laid out some reasoning for why I think the G-Sync module is largely redundant and unnecessarily costly.
So the basis of the argument is that all of this is what? A hoax?

Have you a solid idea of what nV is doing with that expensive FPGA and 768 MB of RAM on a G-Sync module? It shouldn't be needed, in my opinion (I stated that already right after the G-Sync presentation).
Following Occam's Razor would lead us straight to the hypothesis that the FPGA and 768 MiB of DDR3 are necessary, because otherwise they wouldn't be there in the first place. That's the hypothesis with the fewest possible assumptions, Sir.

Not if you start from the question of what would be needed in hardware to support it (which I claimed to be very little; I was surprised by the massive amount of hardware nV threw at the problem already at the G-Sync presentation, it never made much sense; it's not a new argument I just came up with, I said so already a few months back).
For what do you need more? In pcper's test it behaves exactly as if the additional frame delay (when dropping just below 30 Hz) were caused by retransferring the old frame over the DP connection. => Not conclusive at all.

Now, I am making additional assumptions: the three modules are there to (a) increase memory bandwidth,

Obviously not done, see pcper's test.

(b) buffer completely different, entire frames*

What is the advantage over doing it on the host?

and (c) probably to additionally smooth frames from MGPU systems.

That can't be the basis. It could only be the conclusion. But I'm not saying this. I'm only saying that it appears to be an incredibly wasteful implementation.
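For scale, the 768 MB of RAM discussed above is vastly more than a single frame needs, which is the core of the "wasteful" point. A back-of-the-envelope check (the 2560x1440 / 24 bpp figures are illustrative assumptions, not anything NVIDIA has stated):

```python
# One uncompressed frame at an assumed 2560x1440, 24 bits per pixel.
width, height, bytes_per_pixel = 2560, 1440, 3
frame_bytes = width * height * bytes_per_pixel   # 11,059,200 bytes (~10.5 MiB)

# The G-Sync module's 3 x 256 MB of DDR3.
module_bytes = 768 * 1024 * 1024

# How many whole frames would fit in that memory.
frames_that_fit = module_bytes // frame_bytes    # 72
```

Even at 4K (3840x2160) one frame is only about 24 MiB, so a frame or two of buffering cannot by itself explain the memory size; bandwidth (point (a)) is the more plausible reason for spreading it over three chips.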
How do you want to know it is a pure software approach? I mentioned the needed flexibility of the display engines. That would make it a hardware approach with much less added hardware (as very little hardware is needed, it's basically a tiny addition to the existing display engines).

What AMD is doing doesn't seem to
1 - follow the same technical solutions to the problem, depending merely on V-Sync and software shortcuts.
2 - get the same results as NVIDIA, otherwise they would have released a clearer demo on a better display.

I guess the purpose of the demo was to show that it could work with available hardware without any modifications. I wouldn't conclude from a suboptimal demo that it isn't capable of more (this simply can't be decided from the demo). AMD isn't famous for pitch-perfect marketing, last time I checked.
Gipsel said:
As long as the display doesn't freak out if the graphics card asserts a variable-length vertical blanking interval, you don't need to program it to a specific value beforehand. For what purpose? It's not as if the transfer of a new frame would be initiated by the display, anyway. After all, the graphics card sends what it wants to send; the display just needs to accept the signal.
Technically the display is indeed the initiator of the frame transfer. The graphics card has no way to tell the display when to start reading the front buffer; all it can do is prepare the data in the front buffer for reading by the display.

Not at all. The display can't read the frontbuffer on its own, of course. It is all determined by the graphics card (traditionally by some clocks and timers) when to send what data and synchronization signals. The display just locks to that (within the capabilities it communicates to the graphics card beforehand). It worked like that even in the old analog days.
He first said, of course, that he was excited to see his competitor taking an interest in dynamic refresh rates and thinking that the technology could offer benefits for gamers. In his view, AMD's interest was validation of Nvidia's work in this area.
However, Petersen quickly pointed out an important detail about AMD's "free sync" demo: it was conducted on laptop systems. Laptops, he explained, have a different display architecture than desktops, with a more direct interface between the GPU and the LCD panel, generally based on standards like LVDS or eDP (embedded DisplayPort). Desktop monitors use other interfaces, like HDMI and DisplayPort, and typically have a scaler chip situated in the path between the GPU and the panel. As a result, a feature like variable refresh is nearly impossible to implement on a desktop monitor as things now stand.
That, Petersen explained, is why Nvidia decided to create its G-Sync module, which replaces the scaler ASIC with logic of Nvidia's own creation. To his knowledge, no scaler ASIC with variable refresh capability exists—and if it did, he said, "we would know." Nvidia's intent in building the G-Sync module was to enable this capability and thus to nudge the industry in the right direction.
When asked about a potential VESA standard to enable dynamic refresh rates, Petersen had something very interesting to say: he doesn't think it's necessary, because DisplayPort already supports "everything required" for dynamic refresh rates via the extension of the vblank interval. That's why, he noted, G-Sync works with existing cables without the need for any new standards. Nvidia sees no need and has no plans to approach VESA about a new standard for G-Sync-style functionality—because it already exists.
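The vblank-extension arithmetic behind Petersen's statement is simple: the panel's line rate stays fixed, and the effective refresh rate is the line rate divided by the total number of lines (active plus blanking) per frame, so stretching the blanking interval lowers the rate smoothly. A toy calculation with illustrative 1080p-style timings (not taken from any actual EDID):

```python
def effective_refresh(vactive, vblank, line_rate_hz):
    """Refresh rate when each frame is vactive + vblank lines long."""
    return line_rate_hz / (vactive + vblank)

line_rate = 67_500  # lines per second: 1125 total lines x 60 Hz nominal

base = effective_refresh(1080, 45, line_rate)    # standard blanking -> 60.0 Hz
held = effective_refresh(1080, 1170, line_rate)  # stretched vblank  -> 30.0 Hz
```

Any rate in between falls out of choosing when to end the blanking period, which is why no new cable or link-level feature would be needed.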
That said, Nvidia won't enable G-Sync for competing graphics chips because it has invested real time and effort in building a good solution and doesn't intend to "do the work for everyone." If the competition wants to have a similar feature in its products, Petersen said, "They have to do the work. They have to hire the guys to figure it out."
http://www.pcper.com/reviews/Graphi...h-FreeSync-Could-Be-Alternative-NVIDIA-G-Sync

Koduri did admit that NVIDIA deserved credit for seeing this potential use of the variable refresh feature and bringing it to market as quickly as they did. It has raised awareness of the issue and forced AMD and the rest of the display community to take notice. But clearly AMD's goal is to make sure that it remains a proprietary feature for as little time as possible.
As it stands today, the only way to get variable refresh gaming technology on the PC is to use NVIDIA's G-Sync enabled monitors and GeForce graphics cards. It will likely take until the ratification and release of DisplayPort 1.3 monitors before AMD Radeon users will be able to enjoy what I definitely believe is one of the best new technologies for PC gaming in years. AMD is hopeful it will happen in Q3 of 2014 but speed of integration has never been a highlight of the DisplayPort standard. NVIDIA definitely has an availability advantage with G-Sync but the question will be for how many months or quarters it will last.
Is PCPerspective owned by AMD? There isn't a direct quote in the entire article... It seems AMD is coming up with new theories as time passes... so now it's all eDP standards exclusive to laptops and designed for power saving (with questionable quality), and AMD will wait until DP 1.3 becomes a standard and gains manufacturer adoption for its "FreeSync" to be real on desktop displays.
http://www.pcper.com/reviews/Graphi...h-FreeSync-Could-Be-Alternative-NVIDIA-G-Sync
So Petersen pretty much confirms that it can be done properly (otherwise he would have denied it) on laptops/eDP - so the "speculative framerate prediction" speculation brought up earlier is wrong.
And the DP protocol should be capable of it too, but the too-intrusive scalers in most displays are the problem. And we should, as suspected, see the G-Sync module as a proof-of-concept replacement scaler/display controller, not as a necessary addition (and very much a prototype in its rough/expensive hardware implementation).
The same techreport article said:
When asked about a potential VESA standard to enable dynamic refresh rates, Petersen had something very interesting to say: he doesn't think it's necessary, because DisplayPort already supports "everything required" for dynamic refresh rates via the extension of the vblank interval. That's why, he noted, G-Sync works with existing cables without the need for any new standards. Nvidia sees no need and has no plans to approach VESA about a new standard for G-Sync-style functionality—because it already exists.
That sums it up quite nicely in my opinion.
And Petersen also stated pretty clearly that nV's G-Sync is also just fiddling with the vblank interval to achieve its effect, so there is no mystery functionality hidden in that FPGA (it's probably just used because of the current prototype/proof-of-concept status of the G-Sync board, as you mentioned):