Nvidia G-SYNC

Unless framerate is constant, you have stutter. It looks smooth but will have varying input lag, so won't feel smooth.

Did anyone actually play a game on this?
 
So you're saying that no future one will, since that is what my question was relating to?
Yes, Ubisoft hasn't announced any GPU PhysX game, now or for the future.

List of PhysX games this year:
PlanetSide 2, Hawken, Warframe, The Bureau: XCOM Declassified, Metro: Last Light, Batman: Arkham Origins, Call of Duty: Ghosts, Rise of the Triad

Next year:
Project CARS, Star Citizen, The Witcher 3 and EverQuest Next

With triple buffering: the lag will be less, but the simulation time stamps can be spread all over the 16ms of the frame -> lower lag, but less smooth, since the delay between internal time stamp and pixel visible on screen is now variable instead of fixed.

At least in theory: according to many reports on the web, triple buffering led to increased lag in many cases, which is something I don't understand.

In my experience it does produce severe lag in many games; the cases where it doesn't produce lag are the exceptions (like Max Payne 2).

I think any form of frame queuing is bound to cause lag, because immediate player interaction with the frame is delayed, and triple buffering is a form of frame queuing. You can even achieve something close to quadruple buffering by increasing the number of frames the driver is allowed to render ahead in the control panel; NVIDIA's term for this is Max Pre-Rendered Frames.
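A rough back-of-the-envelope sketch of that scaling, with hypothetical numbers (a steady 60 fps, and the simplifying assumption that each queued frame adds about one frame time of delay):

```python
# Illustration only: input-to-screen delay grows with the render queue depth.
# Assumes a steady 60 fps (about 16.7 ms per frame) and that every frame sitting
# in the queue adds roughly one frame time of delay; real pipelines are messier.

frame_time_ms = 1000.0 / 60.0

for queued_frames in (1, 2, 3):  # 1 ~ vsynced double buffering, 3 ~ a deep pre-render queue
    latency_ms = (queued_frames + 1) * frame_time_ms  # +1 for the frame being scanned out
    print(f"{queued_frames} queued frame(s): ~{latency_ms:.1f} ms input-to-screen delay")
```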

Unless framerate is constant, you have stutter. It looks smooth but will have varying input lag, so won't feel smooth.

Did anyone actually play a game on this?

But there will be less stutter nonetheless, right? Instead of both a visual and an input issue, you now only have an input issue.
 
Unless framerate is constant, you have stutter. It looks smooth but will have varying input lag, so won't feel smooth. Did anyone actually play a game on this?
Would you agree this thing, at constant 50fps, will give a way better experience than before?

If you look at the frame time graphs of TechReport, you'd see that they have pretty good short term correlation, even if they don't in the long term.

And how in the world does variable lag destroy visual smoothness?

I have yet to hear the first reasonable argument from you about what is worse about this than what currently exists, from a technological/experience point of view.
 
Unless framerate is constant, you have stutter. It looks smooth but will have varying input lag, so won't feel smooth.

Did anyone actually play a game on this?

define noise = expected frame time (say 30ms/frame) - actual frame time.

Without gsync, noise sources are input lag and mismatch between GPU and display.

With gsync, noise source is input lag.

You are reducing total power in noise signal, ergo, noise amplitude will go down.
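A toy simulation of that claim, with made-up numbers (a 30 ms target frame time, a couple of milliseconds of GPU jitter, and a 60 Hz panel for the fixed-refresh case):

```python
import math
import random
import statistics

random.seed(0)
TARGET_MS = 30.0              # expected frame time from the example above
REFRESH_MS = 1000.0 / 60.0    # fixed 60 Hz panel for the no-G-Sync case

# Hypothetical GPU frame times: the target plus a couple of ms of jitter.
gpu_ms = [TARGET_MS + random.gauss(0.0, 2.0) for _ in range(10000)]

# Without G-Sync a finished frame still waits for the next fixed refresh, so the
# time it actually appears on screen is quantised to multiples of the refresh.
shown_fixed = [math.ceil(t / REFRESH_MS) * REFRESH_MS for t in gpu_ms]
# With G-Sync the panel refreshes the moment the frame is done.
shown_gsync = gpu_ms

noise_fixed = [s - TARGET_MS for s in shown_fixed]
noise_gsync = [s - TARGET_MS for s in shown_gsync]

print("noise std-dev, fixed refresh:", round(statistics.pstdev(noise_fixed), 2), "ms")
print("noise std-dev, G-Sync:      ", round(statistics.pstdev(noise_gsync), 2), "ms")
```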
 
define noise = expected frame time (say 30ms/frame) - actual frame time. Without gsync, noise sources are input lag and mismatch between GPU and display. With gsync, noise source is input lag. You are reducing total power in noise signal, ergo, noise amplitude will go down.
This thing is such a no-brainer...
 
I've checked the prices on G-Sync capable graphics cards. With the Radeon R7/R9 launch they have gotten pretty cheap: I've seen a 650 Ti Boost 2GB at 126 euros :oops: and a GTX 660 at 150 euros.
Incidentally, support for G-Sync coincides with support for 4K 60Hz (I might have typed something redundant here).

But whatever, we need displays. A 1920x1200 120Hz would be sweet, a 1600x900 would be okay if someone made it (at least you can still run a maximized web browser with that).
Hell, why not some creativity on resolution; I think a 2048x1280 monitor at 100Hz would be really fine.
 
Unless framerate is constant, you have stutter. It looks smooth but will have varying input lag, so won't feel smooth.

Did anyone actually play a game on this?

You're describing a different issue from the one G-SYNC is designed to solve. I think most would class it as a considerably less troublesome issue, too.
 
Wrt triple buffering, the way I understand it:

In double buffering with vsync, if you can keep up with 60fps, the game will be rendered at 60fps and the game engine's internal simulation timer will be locked to it -> smoothness, but guaranteed 16ms lag: even if your GPU is done in 1ms, you'll need to wait 15ms for the next refresh to start.

With triple buffering: the lag will be less, but the simulation time stamps can be spread all over the 16ms of the frame -> lower lag, but less smooth, since the delay between internal time stamp and pixel visible on screen is now variable instead of fixed.

At least in theory: according to many reports on the web, triple buffering led to increased lag in many cases, which is something I don't understand.
Because nowadays traditional triple buffering appears to not always (rarely?) be used. Instead of having two back buffers with the engine free-running and alternating between them, while at each vsync the latest completed back buffer is flipped to the front, one has a queue of 3 buffers, which increases the lag. I don't get how someone could come up with that.
In my experience it does produce severe lag in many games; the cases where it doesn't produce lag are the exceptions (like Max Payne 2).

I think any form of frame queuing is bound to cause lag, because immediate player interaction with the frame is delayed, and triple buffering is a form of frame queuing.
Actually, it should not. Especially if the framerate just drops below the refresh rate, triple buffering should in fact lower the average frame latency compared to vsynced double buffering. If that's not happening, it's not real triple buffering.
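A back-of-the-envelope comparison of the two behaviours being contrasted here, with hypothetical numbers (a 60 Hz display and a GPU that finishes a frame every 12 ms, i.e. faster than the display):

```python
# Illustration only; real driver behaviour is more complicated than this.
REFRESH_MS = 1000.0 / 60.0   # ~16.7 ms per refresh
RENDER_MS = 12.0             # hypothetical GPU frame time
QUEUE_DEPTH = 3

# FIFO render-ahead queue: the GPU outruns the display, the queue stays full,
# and every frame waits behind the frames queued before it.
fifo_wait_ms = QUEUE_DEPTH * REFRESH_MS

# Swap-latest ("real") triple buffering: at each vsync the newest completed back
# buffer is flipped to the front, so the frame shown is on average only about
# half a render time old.
swap_latest_wait_ms = RENDER_MS / 2.0

print(f"3-deep FIFO queue: ~{fifo_wait_ms:.0f} ms of extra lag per frame")
print(f"swap-latest      : ~{swap_latest_wait_ms:.0f} ms of extra lag per frame")
```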
 
Actually, it should not. Especially if the framerate just drops below the refresh rate, triple buffering should in fact lower the average frame latency compared to vsynced double buffering. If that's not happening, it's not real triple buffering.
That's indeed my understanding. Lower average lag. But more erratic visual motion.

If the lag is higher, it's because of implementation mistakes (which seem to be very common.)
 
Well in my desperate rummaging to find any reason to use this, I have found something that should be quite compelling: watching video. Since video comes in a stupid variety of framerates (~24, 25, ~30, 48 and multiples thereof) a setup like this "wouldn't care". Though frame rates lower than 30 need to be multiplied first.

A monitor that natively does 100, 120 and 144 Hz would also be fine.
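A quick illustration with made-up numbers: on a fixed 60 Hz panel, 24 fps content has to use 3:2 pulldown, so film frames alternate between being held for 3 and 2 refreshes, while a variable-refresh panel can hold every frame for the same duration.

```python
# Made-up example of how long each film frame stays on screen.
refresh_60_ms = 1000.0 / 60.0

# 24 fps on a fixed 60 Hz display: 3:2 pulldown, frames alternate between
# 3 refreshes and 2 refreshes on screen, which is visible as judder.
pulldown_ms = [3 * refresh_60_ms, 2 * refresh_60_ms] * 3
print("60 Hz + 3:2 pulldown:", [round(t, 1) for t in pulldown_ms], "ms")

# 24 fps with variable refresh: every frame held for exactly 1/24 s.
# (Below the panel's minimum refresh each frame would be scanned out twice,
#  e.g. 24 fps presented as 48 Hz, hence the "multiplied first" caveat above.)
even_ms = [1000.0 / 24.0] * 6
print("variable refresh:    ", [round(t, 1) for t in even_ms], "ms")
```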
 
Anyhow, I have a hard time distinguishing the advantage of G-Sync over the traditional Adaptive VSync that NVIDIA itself introduced! Both remove visual tearing but input lag still remains.. so what gives?

And Jawed is right, a monitor with 100+ Hz will do just fine! Without the hassle of a crappy vsync or otherwise.

If that's not happening, it's not real triple buffering.
So developers mistake queuing for triple buffering?
 
Anyhow, I have a hard time distinguishing the advantage of G-Sync over the traditional Adaptive VSync that NVIDIA itself introduced! Both remove visual tearing but input lag still remains.. so what gives?
Adaptive Vsync enables Vsync above the threshold of 60Hz, but disables it below. So below 60Hz you do get tearing. But it has the advantage over pure Vsync that the lag doesn't go sky high when you're below 60fps.

And Jawed is right, a monitor with 100+ Hz will do just fine! Without the hassle of a crappy vsync or otherwise.
'will do just fine' != perfect. With 100Hz refresh and vsync on, you immediately fall back to 50Hz when you don't quite make it. And with vsync off, you still get tearing. It's going to be less noticeable. Good enough probably for you, me, and many others. But it's fundamentally still there. When you spend $700+ on a GPU, it's not unreasonable to expect perfection.
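A small sketch of that fallback, assuming the simplest double-buffered vsync model where the renderer waits for the flip before starting the next frame (real drivers can be more forgiving, so treat this as the worst case):

```python
import math

def vsynced_fps(refresh_hz, frame_ms):
    """Effective framerate under plain double-buffered vsync: a frame that
    misses a refresh waits for the next one, so rates snap to refresh/1,
    refresh/2, refresh/3, and so on."""
    refresh_ms = 1000.0 / refresh_hz
    return 1000.0 / (math.ceil(frame_ms / refresh_ms) * refresh_ms)

# Just missing 100 fps (10.5 ms per frame instead of 10 ms) halves the rate:
print(round(vsynced_fps(100, 10.0), 1))   # 100.0
print(round(vsynced_fps(100, 10.5), 1))   # 50.0
print(round(vsynced_fps(60, 17.0), 1))    # 30.0 (the familiar 60 -> 30 drop)
```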
 
Adaptive Vsync enables Vsync above the threshold of 60Hz, but disables it below. So below 60Hz you do get tearing.
Damn it, forgot about that! I guess my head is spinning with all the vsync, G-Sync, triple buffering, queuing, tearing above and below the refresh rate.. etc. It's a bloody mess! :mad:

It's going to be less noticeable. Good enough probably for you, me, and many others. But it's fundamentally still there. When you spend $700+ on a GPU, it's not unreasonable to expect perfection.
I guess the deciding factor here is that no modern game would push even close to 90fps at 1080p with all the eye candy settings, no matter what single GPU is used.. even multi-GPU setups don't usually achieve that. Still, these games are bound to get old and render like crazy on future GPUs, so I guess the value of G-Sync in this case is, ironically, future-proofing against old games! :smile:
 
I think NVIDIA could have made more money by implementing PhysX in OpenCL or Compute Shaders. A lot more games would have used it, and since NVIDIA would have been in control and able to finely tune it for their own architecture, it would have favored them in benchmarks, hence the competitive advantage.

Agreed. Just as AMD did with TressFX, AFAIK.

OpenCL and Compute Shaders didn't yet exist when GPU PhysX was born. OpenCL even today still isn't ready for prime time.
nVidia bought AGEIA in February 2008. OpenCL 1.0 was officially released 10 months later.
I find it hard to believe that nVidia bought AGEIA and implemented PhysX in CUDA in less than 10 months.
Nonetheless, we're talking about a really small difference in time between PhysX on GPUs and OpenCL being released.

USB, SSE, x64 and so on became successful and universal because they're NOT proprietary. The same thing goes for the entirety of the PC (except Intel's been killing off all of its other competitors one by one over the years, but that's a different discussion.) Proprietary = dead, or at best, languishing. Free, and at least decently useful at its designed task = ubiquitous and popular and successful and... not dead. :p


I agree with all of that, except I think you mean "licenseable for cheap" instead of proprietary.

I think nVidia got used to trying really hard to cock-block technologies and features in games from other GPU vendors - more than just trying to get better performance and stability - despite what they do to make the industry go forward.
And even though I'm not really fond of Mantle being exclusive to GCN overall, for me it's really nice to see nVidia being served their own poison for once.

Just as a sidenote: using PhysX is a terrible example to use when discussing how nVidia is taking things forward.
I think most people forgot that PhysX was not a nVidia initiative. AGEIA PhysX was a software+hardware implementation with their own PPU cards. Those PPU cards could be used in games on par with either AMD/ATI, nVidia, Intel or SiS or whatever GPUs.
nVidia bought AGEIA so that they could block people with AMD/ATI GPUs from using the high-tier mode of PhysX in games.
This is further proven by the fact that nVidia purposely blocks access to hardware-accelerated PhysX if an AMD GPU is detected in the system. There's actually a function in the driver that sweeps the system for an AMD GPU so that it will disable PhysX and sometimes CUDA.
It's borderline malware-ish.

As for the presentation and talks on G-Sync:

- John Carmack is obviously someone who deserves credit for lots of stuff that happens in the games industry. He's probably a god of programming, maths, physics, optimization, etc.
That said, he's also extremely nVidia-biased so I don't think we can count on objective opinions from him. More: I don't really get if he's more pro-nVidia than anti-AMD.
I believe he is actually enthusiastic about GSync. Mostly because he has always admitted that, as a gamer, he prefers fast-paced dumb shooters to anything scenario/story-driven. Plus, he has always championed 60+ FPS above all, so one can see how fixated he is with that.
So to me, it was predictable that he would dismiss Mantle as something unnecessary because he thinks that nVidia+OpenGL does the same and that he would prefer G-Sync to 4K.
This guy is going to have a bad time with all the AMD/ATI involvement he's going to have during this next console generation.

- Tim Sweeney is less of a fanboy than Carmack.. but he seems to be one nonetheless. Several times he states how they "mostly" use NV hardware for development. (What's the point in that?!? Whatever...) I think his enthusiasm with GSync has more to do with paychecks than the tech itself, but I could be wrong.

- Johan Andersson was for me the guy who seemed to be the most honest, and the one with his feet on the ground. Open for discussion, not willing to bash one competitor just because he's in another competitor's conference, stating several times how he enjoys exploring all options.. Without him, I think the live round-table would have quickly turned into a boring and very long PR announcement.
His involvement is what makes me think that G-Sync isn't a worthless gimmick, and it might be interesting. I'm still not going to buy it because I don't want to buy a monitor tied to a graphics vendor. That would be really stupid of me, since I tend to keep my monitors for almost 10 generations of graphics cards.
 
- John Carmack is obviously someone who deserves credit for lots of stuff that happens in the games industry. He's probably a god of programming, maths, physics, optimization, etc.
That said, he's also extremely nVidia-biased so I don't think we can count on objective opinions from him. More: I don't really get if he's more pro-nVidia than anti-AMD.
I believe he is actually enthusiastic about GSync. Mostly because he has always admitted that, as a gamer, he prefers fast-paced dumb shooters to anything scenario/story-driven. Plus, he has always championed 60+ FPS above all, so one can see how fixated he is with that.
So to me, it was predictable that he would dismiss Mantle as something unnecessary because he thinks that nVidia+OpenGL does the same and that he would prefer G-Sync to 4K.

Actually that is not true at all. G-Sync doesn't require 60+ fps. In fact, one of the key selling points of G-Sync is that gameplay is smoother when framerate drops below 60 fps. G-Sync and 4K are also not mutually exclusive. I'm sure John Carmack is looking forward to both.

This guy is going to have a bad time with all the AMD/ATI involvement he's going to have during this next console generation.

Carmack is focusing on PC, iOS, and Android. He is not focusing on next gen consoles at all at this time.

- Tim Sweeney is less of a fanboy than Carmack.. but he seems to be one nonetheless. Several times he states how they "mostly" use NV hardware for development. (What's the point in that?!? Whatever...)

Tim Sweeney and Epic actually buy the vast majority of their hardware. They use what they prefer, and certainly have every right to do so.

On a side note, you are incredibly condescending towards these industry veterans. G-Sync has been universally praised as a game-changing technology (literally and figuratively) by not just these three individuals but by numerous tech hardware review sites as well.
 
His involvement is what makes me think that G-Sync isn't a worthless gimmick, and it might be interesting. I'm still not going to buy it because I don't want to buy a monitor tied to a graphics vendor. That would be really stupid of me, since I tend to keep my monitors for almost 10 generations of graphics cards.


Hopefully it gets licensed and AMD and Intel could support it with a mere driver update. That's all speculative though, with no timeframe, and I guess the work would be done quietly. NVIDIA would want to keep it as a competitive advantage and AMD wouldn't want to bring attention to it until they have it ready.
You would eventually be able to use your G-Sync display on e.g. a Kaveri if things go well.
 