Yes, Ubisoft didn't announce any GPU PhysX game now or in the future.

So you're saying that no future one will, since that is what my question was relating to?
Unless framerate is constant, you have stutter. It looks smooth but will have varying input lag, so won't feel smooth.
Did anyone actually play a game on this?
Would you agree this thing, at constant 50fps, will give a way better experience than before?
This thing is such a no-brainer... Define noise = expected frame time (say 30ms/frame) - actual frame time. Without G-Sync, the noise sources are input lag and the mismatch between GPU and display. With G-Sync, the only noise source is input lag. You are reducing the total power in the noise signal; ergo, the noise amplitude will go down.
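A rough way to see that noise argument in action (just a toy Python sketch I threw together; the 60 Hz panel, the ~30 ms frame target and the uniform render-time jitter are made-up numbers, not measurements from any real game): with a fixed refresh, the moment a frame becomes visible gets snapped up to the next scanout, which adds a quantization term on top of the render jitter, while a variable-refresh display shows the frame as soon as it's finished.

```python
import math
import random

REFRESH = 1 / 60      # s, assumed fixed 60 Hz panel
TARGET  = 0.030       # s, the "expected frame time" from the post (~30 ms/frame)

def present_times(render_times, variable_refresh):
    """Wall-clock time at which each frame becomes visible."""
    t, shown = 0.0, []
    for rt in render_times:
        t += rt                                             # frame finishes rendering here
        if variable_refresh:
            shown.append(t)                                 # G-Sync style: scan out immediately
        else:
            shown.append(math.ceil(t / REFRESH) * REFRESH)  # wait for the next fixed refresh
    return shown

def rms_noise(shown):
    """noise = expected frame time - actual frame-to-frame display interval."""
    deltas = [b - a for a, b in zip(shown, shown[1:])]
    return (sum((TARGET - d) ** 2 for d in deltas) / len(deltas)) ** 0.5

random.seed(1)
renders = [random.uniform(0.025, 0.035) for _ in range(2000)]   # jittery ~30 ms frames
print("RMS noise, fixed 60 Hz      :", rms_noise(present_times(renders, False)))
print("RMS noise, variable refresh :", rms_noise(present_times(renders, True)))
```

The fixed-refresh number should come out noticeably larger, purely because of the snap-to-refresh term; that's the extra "power in the noise signal" being reduced.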
Yes, but does G-Sync work over D-Sub?
Because nowadays the traditional triple buffering appears to be not always (maybe even rarely) used. Instead of having two backbuffers and the engine free-running and alternating between them, while at each vsync the latest completed backbuffer is flipped to front, one has a queue of 3 buffers, increasing the lag. I don't get how someone could come up with that.

Wrt triple buffering, the way I understand it:
In double buffering with vsync, if you can keep up with 60fps, the game will be rendered at 60fps and the game engine's internal simulation timer will be locked to it -> smoothness, but a guaranteed 16ms lag: even if your GPU is done in 1ms, you'll need to wait 15ms for the next refresh to start.
With triple buffering, the lag will be less, but the simulation time stamps can be spread all over the 16ms of the frame -> lower lag, but less smooth, since the delay between an internal time stamp and the pixel visible on screen is now variable instead of fixed.
At least in theory: according to many reports on the web, triple buffering led to increased lag in many cases, which is something I don't understand.
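Here's a toy Python model of that difference, for what it's worth (my own sketch, assuming an idealized 60 Hz display and a GPU that takes a flat 10 ms per frame; it's the classic flip-the-latest-backbuffer scheme, not any particular driver's implementation). It measures the delay between each frame's simulation timestamp and the vsync on which that frame actually appears.

```python
# Toy model of the post above (my numbers, not measurements): 60 Hz display,
# GPU takes a flat 10 ms per frame, "lag" = time from a frame's simulation
# timestamp (when the engine sampled the world) to the vsync that shows it.
REFRESH = 1000 / 60                      # ms
RENDER  = 10.0                           # ms, GPU outruns the display
VSYNCS  = [REFRESH * k for k in range(1, 601)]

# Double buffering + vsync: the engine only starts a frame right after a flip,
# so timestamps sit on vsync boundaries and the lag is always one full refresh.
double_lags = [REFRESH for _ in VSYNCS]

# Classic triple buffering: the engine free-runs at 10 ms/frame and at each
# vsync the most recently *completed* frame is flipped to the front buffer.
triple_lags = []
for v in VSYNCS:
    newest_done = RENDER * int(v // RENDER)   # finish time of the newest completed frame
    timestamp   = newest_done - RENDER        # it was started (and sampled) 10 ms earlier
    triple_lags.append(v - timestamp)

for name, lags in (("double + vsync", double_lags), ("classic triple", triple_lags)):
    print(f"{name}: mean {sum(lags)/len(lags):5.2f} ms, "
          f"min {min(lags):5.2f} ms, max {max(lags):5.2f} ms")
```

With these made-up numbers the double-buffered case sits at a fixed ~16.7 ms, while the classic triple-buffered case averages lower but bounces between ~10 and ~16.7 ms from frame to frame: lower lag, but a variable timestamp-to-pixel delay, exactly as described above.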
Actually, it should not. Especially if the framerate just drops below the refresh rate, triple buffering should in fact lower the average frame latency compared to vsynced double buffering. If that's not happening, it's not real triple buffering.

In my experience it does produce severe lag in many games; the cases where it doesn't produce lag are the exceptions (like Max Payne 2).
I think any form of frame queuing is bound to cause lag, because immediate player interaction with the frame is delayed, and triple buffering is a form of frame queuing.
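To put a number on the queuing point (again just my sketch, with the same made-up 60 Hz / 10 ms-per-frame assumptions, and modelling a strict FIFO present queue that never drops a frame rather than any real swapchain): once the GPU outruns the display, every extra queued buffer parks a finished frame for roughly one more refresh before it reaches the screen.

```python
import math

REFRESH = 1000 / 60        # ms, 60 Hz display
RENDER  = 10.0             # ms per frame; the GPU outruns the display

def fifo_lags(depth, n_frames=600):
    """Lag from simulation timestamp (render start) to display, for a strict
    FIFO present queue with `depth` backbuffers that never drops a frame."""
    start, finish, display = [], [], []
    for j in range(n_frames):
        prev_finish = finish[j - 1] if j else 0.0
        slot_free   = display[j - depth] if j >= depth else 0.0
        s = max(prev_finish, slot_free)        # needs a free GPU *and* a free buffer
        f = s + RENDER
        d = math.ceil(f / REFRESH) * REFRESH   # earliest vsync after the frame is done
        if j and d <= display[j - 1]:          # FIFO scans out one frame per vsync
            d = display[j - 1] + REFRESH
        start.append(s); finish.append(f); display.append(d)
    return [d - s for s, d in zip(start, display)][100:]   # skip warm-up frames

for depth in (1, 2, 3):
    lags = fifo_lags(depth)
    print(f"{depth} queued buffer(s): steady-state lag ~{sum(lags)/len(lags):.1f} ms")
```

Under this model the steady-state lag is roughly one refresh period per queued buffer (~17/33/50 ms for 1, 2, 3 buffers), versus the ~13 ms average of the frame-dropping scheme sketched earlier, which would explain why a queued "triple buffering" feels so much laggier than the textbook version.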
That's indeed my understanding. Lower average lag, but more erratic visual motion.

Actually, it should not. Especially if the framerate just drops below the refresh rate, triple buffering should in fact lower the average frame latency compared to vsynced double buffering. If that's not happening, it's not real triple buffering.
So developers mistake queuing for Triple Buffering?

If that's not happening, it's not real triple buffering.
Adaptive Vsync enables Vsync above the threshold of 60Hz, but disables it below. So below 60Hz you do get tearing. But it has the advantage over pure Vsync that the lag doesn't go sky-high when you're below 60fps.

Anyhow, I have a hard time distinguishing the advantage of G-Sync over the traditional Adaptive V-Sync that NVIDIA itself introduced! Both remove visual tearing but input lag still remains... so what gives?
'will do just fine' != perfect. With 100Hz refresh and vsync on, you immediately fall back to 50Hz when you don't quite make it. And with vsync off, you still get tearing. It's going to be less noticeable. Good enough probably for you, me, and many others. But it's fundamentally still there. When you spend $700+ on a GPU, it's not unreasonable to expect perfection.

And Jawed is right, a monitor with 100+ Hz will do just fine! Without the hassle of a crappy v.sync or otherwise.
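To make the "fall back to 50Hz" point concrete, a quick toy calculation (Python, with invented frame times; it's just the rounding argument, not a model of any driver): with plain vsync the effective frame interval snaps up to a whole number of refresh periods, while adaptive vsync, as described a couple of posts up, only keeps vsync while the GPU makes the refresh rate and otherwise runs unsynced, trading the rate drop for possible tearing.

```python
import math

def vsync_interval_ms(frame_ms, refresh_hz):
    """With vsync on, a finished frame waits for the next refresh, so the
    effective frame interval rounds *up* to a whole number of periods."""
    period = 1000 / refresh_hz
    return math.ceil(frame_ms / period) * period

def adaptive_vsync_interval_ms(frame_ms, refresh_hz):
    """Adaptive vsync: keep vsync only while the GPU makes the refresh rate;
    below that, run unsynced (tearing is possible, but no snap to half rate)."""
    period = 1000 / refresh_hz
    return period if frame_ms <= period else frame_ms

for frame_ms in (9.5, 10.1, 15.0):        # invented GPU frame times around the 100 Hz example
    v = vsync_interval_ms(frame_ms, 100)
    a = adaptive_vsync_interval_ms(frame_ms, 100)
    print(f"{frame_ms:4.1f} ms/frame on a 100 Hz panel -> "
          f"vsync: {1000 / v:5.1f} fps, adaptive vsync: {1000 / a:5.1f} fps")
```

So missing a 10 ms budget by 0.1 ms halves the delivered rate under plain vsync, whereas adaptive vsync stays near 99 fps but can tear.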
Damn it, forgot about that! I guess my head is spinning with all the v.sync, g.sync, triple buffering, queuing, tearing above and below refresh rate... etc. It's a bloody mess!

Adaptive Vsync enables Vsync above the threshold of 60Hz, but disables it below. So below 60Hz you do get tearing.
I guess the deciding factor here is that no modern game would push even close to 90fps at 1080p with all the eye-candy settings, no matter what single GPU is used... even multi-GPU setups don't usually achieve that. Still, these games are bound to get old and render like crazy on future GPUs, so I guess the value of G-Sync in this case is, ironically, future-proofing against old games! :smile:

It's going to be less noticeable. Good enough probably for you, me, and many others. But it's fundamentally still there. When you spend $700+ on a GPU, it's not unreasonable to expect perfection.
I would say in principle yes. But I guess one can give a large part of the credit to MS' swapchain design in DirectX, which is designed as a queue.

So developers mistake queuing for Triple Buffering?
I think NVIDIA could have made more money by implementing PhysX in OpenCL or Compute Shaders. A lot more games would have used it, and since NVIDIA would have been in control and able to finely tune it for their own architecture, it would have favored them in benchmarks, hence the competitive advantage.
nVidia bought AGEIA in February 2008. OpenCL 1.0 was officially released 10 months later.

OpenCL and Compute Shaders didn't yet exist when GPU PhysX was born. OpenCL even today still isn't ready for prime time.
USB, SSE, x64 and so on became successful and universal because they're NOT proprietary. The same thing goes for the entirety of the PC (except that Intel's been killing off all of its other competitors one by one over the years, but that's a different discussion). Proprietary = dead, or at best, languishing. Free, and at least decently useful at its designed task = ubiquitous and popular and successful and... not dead.
- John Carmack is obviously someone who deserves credit for lots of stuff that happens in the games industry. He's probably a god of programming, maths, physics, optimization, etc.
That said, he's also extremely nVidia-biased, so I don't think we can count on objective opinions from him. What's more, I can't really tell whether he's more pro-nVidia or just anti-AMD.
I believe he is actually enthusiastic about G-Sync. Mostly because he has always admitted that, as a gamer, he prefers fast-paced dumb shooters to anything scenario/story-driven. Plus, he has always championed 60+ FPS above all, so one can see how fixated he is on that.
So to me, it was predictable that he would dismiss Mantle as something unnecessary because he thinks that nVidia+OpenGL does the same and that he would prefer G-Sync to 4K.
This guy is going to have a bad time with all the AMD/ATI involvement coming his way during this next console generation.
- Tim Sweeney is less of a fanboy than Carmack... but he seems to be one nonetheless. Several times he states how they "mostly" use NV hardware for development. (What's the point in that?!? Whatever...)
His involvement is what makes me think that G-Sync isn't a worthless gimmick, and it might be interesting. I'm still not going to buy it, because I don't want to buy a monitor tied to a single graphics vendor. That would be really stupid of me, since I tend to keep my monitors for almost 10 generations of graphics cards.