NVIDIA GameWorks, good or bad?

G-Sync is probably inspired by eDP, which is a VESA standard. Nvidia took that and concocted a proprietary solution. It may make business sense, but it obviously takes away power from VESA. AMD apparently shoehorned a response out of VESA with Adaptive-Sync.
Not just probably:
When asked about a potential VESA standard to enable dynamic refresh rates, Petersen had something very interesting to say: he doesn't think it's necessary, because DisplayPort already supports "everything required" for dynamic refresh rates via the extension of the vblank interval. That's why, he noted, G-Sync works with existing cables without the need for any new standards. Nvidia sees no need and has no plans to approach VESA about a new standard for G-Sync-style functionality—because it already exists.
Except it wasn't part of DP at the time; it was part of eDP first.
http://techreport.com/news/25878/nvidia-responds-to-amd-free-sync-demo
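
For context, here is a minimal sketch in plain C++ (not any vendor's or VESA's API; the timing numbers are hypothetical) of the mechanism being discussed: at a fixed pixel clock, stretching the vertical blanking interval simply delays the next refresh, which is all a G-Sync/eDP-style dynamic refresh scheme needs from the link.

```cpp
// Sketch only: shows the arithmetic behind "extend the vblank interval",
// not any real driver or VESA API. Numbers are illustrative.
#include <cstdio>

struct DisplayTiming {
    double pixel_clock_hz;  // fixed by the link/panel
    int    h_total;         // total pixels per scanline (active + blanking)
    int    v_active;        // visible lines per frame
    int    v_blank;         // blanking lines per frame (the part that gets stretched)
};

// Refresh rate = pixel clock / total pixels per frame.
double refresh_hz(const DisplayTiming& t) {
    return t.pixel_clock_hz /
           (static_cast<double>(t.h_total) * (t.v_active + t.v_blank));
}

int main() {
    // Hypothetical 1080p timing: ~60 Hz at a 148.5 MHz pixel clock.
    DisplayTiming t{148500000.0, 2200, 1080, 45};
    std::printf("nominal refresh: %.2f Hz\n", refresh_hz(t));

    // While waiting for the next rendered frame, keep the panel in vblank
    // longer; the effective refresh rate drops (here to roughly 48 Hz).
    t.v_blank = 45 + 280;
    std::printf("stretched vblank: %.2f Hz\n", refresh_hz(t));
    return 0;
}
```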
 
Yes, but only if the GPU controls all input to the monitor. Usually monitors have multiple inputs and an OSD to control brightness, etc. If you get rid of the scaler, you lose that functionality.

I'm not saying get rid of the scaler, just don't use it (use the scaler on the GPU) when connected via DisplayPort.
Then AMD controls everything needed to enable FreeSync and monitor vendors don't have to do anything.
 
I'm not saying get rid of the scaler, just don't use it (use the scaler on the GPU) when connected via DisplayPort.
Then AMD controls everything needed to enable FreeSync and monitor vendors don't have to do anything.

It'd be interesting if someone actually made a cheap monitor supporting only its native resolution, without any of these controls. It could work if bundled with a specially designed computer. It actually sounds like something Apple might do, but mainly with the intention of reducing cost (which is not something normally associated with Apple :p ).
 
Display scalers are not the problem; stupid display scalers designed to use information they don't need in the first place are the problem.
 
It'd be interesting if someone actually made a cheap monitor supporting only its native resolution, without any of these controls. It could work if bundled with a specially designed computer. It actually sounds like something Apple might do, but mainly with the intention of reducing cost (which is not something normally associated with Apple :p ).
The idea sounds better on paper than it behaves in practice. I have a 2560x1600 display without a scaler; the inability to feed it 1440p or 1080p sources was a massive pain in the neck.
 
The idea sounds better on paper than it behaves in practice. I have a 2560x1600 display without a scaler; the inability to feed it 1440p or 1080p sources was a massive pain in the neck.

The GPU has been able to scale 1080p content and output a 1600p signal for many years now (discounting the fact that NVIDIA used to break this feature every other driver release). And the GPU scaler (at least NVIDIA's) is very good and very fast. Shitty monitor scalers are unnecessary.
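
As an illustration of what GPU-side scaling involves (a sketch, not NVIDIA's or AMD's actual driver code), this is the aspect-preserving fit a GPU scaler applies when sending, say, a 1080p frame to a 2560x1600 panel, with the leftover area becoming black bars:

```cpp
// Sketch: aspect-preserving scaling with letterboxing, as a GPU scaler
// would do before sending the native-resolution signal to the panel.
#include <algorithm>
#include <cstdio>

struct Viewport { int x, y, width, height; };

Viewport fit_preserving_aspect(int src_w, int src_h, int dst_w, int dst_h) {
    // Largest uniform scale that still fits both dimensions.
    double scale = std::min(static_cast<double>(dst_w) / src_w,
                            static_cast<double>(dst_h) / src_h);
    int w = static_cast<int>(src_w * scale);
    int h = static_cast<int>(src_h * scale);
    // Center the scaled image; whatever is left over is the black bars.
    return {(dst_w - w) / 2, (dst_h - h) / 2, w, h};
}

int main() {
    // 1920x1080 source onto a 2560x1600 panel: a 2560x1440 image,
    // with 80-pixel black bars at the top and bottom.
    Viewport vp = fit_preserving_aspect(1920, 1080, 2560, 1600);
    std::printf("image %dx%d at (%d,%d)\n", vp.width, vp.height, vp.x, vp.y);
    return 0;
}
```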
 
It'd be interesting if someone actually made a cheap monitor supporting only its native resolution, without any of these controls. It could work if bundled with a specially designed computer. It actually sounds like something Apple might do, but mainly with the intention of reducing cost (which is not something normally associated with Apple :p ).

My HP 30" has no scalar. It supports 2 resolutions I believe. Everything else operates with black bars surrounding them if the GPU doesn't do the scaling.

Regards,
SB
 
The GPU has been able to scale 1080p content and output a 1600p signal for many years now (discounting the fact that NVIDIA used to break this feature every other driver release). And the GPU scaler (at least NVIDIA's) is very good and very fast. Shitty monitor scalers are unnecessary.
You would be right if all you ever used it for was hooking it up to a PC and using the default resolutions it made available. However, this made it incompatible with FCAT testing, and getting AMD cards to do 2560x1440 with it was always "fun".

Having a scaler, even if you don't use it, makes life much easier.
 
Not to mention that we got an example of this today with BF4 and Thief: do those games run badly on the DX11 path? Has Mantle created a context where Nvidia was not able to optimize their drivers, or to work with DICE developers to optimize the game for their GPUs? And has it prevented them from releasing a "special" DX11 driver that increases DX performance for those titles? No.

Good question. BF4 apparently does run really badly on the DX11 path. More specifically, AMD's DX11 path. AMD's DX11 performance in this Gaming Evolved title is so bad that the gap with NV cards dwarfs the performance difference in any GameWorks title.

[Benchmark chart: bf4mp_1920.png (source: PCLab.pl)]

So what if I play with your question a bit:

Has Mantle created a context where AMD was not able to optimize their DX11 drivers, or to work with DICE developers to optimize the game for their GPUs? And has it prevented them from releasing a "special" DX11 driver that increases DX performance for those titles? No.

Ever consider how ironic/hypocritical it looks for AMD to call NV out while shafting their own users -- the owners of HD 5000, 6000, and 7000 series cards -- in these Mantle titles, whose source code they have no problem accessing?

Actually, maybe it would be more appropriate for AMD to thank NVIDIA for making their top-dog CPUs suck less compared to Intel CPUs in all the DX11 games out there that don't have a Mantle port (like, 99% of them?).
*(I think this is also the point Carsten was trying to make some posts back.)
 
HD7000 supports Mantle.

The fact that AMD's performance isn't up to par in some specific D3D11 games doesn't mean jack; there are plenty of D3D11 examples where NVIDIA isn't performing at the level "they should" either.

The "special" driver hasn't improved their performance in any significant amounts most titles, either, except in SLI.
 
Good question. BF4 apparently does run really badly on the DX11 path. More specifically, AMD's DX11 path. AMD's DX11 performance in this Gaming Evolved title is so bad that the gap with NV cards dwarfs the performance difference in any GameWorks title.

[Benchmark chart: bf4mp_1920.png (source: PCLab.pl)]

So what if I play with your question a bit:

Has Mantle created a context where AMD was not able to optimize their DX11 drivers, or to work with DICE developers to optimize the game for their GPUs? And has it prevented them from releasing a "special" DX11 driver that increases DX performance for those titles? No.

Ever consider how ironic/hypocritical it looks for AMD to call NV out while shafting their own users -- the owners of HD 5000, 6000, and 7000 series cards -- in these Mantle titles, whose source code they have no problem accessing?

Actually, maybe it would be more appropriate for AMD to thank NVIDIA for making their top-dog CPUs suck less compared to Intel CPUs in all the DX11 games out there that don't have a Mantle port (like, 99% of them?).
*(I think this is also the point Carsten was trying to make some posts back.)
IMHO:
AMD is in the minority of the market: it has ~20% of the iGPU and ~35% of the dGPU market, but they won the consoles, so it makes business sense to capitalise on that win in order to increase market share.
Developers already have perfectly optimized shaders for the GCN-based consoles, so the only showstopper for AMD is its slow DX11 driver, which has lacked deferred context support since Civilization V (Sep 21, 2010). They don't have the resources to keep improving the DX11 driver either, so the solution is to create a new close-to-the-metal API as similar as possible to the console APIs and to find developers with resources (or even big publishers like EA, which have plenty) interested in differentiating their games on PC. AMD stopped DX11 driver development and switched to Mantle; unfortunately for AMD, there are a lot of devs that will stay on DX11 until DX12. Some of them work with Nvidia, and it's natural for AMD to blame Nvidia's GameWorks for slow performance on AMD cards rather than its own slow DX11 driver. By bashing Nvidia in the press they are also trying to discourage other developers from developing on Nvidia cards, since Nvidia's DX11 driver is the fastest now (thanks to competition with Mantle). AMD will have a very hard time with its inferior DX11 driver if devs continue to develop on Nvidia cards the way Ubisoft did.
HBAO+ is disabled by default in Watch Dogs (so there is no extra penalty from it on ultra settings), TXAA is unavailable for AMD GPUs, and all the shaders were ported from the consoles (and are perfectly optimized for GCN - http://michaldrobot.files.wordpress.com/2014/05/gcn_alu_opt_digitaldragons2014.pdf ). HBAO+ itself is cache coherent and much faster than HDAO (you won't find that slow AMD shader on consoles). The only company to blame for slow performance on AMD cards is AMD.
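
For reference, the "deferred contexts" mentioned above are the D3D11 mechanism for recording draw commands on worker threads and replaying them on the render thread. A minimal sketch of the pattern follows (illustrative structure only, not code from any particular engine or driver; error handling and the actual draw calls are omitted):

```cpp
// D3D11 deferred-context pattern: record on a worker thread, replay on the
// immediate context. Whether this helps depends heavily on whether the
// driver supports command lists natively or re-records them internally.
#include <d3d11.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Worker thread: record commands into a command list.
ComPtr<ID3D11CommandList> RecordWork(ID3D11Device* device)
{
    ComPtr<ID3D11DeviceContext> deferred;
    device->CreateDeferredContext(0, &deferred);   // one per recording thread

    // ... set state and issue Draw*() calls on `deferred` here ...

    ComPtr<ID3D11CommandList> commandList;
    deferred->FinishCommandList(FALSE, &commandList);  // close the recording
    return commandList;
}

// Render thread: replay the recorded commands on the immediate context.
void SubmitWork(ID3D11DeviceContext* immediate, ID3D11CommandList* commandList)
{
    immediate->ExecuteCommandList(commandList, FALSE);
}
```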
 
Good question. BF4 apparently does run really badly on the DX11 path. More specifically, AMD's DX11 path. AMD's DX11 performance in this Gaming Evolved title is so bad that the gap with NV cards dwarfs the performance difference in any GameWorks title.
Even AMD's own testing reveals a problem. Look here: in Thief, a lowly regular 780 is able to blast past a 290X using DX at 1080p!

[Benchmark chart: AMD-Thief-Mantle-Test.png]
 
Even AMD's own testing reveals a problem. Look here: in Thief, a lowly regular 780 is able to blast past a 290X using DX at 1080p!

[Benchmark chart: AMD-Thief-Mantle-Test.png]

That's fairly natural: NVIDIA surely worked on optimizing their D3D drivers for Thief. In all likelihood AMD didn't, and why would they when AMD users have Mantle?
 
Good question. BF4 apparently does run really badly on the DX11 path. More specifically, AMD's DX11 path. AMD's DX11 performance in this Gaming Evolved title is so bad that the gap with NV cards dwarfs the performance difference in any GameWorks title.

[Benchmark chart: bf4mp_1920.png (source: PCLab.pl)]

So what if I play with your question a bit:

Has Mantle created a context where AMD was not able to optimize their DX11 drivers, or to work with DICE developers to optimize the game for their GPUs? And has it prevented them from releasing a "special" DX11 driver that increases DX performance for those titles? No.

Ever consider how ironic/hypocritical it looks for AMD to call NV out while shafting their own users -- the owners of HD 5000, 6000, and 7000 series cards -- in these Mantle titles, whose source code they have no problem accessing?

Actually, maybe it would be more appropriate for AMD to thank NVIDIA for making their top-dog CPUs suck less compared to Intel CPUs in all the DX11 games out there that don't have a Mantle port (like, 99% of them?).
*(I think this is also the point Carsten was trying to make some posts back.)

Other reviews don't see that; I don't know how this site arrived at these results.
 
That's fairly natural: NVIDIA surely worked on optimizing their D3D drivers for Thief. In all likelihood AMD didn't
Exactly the point.
and why would they when AMD users have Mantle?
Because of other hardware that doesn't support Mantle? Or because of other DX features that Mantle has yet to implement?

The real question here is: Is AMD intentionally hobbling DX (through optimizations omission) to improve the image of Mantle?


These are 3 months old, well before NVIDIA's latest optimized drivers.
 
That's fairly natural: NVIDIA surely worked on optimizing their D3D drivers for Thief. In all likelihood AMD didn't, and why would they when AMD users have Mantle?
Exactly.
I understand that owners of AMD GPUs without Mantle support won't be too happy about this, but from a business point of view, you can't blame AMD for spending its very limited optimization resources on GPUs that actually show up in benchmarks.
 
Exactly the point.

Because of other hardware that doesn't support Mantle? Or because of other DX features that Mantle has yet to implement?

The real question here is: Is AMD intentionally hobbling DX (through optimizations omission) to improve the image of Mantle?

These are 3 months old, well before NVIDIA's latest optimized drivers.

As Silent_Guy pointed out, non-Mantle GPUs (i.e. non-GCN ones) barely show up in benchmarks anymore. Plus, since they're VLIW4/5, they probably require somewhat different kinds of optimization anyway.

As far as I know, there's nothing you can do in D3D that you cannot in Mantle.
 
The real question here is: Is AMD intentionally hobbling DX (through optimizations omission) to improve the image of Mantle?
There's no proof and I don't speak for AMD here, but from my observation the answer is no. Separate people work on Mantle, DX, and OpenGL, and their goal is to make their respective driver perform as well as possible. As you can imagine, GCN gets more attention than older architectures.

Look at OpenGL for example. AMD's OpenGL team sees Mantle as a challenge and even partnered with other vendors to present at GDC. I'm sure there are exceptions, but most engineers (not just at AMD) take pride in what they work on and want it to be the best it can be.
 