NVIDIA GameWorks, good or bad?

I hate to poo-poo this because the gains of D3D12/Mantle and the like are legitimate, but this is a poor example. Clearly the only CPU-bottlenecked cases there are on the very highest-end GPUs, and all of them already run the game at >60Hz. Unless you're on a 120Hz monitor, these performance differences are irrelevant.

Real gains are when you take something that wasn't hitting 60Hz (or 30 maybe, depending on the game type) consistently and make it do that. Unfortunately Mantle has no real effect on any of the setups that aren't already fast enough.

This is really why stuff that supports both Mantle and DX11 is never really going to be ground-breaking IMO... you can't do any of the crazy stuff if you need it to still work decently on DX11. From that point of view I think DX12 is a lot more relevant as it covers a broad enough range of hardware that in the near future people can start thinking about targeting it as a baseline.

I have no proof and I don't speak for AMD here, but by my observation the answer is no. Separate people work on Mantle, DX, and OpenGL, and their goal is to make their respective driver perform as well as possible.
I absolutely don't think they are "intentionally" sabotaging anything, but you can't deny that people working on Mantle drivers are by definition not working on DX drivers :) In the case of stuff like driver engineering, more people really does just directly allow you to address more games, so there is a cost to splitting your focus, even if it's a small one. For my part, I just hope AMD is taking DX12 as seriously as or more seriously than Mantle in terms of development time :)
 
If Mantle, DX11, and DX12 share the same shader code, and if that's the place where most game-specific driver optimization work happens (?), then a lot of this can be leveraged between the different APIs. So it's probably not too bad...

But that may not help those with older non-GCN cards if that compiler is different...
 
If Mantle, DX11, and DX12 share the same shader code
They probably will be similar to start with, but there's definitely more functionality available in the newer APIs that optimal implementations will make use of.

and if that's the place where most game-specific driver optimization work happens (?)
In my experience that isn't necessarily true. There are certainly some things that get done via shader replacement, but there's a lot of stuff that gets done more at the API-use level, like removing useless/redundant calls and tweaking hardware state that isn't exposed (on-chip buffers and scheduling, cache allocation, etc.).
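
To make the "removing useless/redundant stuff" point concrete, here's a minimal sketch of the kind of redundant-state filtering a driver (or engine layer) can do behind a D3D11-style API. Purely illustrative; the types and names are made up and don't correspond to any vendor's actual driver code:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical state cache: drop redundant "set" calls before they
// reach the (comparatively expensive) hardware command stream.
struct StateCache {
    uint32_t boundBlendState  = UINT32_MAX; // UINT32_MAX = nothing bound yet
    uint32_t boundRasterState = UINT32_MAX;

    void SetBlendState(uint32_t handle) {
        if (handle == boundBlendState) return; // redundant: skip the work
        boundBlendState = handle;
        EmitHardwareCommand("BLEND", handle);  // only reached on real changes
    }

    void SetRasterState(uint32_t handle) {
        if (handle == boundRasterState) return;
        boundRasterState = handle;
        EmitHardwareCommand("RASTER", handle);
    }

    void EmitHardwareCommand(const char* what, uint32_t handle) {
        std::printf("emit %s state %u\n", what, handle); // stand-in for real work
    }
};

int main() {
    StateCache cache;
    // A naive app might re-set the same state before every draw;
    // the cache absorbs the repeats.
    cache.SetBlendState(7);  // emitted
    cache.SetBlendState(7);  // filtered out
    cache.SetRasterState(3); // emitted
    cache.SetBlendState(7);  // filtered out again
}
```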

Furthermore, there may not be much opportunity for AMD to mess with Mantle shaders, as IIRC the ISVs actually compile them down to hardware-specific code, which helps load times but makes it tougher for the implementation to tweak. While I guess you could still entirely replace a shader, I think the spirit of Mantle is that they don't do that behind the ISVs' backs, so I'd be surprised if they do right now, especially while they are obviously working closely with the current Mantle users.
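
To sketch that trade-off (with purely hypothetical interfaces, not the real Mantle or D3D API): when the driver compiles shaders at load time, it has a natural hook for substituting a hand-tuned version; if the ISV instead ships hardware-specific binaries, loading is fast but there's no per-shader compile step left to intercept:

```cpp
#include <cstdio>
#include <string>
#include <unordered_map>

// Hypothetical illustration of ahead-of-time vs. driver-side shader handling;
// none of these types correspond to any real API.
struct ShaderBinary { std::string isa; };  // hardware-specific machine code

// Model A: D3D11-ish flow. The driver compiles IL at load time, which gives
// it a natural hook to swap in a hand-tuned replacement ("shader replacement").
ShaderBinary DriverCompile(const std::string& il,
                           const std::unordered_map<std::string, ShaderBinary>& driverDb) {
    auto it = driverDb.find(il);                  // driver recognizes a known shader...
    if (it != driverDb.end()) return it->second;  // ...and substitutes its own version
    return ShaderBinary{"compiled(" + il + ")"};  // otherwise compile normally
}

// Model B: Mantle-ish flow as described above. The ISV already compiled down
// to ISA, so the driver just consumes the bytes: fast load, nothing to hook.
ShaderBinary LoadPrecompiled(const ShaderBinary& shipped) {
    return shipped;  // nothing for the driver to rewrite here
}

int main() {
    std::unordered_map<std::string, ShaderBinary> driverDb = {
        {"water_ps", ShaderBinary{"hand_tuned_water_ps"}},
    };
    std::printf("A: %s\n", DriverCompile("water_ps", driverDb).isa.c_str());
    std::printf("B: %s\n", LoadPrecompiled(ShaderBinary{"isv_water_ps_isa"}).isa.c_str());
}
```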
 
As far as I know, there's nothing you can do in D3D that you cannot in Mantle.
Forcing AO for example (through RadeonPro), forcing negative LOD bias, or custom post-process techniques like AA (SweetFX), etc.
I hate to poo-poo this because the gains of D3D12/Mantle and the like are legitimate, but this is a poor example. Clearly the only CPU-bottlenecked cases there are on the very highest-end GPUs, and all of them already run the game at >60Hz. Unless you're on a 120Hz monitor, these performance differences are irrelevant.
Even the gains at 1600p were nearly non-existent.
[Image: Thief Mantle vs. DirectX benchmarks, 2560x1600]


That's why I am talking about increasing the gains with DX11; clearly Mantle has limitations due to its constrained ability to only improve single-threaded CPU overhead (for now). DX11 optimizations would benefit a lot of cases that fall outside this narrow window, and that's what NVIDIA did, improving performance beyond Mantle across different resolutions.
 
Umm, you do know that at that resolution you are going to be mostly GPU bound, even more so than with max settings at 1080p? And Mantle mostly helps out on the CPU side of things currently?
Yup, reading the rest of my post will actually indicate that I know that pretty well. :yes:
 
I hope this blows up and becomes a talking point for the industry; these practices are having a negative effect and need to be stopped.

Intel showing enough interest in Mantle to approach AMD is good to hear, though I doubt they would ever actually support it.


So weird that AMD can claim Mantle is open and people accept these claims. At least GameWorks is built on industry standards and runs on all GPUs. Very much unlike Mantle.
 
So weird that AMD can claim Mantle is open and people accept these claims. At least GameWorks is built on industry standards and runs on all GPUs. Very much unlike Mantle.

Mantle will be open, but it's still in beta.
Also, even if NVIDIA and/or Intel won't support Mantle once it's open, it doesn't affect their users at all; GameWorks, on the other hand, does affect AMD and Intel users.
 
This might be slightly off topic, but since there has been lots of discussion about AMD's DX11 performance (or lack thereof) in CPU-bound cases, I wonder if AMD actually gets to benefit much more from DX12 titles than Nvidia.

Nvidia already seems to be much less CPU-bound, in some cases by a huge amount. If DX12 addresses that, as it should, AMD surely gains a lot compared to the DX11 codepath.
 
So weird that AMD can claim Mantle is open and people accept these claims. At least GameWorks is built on industry standards and runs on all GPUs. Very much unlike Mantle.

What a meaningless distinction. By this standard computer viruses and malware are perfectly acceptable as long as they are written in C++ or another "industry standard" programming language.
 
Mantle will be open, but it's still in beta.
If "open" here means "we won't legally prevent you from implementing the spec that we designed and will always control" then fine, but I have no credit for that what-so-ever. On that note, where's AMD's CUDA driver? Why were they so anti-industry by going and doing their own (inferior) stuff with OpenCL vs. just supporting NVIDIA's "open" standard? Why are they so prideful at the expense of their users? ;)

i.e. it's a BS distinction meant purely to mislead consumers. You can split hairs all you want but for all intents and purposes Mantle is a proprietary API and AMD is making no effort to evolve it into an industry standard.

That said, it's totally fine to have a proprietary API and I have no ill-will against AMD for it. But you don't get points for being "open" too.
 
NVIDIA's CUDA wasn't open back then, was it? IIRC it was behind licensing fees and NVIDIA's discretion over who could support it and who couldn't. In fact there are still some arguments over whether it's truly open [for other IHVs to implement support for their hardware] despite the LLVM compiler out there.

Yes, AMD will hold control of the API, but it will be free for anyone to implement, and the source code will be free as well, which apparently should give others some room to wiggle (on the driver end?), if I understood the interview correctly
 
NVIDIA's CUDA wasn't open back then, was it?
I seem to recall them saying the same sorts of things but it's hard to find the relevant press now given how much stuff in between hits the same keywords...

Yes, AMD will hold control of the API, but it will be free for anyone to implement
Meh, that is the least interesting part of "open" and AMD knows it. You don't think Huddy knows perfectly well that no one in their right mind is going to sign up for an API designed by and for competitive hardware that could at any point change something that you can't support (efficiently or otherwise)? Of course he does, which is why there's no need to put up legal barriers. He can call it "pride" if he wants but in reality there are a lot of details (beyond how you map pixel shaders to SIMD lanes, ironically a fairly irrelevant point in terms of these APIs) that may or may not map well to various architectures and the Mantle design does not consider anything beyond GCN.

It's really this simple: you either care about portability and design for it and talk to other vendors, or you don't. There's no halfway point where you design for your own product and then later say "well, maybe if you mess with it a bit you could kind of support it somewhat efficiently, I guess?". That's pure PR. AMD could support Intel's register specs at the GPU interface level too (or vice versa) - it's all fully documented. Do both companies just have too much "pride" to do it? It's a silly argument.
 
It would have been more believable if they at least allowed the other GPU vendors access to the beta. But they don't even do that. Dave has posted here earlier that this is because they don't want the burden of supporting too many people. I don't doubt that's part of the reason, maybe 10% of it.

The benefits of Mantle are primarily lower CPU overhead. From a fundamental point of view, that means there is a limited, fixed upside, capped by the case where everything is GPU-limited. It also means that you very quickly run into diminishing returns: a competing API doesn't need to be as efficient; even if DX12 only addresses the majority of the warts that Mantle promises to fix, it will be good enough in the vast majority of cases. (Those recent DX11 driver changes of Nvidia's are a nice proof of that.)
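
One way to see the "limited, fixed upside": to first order, frame time is bounded by whichever of the CPU and GPU work takes longer, so cutting CPU overhead only matters while the CPU side is the longer one. A toy model, with all numbers invented purely for illustration:

```cpp
#include <algorithm>
#include <cstdio>

// Toy first-order model: frame time ~ max(CPU time, GPU time).
// Every number below is made up for illustration, not a measurement.
double FrameTimeMs(double cpuMs, double gpuMs) {
    return std::max(cpuMs, gpuMs);
}

int main() {
    const double gpu1080p  = 8.0;  // hypothetical GPU-light workload
    const double gpu1600p  = 16.0; // hypothetical GPU-heavy workload
    const double cpuDx11   = 12.0; // hypothetical API/driver overhead
    const double cpuMantle = 6.0;  // same workload with lower CPU overhead

    // CPU-bound at 1080p: halving CPU time is a real win...
    std::printf("1080p: %.1f ms -> %.1f ms\n",
                FrameTimeMs(cpuDx11, gpu1080p), FrameTimeMs(cpuMantle, gpu1080p));
    // ...but GPU-bound at 1600p the same CPU saving changes nothing.
    std::printf("1600p: %.1f ms -> %.1f ms\n",
                FrameTimeMs(cpuDx11, gpu1600p), FrameTimeMs(cpuMantle, gpu1600p));
}
```

That is also consistent with the 1600p Thief numbers discussed above: once the GPU is the longer side, lower CPU overhead simply stops showing up in the frame rate.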

AMD must realize this very well, and it probably means that Mantle will have a short lifespan. It's then better to milk it as long as they can without running the risk of others implementing it too, and to keep the API unavailable for others.

(I do wonder why Intel was so interested in Mantle. Do their GPUs ever run into a case where they are CPU-limited?)
 
(I do wonder why Intel was so interested in Mantle. Do their GPUs ever run into a case where they are CPU-limited?)

Intel are smart, and despite popular belief AMD aren't stupid either; any API reducing reliance on the CPU is going to cause the big boys to prick their ears up and listen.

No doubt Intel are perfectly happy for AMD to exist, if only to keep the regulators at bay, but as soon as AMD comes up with something which might piss on Intel's parade, you're damn right they would want to "experiment" with it.

As it stands, AMD should hand over all their hard work to their rivals soon enough.
 