AMD Mantle API [updating]

The former are TWIMTBP titles where you'd expect NVIDIA to lead, the latter are Mantle titles, where AMD's D3D11 performance isn't particularly relevant.

COD:AW launched on Xbox One, which was the featured platform. Although the PC version might be a TWIMTBP title, it's quite a stretch to say you'd expect Nvidia to lead. In fact, I keep hearing chatter about how AMD's console dominance will lead to PC dominance. Maybe later...
 
The former are TWIMTBP titles where you'd expect NVIDIA to lead,
This is not about leading, it's about impacting the user choice of CPU.
the latter are Mantle titles, where AMD's D3D11 performance isn't particularly relevant.
Disagree, we've already discussed how DX11 performance is still highly relevant and important. And instead of improving it, AMD seems to be taking quite a few steps backward.
 
Impressive results for nV, so basically they seem to have completely negated any advantage Mantle had?
You can't really make such comparisons at the "API level" when the game in question is built around DX11's limits and capabilities.
 
The former are TWIMTBP titles where you'd expect NVIDIA to lead, ...
Maybe. But that's of little relevance for those who buy GPUs to play games.

...the latter are Mantle titles, where AMD's D3D11 performance isn't particularly relevant.
If that's something AMD wants to hide behind, it's going to be a temporary reprieve.

I wonder: if you disallow games from brand-comparative benchmarks because they're TWIMTBP titles, the AMD equivalent, or because a Mantle version exists, how many blockbuster games are left?
 
One of the apparent downsides to this transitional period, and possibly long-term, is that Mantle handing the reins to developers means a shift in how much effort goes into iterating the game+API+GPU system as a whole.

Now that the developer has so much more control, why should they care if AMD's benchmark scores don't beat Nvidia so long as the game is playable?
Nvidia can put more elbow grease into a driver that does all those complex optimizations.
Surely AMD can do the same thing, that is, unless it relinquished the opportunity to do things under the table by explicitly granting developers that control...

So the conventional model lets Nvidia put its resources towards iterating improvements in a game's experience, while AMD's improvements are gated by how much the developer/publisher want to do.

The Oxide engine benchmark is an example of it. Nvidia iterated a bunch of times to optimize the almost anti-optimal philosophy of that engine, and Nvidia could show results several iterations deep to Oxide's one implementation of Mantle.
 
After reading the whole article, I find the results very surprising (didn't realize CPU still had such a big impact on game performance) and more damning than I expected.
Even with Mantle included, a $40 more expensive Intel CPU is not sufficient for an R9 290X to surpass a GTX 970.

Going forward, the really interesting question is whether DX12 will improve Nvidia's performance over DX11 as much as Mantle (and DX12) improved AMD's.

My guess is that it won't. If it does, then AMD is in real trouble.
 
There seem to be a few implications of the low-overhead, developer-controlled API model.
1) There is more choice and control, and "let's not and say we did" is one of them.
2) Just because an option exists doesn't mean you're capable of exploiting it well.
3) Low overhead does not manufacture peak performance if the architecture's peak isn't there.
 
Article reeks of Nvidia-sponsored tripe. Yes, let's focus on a subset of games that are Nvidia-sponsored, with lawyered-up black-box code that can't be optimized by AMD or the developer even if they wanted to, and for good measure let's ignore that Mantle even exists as a testing option. This is what happens in titles where the developers optimize equally for both AMD and Nvidia.

http://www.gamegpu.ru/images/stories/Test_GPU/Action/Lara_Croft_and_the_Temple_of_Osiris_/test/lara_1920.jpg
http://www.gamegpu.ru/images/stories/Test_GPU/Action/Lara_Croft_and_the_Temple_of_Osiris_/test/lara_2560.jpg


 
One of the apparent downsides to this transitional period, and possibly long-term, is that Mantle handing the reins to developers means a shift in how much effort goes into iterating the game+API+GPU system as a whole.

Now that the developer has so much more control, why should they care if AMD's benchmark scores don't beat Nvidia so long as the game is playable?
That's an interesting point. I don't know if it's true or not: it all depends on where typical optimizations are done. If the majority is in the shader compilation, or in how the chip internal memory hierarchy is being used, I don't think it will make a huge difference.

The Oxide engine benchmark is an example of it. Nvidia iterated a bunch of times to optimize the almost anti-optimal philosophy of that engine, and Nvidia could show results several iterations deep to Oxide's one implementation of Mantle.
I'm really not familiar with this engine or the way Nvidia optimized for it.

There seem to be a few implications of the low-overhead, developer-controlled API model.
1) There is more choice and control, and "let's not and say we did" is one of them.
2) Just because an option exists doesn't mean you're capable of exploiting it well.
3) Low overhead does not manufacture peak performance if the architecture's peak isn't there.
Did you miss a word in 1) ?

Another point: I suspect that these lower-level interfaces will give developers way more rope to hang themselves. We've already seen that Thief doesn't do Mantle well on Tonga, and that AMD is blaming the developers. I have no reason to believe that AMD is lying, but I would love to know what exactly makes it so chip-specific: every other GCN GPU gets better performance in Thief with Mantle.

Article reeks of Nvidia-sponsored tripe.
Of course, of course. That's true of all articles out there that don't match your personal beliefs.

Yes, let's focus on a subset of games that are Nvidia-sponsored, with lawyered-up black-box code that can't be optimized by AMD or the developer even if they wanted to, and for good measure let's ignore that Mantle even exists as a testing option.
Except that they don't ignore Mantle: they include BF4 and Civ: Beyond Earth.

This is what happens in titles where the developers optimize equally for both AMD and Nvidia.
Of all games that are out there, you're using Lara Croft as a fair and balanced comparison?
 
To be fair, it's not the main Lara Croft game, it's Temple of Osiris with the isometric engine. I doubt it's a high-end engine, though.
Ok, I stand corrected on that one then.

Anyway: it's probably futile to convince anyone that one or the other site is paid off or not by this or that party. Just as it is impossible to include the results of all games...

(My personal belief is that pay offs are way less common than some think it is...)
 
I'm really not familiar with this engine or the way Nvidia optimized for it.
It's some engine that was used to tout the high draw call count enabled by Mantle.
The developers want Absolute Freedom for game devs, which means avoiding all the batching and state re-use tricks normally needed to work around the cripplingly low number of draw calls that can be squeezed through standard DX11, since those tricks constrain flexibility in adding materials, properties, or effects to objects.

One possibility is that this leads to thousands and thousands of very similar or identical calls, and Nvidia was able to optimize some of the most obvious ones.
Nvidia's retort to Mantle was a DX11 driver set that beat it, and its PR included a list of functions they optimized, as well as a graph showing how successive versions of Nvidia's driver chipped away at Mantle's lead.

Oxide spoke to the reasonable costs of implementing Mantle. They did not revise significantly beyond the first implementation.
Regardless of how good a tool is, it's tough for a first draft to survive three or four go-arounds by one of the leading optimization teams in graphics.
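The batching idea referred to above can be made concrete with a toy sketch. This is purely hypothetical illustration (the function names, call counting, and numbers are all invented, not Oxide's or any driver's actual code): it shows why grouping draws by shared render state slashes the number of API calls a high-overhead DX11-style driver has to process, and why skipping that trick produces thousands of near-identical calls.

```python
from collections import defaultdict

def submit_naive(draws):
    """One state bind plus one draw call per object: maximum flexibility,
    the style Mantle was pitched for, but murder on a high-overhead API."""
    calls = 0
    for state, mesh in draws:
        calls += 2  # bind state, then issue the draw
    return calls

def submit_batched(draws):
    """Group objects sharing render state so each state is bound once
    per group: the classic DX11-era workaround."""
    groups = defaultdict(list)
    for state, mesh in draws:
        groups[state].append(mesh)
    calls = 0
    for state, meshes in groups.items():
        calls += 1            # bind the shared state once
        calls += len(meshes)  # one draw per mesh (fewer still if instanced)
    return calls

# 10,000 objects using only 20 distinct materials (numbers invented)
draws = [(i % 20, f"mesh{i}") for i in range(10_000)]
print(submit_naive(draws))    # 20000 API calls
print(submit_batched(draws))  # 10020 API calls
```

With hardware instancing the batched path can collapse each group to a single call, which is exactly the kind of pattern a driver team can also detect and optimize behind the application's back.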


Did you miss a word in 1) ?
Several, I suppose.
1) There is more choice and control, and "let's not and say we did" is one choice.


Another point: I suspect that these lower-level interfaces will give developers way more rope to hang themselves.
Or more landmines to blame on the victim when they step on them.

We've already seen that Thief doesn't do Mantle well on Tonga, and that AMD is blaming the developers. I have no reason to believe that AMD is lying, but I would love to know what exactly makes it so chip-specific: every other GCN GPU gets better performance in Thief with Mantle.
Should we blame the devs? If they got it right with every other GCN chip, it looks like they were doing what should be expected of them. It seems like Mantle is exposing the reality that removing the barriers to low-level features means exposure to low-level flaws.
 
Maybe. But that's of little relevance for those who buy GPUs to play games.

The point is that you can't draw general conclusions from those two samples; they're not likely to be representative.

If that's something AMD wants to hide behind, it's going to be a temporary reprieve.

I wonder: if you disallow games from brand-comparative benchmarks because they're TWIMTBP titles, the AMD equivalent, or because a Mantle version exists, how many blockbuster games are left?

You shouldn't disallow them, just remember that they exist and be careful not to give undue weight to one category or the other when benchmarking.
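The "undue weight" point can be shown with a toy calculation (every number and category split here is invented, not from any real review): pooling all titles into one average lets whichever sponsored bucket has the most games dominate, while averaging per category first gives each bucket equal weight.

```python
from statistics import geometric_mean

# Hypothetical relative scores (GPU A / GPU B) per game, bucketed by
# sponsorship; the buckets and figures are made up for illustration.
scores = {
    "twimtbp": [1.10, 1.15, 1.08],  # three NVIDIA-sponsored titles
    "mantle":  [0.92],              # one Mantle title
    "neutral": [1.00, 0.98],
}

# Naive pooled mean: the largest bucket dominates the result.
pooled = geometric_mean([s for bucket in scores.values() for s in bucket])

# Per-category mean first, then a mean of those means: each bucket
# counts equally regardless of how many titles it contains.
balanced = geometric_mean([geometric_mean(b) for b in scores.values()])

print(round(pooled, 3), round(balanced, 3))
```

With these invented numbers the pooled figure sits noticeably above the balanced one, purely because the sponsored bucket contributes three samples to the other buckets' one or two.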
 
Wasn't Mantle supposed to be open anyway? All this debate would disappear if AMD delivered on their promises.

June 2014:

"I know that Intel have approached us for access to the Mantle interfaces, et cetera," Huddy said. " And right now, we've said, give us a month or two, this is a closed beta, and we'll go into the 1.0 [public release] phase sometime this year, which is less than five months if you count forward from June. They have asked for access, and we will give it to them when we open this up, and we'll give it to anyone who wants to participate in this."

http://www.pcworld.com/article/2365909/intel-approached-amd-about-access-to-mantle.html

http://wccftech.com/amd-public-mantle-sdk-coming-year-nvidia-intel-free/
 