AMD GPU14 Tech Day Event - Sept 25th

On current games, wouldn't it be true that a straight Mantle port won't give a major performance benefit, because those games have learned the hard way not to use a lot of draw calls in the first place?
I think it'll be a while before a studio making a PC-only game with their own engine considers Mantle, precisely because of this.

Which games out there fall precisely into this bracket? One of my favourites does: ArmA. If ever there was an engine that needed a complete re-think then that's it. Not going to happen, either.

What other performance-sensitive games fall into this bracket?

If this is true, the real potential of Mantle is for rendering techniques that can only be done with many draw calls and that are currently avoided (and that will continue to be avoided in the DX version).
Which is any technically high-end "console-first" game.

It's depressing what happened with one notable console-first game, Rage.
 
If the biggest advantage of Mantle is fast draw calls, achieved by bypassing a whole bunch of generic Windows driver overhead (error checking etc.), then there's probably still quite a bit of value in making it look like a DX call while bypassing the weight of the real DX layer?
I think this is a question of rendering-engine architecture. repi says that multi-threading of draw calls in D3D has been a total failure - performance hasn't improved. That seems to imply that to get past this problem requires a deeper change.

On the consoles, that deeper change is about symbiosis: access to GPU memory, command buffers and GPU state is truly fine-grained, enabling fine-grained draw call usage.
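To make "fine-grained" concrete, here is a purely hypothetical C++ sketch of that console-style model: the engine writes state and draw packets straight into command-buffer memory it owns, with no validation layer in between. None of these types, names or packet formats come from a real console SDK or from Mantle; they only illustrate the granularity being described.

#include <cstdint>

// Made-up packet opcodes; real hardware has its own formats.
constexpr uint32_t kSetStatePacket = 0x1;
constexpr uint32_t kDrawPacket     = 0x2;

// The engine owns this memory and appends commands directly.
struct CmdBuffer { uint32_t* cursor; };

inline void SetGpuState(CmdBuffer& cb, uint32_t reg, uint32_t value)
{
    *cb.cursor++ = kSetStatePacket;
    *cb.cursor++ = reg;    // raw state/register slot
    *cb.cursor++ = value;
}

inline void Draw(CmdBuffer& cb, uint32_t vertexCount)
{
    *cb.cursor++ = kDrawPacket;
    *cb.cursor++ = vertexCount;
}

At this level a draw is a few word-sized writes into memory, which is why draw calls on consoles can be almost free compared to going through a full driver stack.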

Constant buffers (from D3D10) are a nice example of a more fine-grained approach to rendering. Changing the constants supplied to a shader became minimal cost. As a side effect it took away draw calls, too.

The headline might read "fewer draw calls", but what was actually required was a change in the engine to use a simple feature relating to constants.

In other words, getting to "more efficient and more draw calls" requires fairly deep changes.
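As a minimal illustration of that constant-buffer model (C++ against D3D11, which inherits it from D3D10; the struct layout, names and parameters here are my own assumptions, not anyone's actual engine code):

#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>

// Per-draw data packed into a single constant buffer. The buffer must be
// created with D3D11_USAGE_DYNAMIC and D3D11_CPU_ACCESS_WRITE for Map.
struct PerDrawConstants
{
    DirectX::XMFLOAT4X4 world; // per-object transform
    DirectX::XMFLOAT4   tint;
};

void DrawObject(ID3D11DeviceContext* ctx, ID3D11Buffer* cb,
                const PerDrawConstants& constants, UINT indexCount)
{
    // One cheap whole-buffer update replaces many per-constant changes.
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(ctx->Map(cb, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, &constants, sizeof(constants));
        ctx->Unmap(cb, 0);
    }
    ctx->VSSetConstantBuffers(0, 1, &cb);
    ctx->DrawIndexed(indexCount, 0, 0);
}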
 
Caused by the overhead which Mantle would eliminate?

Well, I doubt it's black and white, and I have no idea from a developer's point of view whether they would prefer to use 20,000 draw calls instead of 2,000, but I'm not sure DICE and AMD would state "up to 9x draw calls for the same performance" if they didn't actually have test cases which showed that.
 
So, using those numbers, we would expect that it should be able to do ~18,000 @ ~100 fps, rather than 17,000 @ 41 fps, in a supposed best-case scenario?
 
Shouldn't it be ~36k draw calls @ ~100 FPS if it enables you to get 9 times more draw calls per second, and you could now do 4k draw calls @ 101 FPS?
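For what it's worth, the arithmetic behind those guesses, assuming the draw-call rate is the CPU-side bottleneck and the claimed 9x applies linearly (both assumptions, not anything AMD has stated):

4,000 draws/frame × 101 frames/s ≈ 404,000 draws/s
404,000 draws/s × 9              ≈ 3.6M draws/s
3.6M draws/s ÷ 100 frames/s      ≈ 36,000 draws/frame

The same reasoning turns a 2,000 @ 100 fps baseline into ~18,000 @ ~100 fps.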
 
In our experience, DX11 deferred contexts give no performance gains at all, the opposite actually, so we have completely given up on them, which is sad but true.
See my slide #34 here:
http://www.slideshare.net/DICEStudio/directx-11-rendering-in-battlefield-3
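For readers who haven't used them, this is roughly the deferred-context pattern that slide discusses, as a bare C++ sketch (device, immediate and indexCount are assumed to exist elsewhere, and error handling is omitted): draws are recorded on per-thread deferred contexts, and the resulting command lists are played back on the immediate context.

// Create a deferred context for a worker thread.
ID3D11DeviceContext* deferred = nullptr;
device->CreateDeferredContext(0, &deferred);

// Worker thread: record state and draw calls as usual.
deferred->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
deferred->DrawIndexed(indexCount, 0, 0);

// Bake the recorded work into a command list.
ID3D11CommandList* cmdList = nullptr;
deferred->FinishCommandList(FALSE, &cmdList);

// Main thread: submit the recorded command list.
immediate->ExecuteCommandList(cmdList, FALSE);
cmdList->Release();
deferred->Release();

The hope was that recording would scale across cores; the DICE experience above is that in practice the driver serializes enough of the work that it doesn't pay off.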

Interesting! So current drivers are not as 'parallelized' internally as you would expect them to be. It seems like a big task to completely change the driver architecture for modern multi-core CPUs. I also read the 'Batch Batch Batch' presentation your slide mentions, which shows that the CPU can't keep up with the GPU if you keep increasing the number of batches (or draw calls).

Mantle definitely seems a positive step to remove that bottleneck, then. It reminds me of AMD's 'Close to Metal' GPGPU API from not so long ago. :smile:
 
Gaffer translated a very interesting Dutch interview with an AMD rep:
http://www.neogaf.com/forum/showpost.php?p=83960129&postcount=471

Skynner: “Because Mantle is part of Frostbite 3, the technology can be used in a lot of games in the coming year.”

Skynner: “Mantle will make it easier for developers to port games to the PC.”

Skynner: “Efficiency, performance and making a bridge between PC and consoles are the reasons why we have developed Mantle.”


Hardware.Info: “Zooming in on compatibility: by using Mantle, developers will be programming directly against the GCN shaders. Isn't that a problem for your future? What if you have to move to a new GPU architecture? Will current Mantle games still work?”

Skynner: “Good point. I can say the following about it: just as with developing APIs, we also set goals when we develop GPUs. They have to be a certain amount faster than the previous generation. While I can’t talk about future generations, you can imagine that compatibility with Mantle will be on the requirements list.”


Hardware.Info: “And then about its speed. Battlefield 4 will be released in October as a DirectX game. We will be getting an update in December which will allow us to run the game with Mantle. What kind of difference in performance can we expect?”

Skynner: “I can’t say anything about that yet. Don’t forget that Battlefield 4 is still in its Beta phase and the same applies to Mantle. We have Beta game code on Beta API code. This is not the stage where we can make statements about performance, whether we want it or not.”

Hardware.Info: “Can you talk about it in broad terms at least? Are we talking about single percents, tens of percents, or even more than that?”

Skynner: “Let me say this: we won’t develop a completely new API just to get a 3 or 4 percent gain in performance. The performance gain will be significant.”


:)
 
I can't wait for those benchmarks. We'll finally be able to see how much performance PCs are leaving on the table thanks to DX. Really happy with AMD for this.
 
I'm looking out for details on what they will do (or not do) about maintaining Mantle backwards compatibility.
The abstractions kept a lot of design evolution from becoming too disruptive for software development, and that was one less factor holding back experimentation in GPU hardware design.

For the sake of a thought experiment, what would have happened if the need to comply with the low-level details of earlier architectures had arisen earlier?
What if the desire for ease of development across platforms had led to Mantle being introduced in the Xenos and R600 time frame?
 
3dilettante said:
For the sake of a thought experiment, what would have happened if the need to comply with the low-level details of earlier architectures had arisen earlier?
What if the desire for ease of development across platforms had led to Mantle being introduced in the Xenos and R600 time frame?

I doubt it would/will have a large impact either way. G80 was 7 years ago. I would guess AMD feels good enough about GCN (especially considering the console wins) to be content to iterate on it for several years.
 
I doubt it would/will have a large impact either way. G80 was 7 years ago. I would guess AMD feels good enough about GCN (especially considering the console wins) to be content to iterate on it for several years.

Depending on how low-level Mantle is, GCN would have needed to maintain design continuity with a Vec4+1 architecture, or with a VLIW5 design whose memory system was incompatible with x86.
At least I didn't ask what would have happened if they started with R200 or R300.

As nice as GCN is, there are still things that need improvement, and if enough low-level things become part of Mantle, future changes become defined by all the old things they don't conflict with.
I'm hoping Mantle isn't so low level that it drops out of DirectX API abstractions and shoots past the level of HSA's virtual ISA, unless there is a clear demarcation between a virtual GPU and portions that are clearly implementation-specific.

A lot of lower-level things can be done without getting caught up in specifics that hopefully don't persist in the next GCN installments.

And finally, would anyone be interested in a "content" AMD? They tend to undershoot even when not treading water.
 
For the sake of a thought experiment, what would have happened if the need to comply with the low-level details of earlier architectures had arisen earlier?
What if the desire for ease of development across platforms had led to Mantle being introduced in the Xenos and R600 time frame?

Yeah, this is my main concern about this. As great as I think it is, if it locks AMD into GCN for the next 8 years then that's very bad. NV can probably afford to stick with DX in that case and just rely on architecture advancements to drive its performance forwards.

There's a clear balance that needs to be struck, but Mantle makes that one hell of a lot more interesting.
 
Yeah, this is my main concern about this. As great as I think it is, if it locks AMD into GCN for the next 8 years then that's very bad. NV can probably afford to stick with DX in that case and just rely on architecture advancements to drive its performance forwards.

There's a clear balance that needs to be struck, but Mantle makes that one hell of a lot more interesting.

It's not like AMD doesn't know what kind of architectures they are going to ship for the next 3-5 years, and they can design the Mantle API accordingly. It's unlikely they would be stupid enough not to think about tomorrow.
 
So it's not as easy as writing Mantle on top of some nicely selected or purpose-built intermediate ISA, is it? I have no clue about graphics ISAs, but is their abstraction power somewhat bounded by the data flow inside the GPU, or maybe by other factors?
 
It's not like AMD doesn't know what kind of architectures they are going to ship for the next 3-5 years, and they can design the Mantle API accordingly. It's unlikely they would be stupid enough not to think about tomorrow.

I think the same, and I don't think it's much of a problem for AMD to add new architectures to Mantle and evolve it while maintaining coherency, and even easy multi-platform porting between old and new architectures. It's not as if the road AMD (and Nvidia) have taken with GPUs (and GPGPU) will change completely tomorrow. Of course performance may not be the same, and old GCN GPUs won't benefit from all the advances of the new hardware plus Mantle (2, 3, etc.), but isn't that already the case with the current APIs and desktop GPUs?

The development cycle for GPUs is long, and the next architecture is in the works for a long time (4-5 years) before being released to the public.

Won't it be easier to implement new features, or even to evolve the GCN1-based consoles, when you have the advantage of low-level API access? And Sony and MS have the option to upgrade the APU/GPU in three years and release a PS4 v2 and an XB One v2 with a better GPU; they have already done that in the past (not to mention that with process advances they could get some real gains in TDP and performance).

Mantle could evolve to include next-generation architecture features. Old architectures won't benefit from them, but it would still mean a performance gain when you port a game from a console's GCN 1.0 to a PC's GCN "3.0".

We're talking about Mantle, but even looking at Nvidia's next architectures, Maxwell and Volta, with the possibility that they include ARM-based processors in their GPUs for some tasks, Nvidia will also need something with fairly low-level hardware access if they want programmers to take advantage of it, depending on what road they take.
 