AMD Mantle API [updating]

HBAO has been described enough in the original paper and follow-up tech reports for someone to implement it themselves. In fact, haven't folks already done this? The GameWorks implementation would just be for convenience; it's hardly required.

Furthermore, there are reasonable alternatives if someone doesn't want to implement it for some reason.
 

NVIDIA at least makes a clear distinction between HBAO and HBAO+
 
Yeah but they describe HBAO+ a fair bit (interleaved rendering+IIRC some temporal reprojection)... it's not hard to understand what's going on.

And like I said, ultimately it doesn't matter if it's "exactly the same pixel output as HBAO+ by NVIDIA"... the general principles are not a secret, so implementing something similar from scratch is totally doable if desired.
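Since the claim above is that the core HBAO idea is reproducible from the public papers, here is a deliberately tiny 1-D sketch of that principle (horizon angles over a height field). This is an illustration of the general technique only, not NVIDIA's HBAO/HBAO+ implementation; the function name, the lack of a distance falloff, and the simple averaging are all simplifications for the sketch.

```python
import math

def hbao_1d(heights, spacing=1.0, radius=8):
    """Minimal 1-D horizon-based AO sketch: for each sample, find the
    maximum elevation (horizon) angle to neighbours within `radius`
    steps in each direction; occlusion ~ average of sin(horizon)."""
    n = len(heights)
    ao = []
    for i in range(n):
        occ = 0.0
        for step in (-1, 1):                 # march left, then right
            horizon = 0.0                    # flat tangent plane assumed
            for d in range(1, radius + 1):
                j = i + step * d
                if not (0 <= j < n):
                    break
                rise = heights[j] - heights[i]
                angle = math.atan2(rise, d * spacing)
                horizon = max(horizon, angle)
            occ += math.sin(horizon)
        ao.append(1.0 - occ / 2.0)           # 1 = fully open, 0 = occluded
    return ao

flat = [0.0] * 32
valley = [0.0] * 16 + [-4.0] + [0.0] * 15    # one deep pit
print(hbao_1d(flat)[16])     # open sky: stays bright
print(hbao_1d(valley)[16])   # pit bottom: noticeably darker
```

The real 2-D algorithm marches several screen-space directions per pixel against the depth buffer and weights by distance, but the horizon-angle core is the same.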
 
http://www.firaxis.com/?/S=5845faee...eb5f6cb11/blog/single/why-we-went-with-mantle

Firaxis on Mantle, some highlights:
1. What is important about Mantle?
Because Mantle is so new, and so different, the development cost is higher than normal. In order to understand why it’s worth it, you need to understand just how important Mantle is.

2. What does Mantle Buy You?
Simply put, Mantle is the most advanced and powerful graphics API in existence. It provides essentially the same feature set as DX11 or OpenGL, and does so at considerably lower runtime cost.
[...]
Civilization, it turns out, requires a significant amount of rendering to generate our view of the world, and that in turn means we are required to make many, many more draw calls than you might expect.
[...]
Besides being more efficient, core per core, Mantle also enables fully parallel draw submission (this has been attempted before, but never with the same degree of success). Until now, the CPU work of processing the draw calls could only be executed on one CPU core. By removing this limitation, Mantle allows us to spread the load across multiple cores and finish it that much faster.
[...]
Finally, the smallness and simplicity of the Mantle driver means that it will not only be more efficient, but also more robust. Over time, we expect the bug rate for Mantle to be lower than D3D or OpenGL. In the long run, we expect Mantle to drive the design of future graphics APIs, and by investing in it now, we are helping to create an environment which is more favorable to us and to our customers.
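The parallel draw submission idea Firaxis describes can be shown as a toy model: worker threads each record draws into their own independent command list (no shared state, so no locking while recording), and the main thread then submits the lists in a fixed order. The function and command names here are invented for illustration; they are not Mantle API calls.

```python
from concurrent.futures import ThreadPoolExecutor

def record_commands(batch):
    # Each worker builds an independent command list; recording needs
    # no synchronization because nothing is shared between workers.
    return [("draw", mesh) for mesh in batch]

def submit_parallel(meshes, workers=4):
    # Split the scene into per-thread batches.
    chunk = (len(meshes) + workers - 1) // workers
    batches = [meshes[i:i + chunk] for i in range(0, len(meshes), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        lists = list(pool.map(record_commands, batches))  # map preserves order
    # Submission order stays deterministic even though recording ran
    # in parallel -- this is the part single-threaded APIs serialized.
    return [cmd for command_list in lists for cmd in command_list]

queue = submit_parallel([f"mesh{i}" for i in range(10)])
```

The point of the model is only the shape of the win: recording, which dominates CPU cost, scales across cores, while the final ordered submit stays cheap.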
 
FYI - from (another) dev's point of view:

http://www.oxidegames.com/2014/05/21/next-generation-graphics-apis/

We heard nothing of the development of a new version of D3D12 until sometime after the early tests of Mantle were indicating that this radically different API model could work, and work well – data which we shared with Microsoft. What prompted Microsoft to redesign DX is difficult to pin down, but we'd like to think that it would never have happened if Mantle hadn't changed the game. We are delighted by DX12's architectural similarities to Mantle, and see it as a validation of the pioneering work that Oxide was privileged to be part of.

Does D3D12 mitigate the need for Mantle? Not at all. Though it is difficult to determine without more complete information on D3D12, our expectation is that Mantle will have the edge over D3D12 in terms of CPU performance. This is because the GCN architecture by AMD is the most general of all the current GPUs, and therefore the driver needs to do a bit less.
 
Dan said:
Admittedly, this was after advocating no API at all caused the hardware architects’ faces to pale a bit too much.
I do chuckle a bit at these sorts of claims (they are not uncommon). Developers have all the documentation that they need to write their own driver/interface for Intel and AMD hardware for a while now. Hell, they even have a big head start from the open source Mesa stuff. But yet they don't... because realistically the claim is based on the silly notion that game developers are rock stars and driver folks have no idea what they're doing. For various reasons game devs are allowed to be mouthy about driver issues, but driver guys have to silently fix 90% of broken games and get blamed for the other buggy 10% that they don't yet have workarounds for. Note that I am neither a driver developer nor a game developer, but I get to see both sides regularly.

Sometimes I do wish that someone would go ahead and write a path that directly targets (each version of) Intel and AMD hardware interfaces. I want to see how well that goes for users when they get a new piece of hardware and can't run the game (or run it slower) because the developers haven't written a path for it yet/ever. That said, I don't think it's actually worth ruining a game just to satisfy my small desire to see the practical reality of this play out :)

/rant

I'll also note that while I have a lot of respect for Dan, I don't agree with his view of how D3D12 was developed (i.e. what Dave quoted). As several IHVs and Microsoft have all stated, we've all been working on this stuff for a long while and similarities in design to Mantle are more due to GPU hardware design realities than anything else. Once you're beyond those broad strokes (i.e. GPUs have page tables and mostly read their state from memory these days) Mantle and D3D12 are *not* the same. I think it's also fair to say that D3D12 and Mantle both take the majority of their learnings from console APIs rather than coming out of the blue. Obviously a lot of the same sources were giving feedback to both Microsoft and AMD which also contributes to the similarities but fundamentally it's really hardware/physics stuff at work here.

But whatever, most of the history here is mostly irrelevant... the point is there will be a fast path for all modern GPUs on Windows. If you're somehow still driver limited in D3D12, you're almost certainly doing something stupid. I see no reason that AMD wouldn't be able to write a D3D12 driver that is as efficient as (or more) their Mantle driver, but time will tell.
 
Developers have all the documentation that they need to write their own driver/interface for Intel and AMD hardware for a while now. Hell they even have a big head start from the open source Mesa stuff. But yet they don't...Sometimes I do wish that someone would go ahead and write a path that directly targets (each version of) Intel and AMD hardware interfaces. I want to see how well that goes for users when they get a new piece of hardware and can't run the game (or run it slower) because the developers haven't written a path for it yet/ever.

You're missing the point. Just having the docs for a particular chip is not sufficient, and not what we've been asking for. Yes, we have the ability to write our own drivers for specific parts, but we don't have the bandwidth to maintain those drivers for every new part. We also don't have the ability to deliver new drivers on the day the hardware ships, and we don't have the ability to walk down the hall and talk to the hardware people when we run into undocumented quirks.

What is needed is something that is like x86 in that it is low level, and like DX/OGL, in that it is well-defined and forward compatible. If an IHV published an IHV-specific command protocol, built the necessary hooks into the major OSes, and committed to supporting it across future hardware generations, would they get support? We know they would, because Mantle already has.
 
Welcome Josh, nice to have you contribute here :)

You're missing the point. Just having the docs for a particular chip is not sufficient, and not what we've been asking for. Yes, we have the ability to write our own drivers for specific parts, but we don't have the bandwidth to maintain those drivers for every new part. We also don't have the ability to deliver new drivers on the day the hardware ships, and we don't have the ability to walk down the hall and talk to the hardware people when we run into undocumented quirks.
No, that is exactly my point. In the same paragraph people claim that they don't have the resources to do all these things, but then implicitly dismiss them as easy problems and complain about poor driver engineering when there are bugs (even if they are in application code...). Turns out GPU hardware interfaces and designs aren't as stable as CPUs...

For reference, I was responding specifically to Dan's quote about wanting "no API"... you already have that. In reality, you really do want an API of some form, you just want it to solve all of the above problems while introducing zero overhead. And I'd love if games ran at infinite speed too, but I'll settle for 60 Hz for now and maybe 120 once I get my Oculus dev kit :)

What is needed is something that is like x86 in that it is low level, and like DX/OGL, in that it is well-defined and forward compatible.
So along with that are you willing to give up on any new features for 5 years? Are you willing to pay a cost for basically putting a driver in hardware? I'm not sure how much you know about x86 decoders but I don't imagine you want to slap one of those on the front of every execution unit on a GPU (RIP Larrabee :p, although it had a separate vector instruction set of course).

Thus you really have two options... either slow down/stop the pace of GPU innovation in the name of hardware interface convergence or continue to have a translation layer, even if it's fairly thin.

If an IHV published an IHV-specific command protocol, built the necessary hooks into the major OSes, and committed to supporting it across future hardware generations, would they get support?
There is nothing exciting or high overhead about the hardware command protocol itself. I see no reason to go lower level than what Mantle and D3D12 already do - they have already hit the major problematic driver points for modern GPUs.

And the whole forward compatibility issue is a bigger one than you might think. The proof is in the pudding there and won't be known for several years at least... but do you really think we're not going to run into the same issues of today's "low-level" APIs mapping poorly to GPUs 10+ years from now that we have today? The further "forward" you want the API to be compatible the more problematic it becomes in general, and PC gamers expect a huge tail given what DX/GL have been able to provide to this point.
 
I see no reason that AMD wouldn't be able to write a D3D12 driver that is as efficient as (or more) their Mantle driver, but time will tell.
They could start by writing a decent DX11 driver first. In the StarSwarm benchmark (Oxide's own engine), NVIDIA cards wipe the floor with AMD cards on the DX11 path, while also giving the Mantle path a run for its money.
 
Compared to what?

The number of games released and announced since its introduction, DX11, the number of fingers on my hands...

I had an argument with someone in another forum recently about how awesome Mantle (and low-level PC APIs in general) is, and his counter-argument was that its impact on PC gaming as a whole is virtually nil. And with only two games and a tech demo currently supporting it, and no other big game announcements that I'm aware of (engines don't count unless we also have word of games running on those engines that support the API), it was certainly a difficult argument to defend against. Perhaps there are some huge UE4 and CryEngine announcements just around the corner that you're dying to tell us about - I truly hope there are, but right now all I see are big game announcements and launches (wolf, witcher 2, watch dogs, batman, ac unity etc...) and not a hint of Mantle support, despite some of those games having huge CPU requirements and thus being perfect Mantle candidates.

We want more Dave, please make it happen :)
 
pjb, most developers didn't even know about Mantle before you did and it takes time to make games. Plus AMD was limiting the number of developers with access to Mantle as it wasn't close to being out of beta when it was announced.

If it hadn't been announced so early you wouldn't think the adoption is slow. I guess that's the double edged sword of doing a marketing push for something that wasn't ready for consumers yet.
 
The "most general"? :-|

I guess, for instance, the fact that Mantle/GCN only has two types of buffers: memory (linearly mapped) and images (tiled/swizzled for better spatial cache coherence), while DX11 at least has all different kinds (structured, append, vertex, index, etc.) which you can't alias (for instance, a compute shader can't write to an index buffer), and I guess DX12 will need at least some of them to support all the wanted hardware.
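The aliasing point can be mimicked in miniature: one untyped allocation written by one pass and reinterpreted as a typed view by another, which is roughly what "buffers are just memory" buys you. This is only an analogy using Python's `memoryview`; the "passes" and names are hypothetical stand-ins, not any real graphics API.

```python
import struct

storage = bytearray(16)                # one untyped allocation
# "Compute pass": write three native-endian uint32 indices into it.
struct.pack_into("III", storage, 0, 0, 1, 2)
# "Draw pass": reinterpret the same bytes as an index-buffer view,
# with no copy and no typed-buffer restriction in between.
indices = memoryview(storage).cast("I")[:3]
print(list(indices))
```

Under DX11's typed-buffer model this kind of reuse requires either a copy or simply isn't expressible; with plain memory the reinterpretation is free.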

Perhaps there are some huge UE4 and CryEngine announcements just around the corner

CryEngine support was announced at GDC.
 
Welcome Josh, nice to have you contribute here :)

In reality, you really do want an API of some form, you just want it to solve all of the above problems while introducing zero overhead.


So along with that are you willing to give up on any new features for 5 years? Are you willing to pay a cost for basically putting a driver in hardware? I'm not sure how much you know about x86 decoders but I don't imagine you want to slap one of those on the front of every execution unit on a GPU (RIP Larrabee :p, although it had a separate vector instruction set of course).

Thus you really have two options... either slow down/stop the pace of GPU innovation in the name of hardware interface convergence or continue to have a translation layer, even if it's fairly thin.

We're probably always going to need an API of some kind, in order to allow the shader ISAs to evolve and eventually to translate commands. But the API can be a lot leaner and closer to the hardware than it is today. A standardized command format would do the trick. That's what I mean by 'like x86': not in the sense that it's burned into hardware, but in the sense that the instruction stream is forward compatible.
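As a toy illustration of what a forward-compatible command stream could look like: self-describing (opcode, length, payload) packets let an old consumer skip opcodes it doesn't understand instead of breaking on them. The opcodes and packet layout here are invented for the sketch; no real API defines them.

```python
import struct

# Made-up opcodes; OP_FUTURE stands in for a command added after an
# older "driver" shipped.
OP_DRAW, OP_SET_STATE, OP_FUTURE = 1, 2, 99

def encode(packets):
    """Serialize (opcode, payload) pairs as <u32 op><u32 len><payload>."""
    out = b""
    for op, payload in packets:
        out += struct.pack("<II", op, len(payload)) + payload
    return out

def decode_known(stream, known_ops):
    """Replay only opcodes this consumer understands; the length field
    lets it skip unknown packets, which is the forward-compat trick."""
    seen, off = [], 0
    while off < len(stream):
        op, length = struct.unpack_from("<II", stream, off)
        off += 8
        if op in known_ops:
            seen.append((op, stream[off:off + length]))
        off += length                  # unknown packets skipped by length
    return seen

blob = encode([(OP_SET_STATE, b"blend"), (OP_FUTURE, b"????"),
               (OP_DRAW, b"mesh0")])
old_driver = decode_known(blob, {OP_DRAW, OP_SET_STATE})
```

An x86-style guarantee would then amount to committing that future hardware keeps accepting old streams, while new commands degrade gracefully on old consumers.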
 