AMD Mantle API [updating]

I'm curious about how the shader code gets compiled with Mantle. I'd assume they'd still need to use the DX or GL shader compiler to do the front-end compilation, which means if you use Microsoft's shader language, you'd still need to use Microsoft's front end compiler. Either that or AMD has had to duplicate the compiler front end?
 
It's supposed to support HLSL.

Yes, I'm aware of that. But HLSL is a MS proprietary language. Does Mantle call DX APIs to compile HLSL? Or did AMD license the DX compiler to embed it in the Mantle runtime? Or did AMD clone the HLSL compiler to create their own, duplicate implementation? That's what I'm curious about.
 
They can also compile HLSL directly to their ISA without going through the DX ASM. I mean, there is no copyright stopping AMD from doing that. Some AMD tools (their shader/kernel analyzer comes to mind) do that already, iirc.
 
Are you sure the tool isn't calling DX in the background to compile HLSL to DX ASM first?
 
I'm not an expert, but if you think about it, this may be a bit of a silly discussion. The way I understand it, AMD writes the drivers that implement the DirectX standards, of which Microsoft is basically only the curator. So instead of talking to its own DirectX-compatible layer, they can talk to Mantle, which probably even shares a subset of the code with the DirectX HLSL implementation that AMD writes itself, but with less overhead and more hardware-specific features.
 
Some thoughts on Mantle and low-level graphics APIs

When AMD announced Mantle, my first reaction to the idea of yet another graphics API, and a low-level one at that, was negative. Like many others, I had visions of the problems of the early GPU days, like those we saw with Glide.

After giving it some thought, I now feel very differently, and I am glad that AMD is heading in this direction (and perhaps NVidia as well in the not too distant future). These are my reasons:

The current gaming and 3D graphics market is completely different from the early days of 3dfx and Glide. In those days, there was one primary platform for 3D graphics, and that was the PC (consoles were in their very early stages). There were more 3D hardware vendors than I could count (or remember) and no common graphics API. There was, however, one primary hardware platform and one primary operating system. Under those circumstances, dozens of vendor-specific APIs all trying to operate on the same hardware and OS platform did not make good sense, and Glide was an example. Also, games were created from the ground up, with the graphics engine a custom part of each game, which would make supporting dozens of APIs, especially for the same platform, unthinkable.

Today, the situation is reversed. There are only a few 3D graphics hardware vendors remaining (NVidia, AMD, Intel, Imagination, etc.). There are now many different OS and hardware gaming platforms (Windows, Linux, iOS, Android, PlayStation, and Xbox, for example). Also, most games use commercial game engines written by just a few companies and used for many different games by many different game studios (CryEngine, Unreal Engine, Source Engine, etc.). In such an environment, the game studios are largely isolated from the 3D graphics API, and it's mostly the game engine companies that need to deal with it. As it is, they need to port their engines to a broad variety of OSes and hardware platforms. Having a few common low-level APIs, each specific to a GPU vendor, would not only greatly speed up graphics and add new features, but would probably also reduce the porting work for the engine companies, since the GPU companies put their graphics hardware on multiple OS/hardware platforms.

So it very likely will be a win, win, win, win for the GPU vendors, the gaming platforms, the game studio/engine companies, and the game customers: better performance, more features, fewer ports, and more platforms.
 
Beyond Mantle

Taking it a step further: if GPU vendors start providing their own low-level graphics APIs that are optimized for their hardware and available across multiple gaming platforms, then it is only a small step further for the GPU companies to add a gaming-specific CPU on the GPU die and run the entire game on the GPU, with its ultra-high-speed memory.

This is almost the inverse of the APU, where the GPU is an add-on to the CPU and is crippled by the CPU's memory and other restrictions.

In this way, a game (and entire game engine) would port completely from one hardware/OS platform to another, with no code changes. In addition, the performance improvements and optimizations would be dramatic. The GPU-specific API (Mantle, etc.) would be adapted to provide a complete game production environment. It would be best, of course, if the GPU manufacturers selected the same open CPU (ARM, almost certainly) to embed in their GPUs and an agreed-upon common set of graphics/game development tools.
 
Yes, I'm aware of that. But HLSL is a MS proprietary language. Does Mantle call DX APIs to compile HLSL? Or did AMD license the DX compiler to embed it in the Mantle runtime? Or did AMD clone the HLSL compiler to create their own, duplicate implementation? That's what I'm curious about.

Recent rulings from courts in the US and EU have established that APIs and computer languages are not copyrightable.
 
Which is definitely a good thing. I'm less interested in the legal aspects and more in the political and technical ones:
If Mantle calls the DX API behind the scenes to do compilation, it ties Mantle to MS' DX implementation.
If AMD wrote their own HLSL front end, it's potentially quite a bit of work to maintain. Compilers can be rather subtle, especially if you're tasked with reimplementing one and ensuring that the behavior of your clone is identical to the original's.
 
I'm not an expert, but if you think about it, this may be a bit of a silly discussion. The way I understand it, AMD writes the drivers that implement the DirectX standards, of which Microsoft is basically only the curator. So instead of talking to its own DirectX-compatible layer, they can talk to Mantle, which probably even shares a subset of the code with the DirectX HLSL implementation that AMD writes itself, but with less overhead and more hardware-specific features.

I could be wrong, but I thought DX drivers do not process HLSL, only MS' IR after the DX runtime compiles it.
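For what it's worth, that's exactly the split you see from the application side in D3D11: D3DCompile (MS' compiler) turns the HLSL text into a bytecode blob, and only that blob is handed to the driver. A minimal sketch, error handling trimmed; the function name is just for illustration:

Code:
#include <d3d11.h>
#include <d3dcompiler.h>  // link with d3d11.lib and d3dcompiler.lib
#include <cstring>

// Stage 1: Microsoft's D3DCompile turns HLSL text into D3D bytecode (MS' IR).
// Stage 2: only that bytecode blob reaches the driver, which lowers it to GPU ISA.
ID3D11PixelShader* CompilePixelShader(ID3D11Device* device, const char* hlsl)
{
    ID3DBlob* bytecode = nullptr;
    ID3DBlob* errors   = nullptr;
    if (FAILED(D3DCompile(hlsl, strlen(hlsl),
                          nullptr,           // source name (diagnostics only)
                          nullptr, nullptr,  // no macros, no #include handler
                          "main", "ps_5_0",  // entry point and target profile
                          0, 0, &bytecode, &errors)))
        return nullptr;                      // 'errors' would hold the compiler output

    ID3D11PixelShader* ps = nullptr;
    device->CreatePixelShader(bytecode->GetBufferPointer(),
                              bytecode->GetBufferSize(), nullptr, &ps);
    bytecode->Release();
    return ps;
}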
 
I don't get why you think this is some difficult feat. This is essentially what happens with GLSL (OpenGL) already. I don't think skipping MS' IR is a very hard problem to solve (and as Gipsel has noted, they probably already "solved it" for their tools).
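For comparison, on the GL side the driver's own compiler consumes the GLSL text directly; there is no vendor-neutral IR stage at all. A minimal sketch, assuming a current GL context (no error checking):

Code:
// The driver receives the raw GLSL source and compiles it straight down to
// its own ISA - no separate bytecode step like D3D's D3DCompile.
const GLchar* glslSource = "void main() { gl_FragColor = vec4(1.0); }";

GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fs, 1, &glslSource, nullptr);  // hand the source text to the driver
glCompileShader(fs);                          // front end + back end in one step

GLuint prog = glCreateProgram();
glAttachShader(prog, fs);
glLinkProgram(prog);                          // final ISA typically produced here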
 
I could be wrong, but I thought DX drivers do not process HLSL, only MS' IR after the DX runtime compiles it.

Who knows? But it's still not rocket science even then. Even the Vita has Shader Model 3.0'+', which is afaik pretty much the same thing.
 
That part of the interview Digital Foundry had with two of the lead architects behind the XB1 makes me think that Mantle may not last too long:
Digital Foundry: DirectX as an API is very mature now. Developers have got a lot of experience with it. To what extent do you think this is an advantage for Xbox One? Bearing in mind how mature the API is, could you optimise the silicon around it?
Andrew Goossen: To a large extent we inherited a lot of DX11 design. When we went with AMD, that was a baseline requirement. When we started off the project, AMD already had a very nice DX11 design. The API on top, yeah I think we'll see a big benefit. We've been doing a lot of work to remove a lot of the overhead in terms of the implementation and for a console we can go and make it so that when you call a D3D API it writes directly to the command buffer to update the GPU registers right there in that API function without making any other function calls. There's not layers and layers of software. We did a lot of work in that respect.
We also took the opportunity to go and highly customise the command processor on the GPU. Again concentrating on CPU performance... The command processor block's interface is a very key component in making the CPU overhead of graphics quite efficient. We know the AMD architecture pretty well - we had AMD graphics on the Xbox 360 and there were a number of features we used there. We had features like pre-compiled command buffers where developers would go and pre-build a lot of their states at the object level where they would [simply] say, "run this". We implemented it on Xbox 360 and had a whole lot of ideas on how to make that more efficient [and with] a cleaner API, so we took that opportunity with Xbox One and with our customised command processor we've created extensions on top of D3D which fit very nicely into the D3D model and this is something that we'd like to integrate back into mainline 3D on the PC too - this small, very low-level, very efficient object-orientated submission of your draw [and state] commands.
Though it still sounds like AMD stands to benefit from its work with MSFT on the XB1 (time to market, whereas Nvidia would have to play catch-up really fast).
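To make the "writes directly to the command buffer" idea concrete, here is a purely hypothetical sketch of such a thin submission path; the packet formats and names below are made up, not real Mantle or D3D interfaces:

Code:
#include <cstdint>

// Hypothetical thin API: the draw call itself appends hardware packets into a
// CPU-visible command buffer - no layers of runtime/driver calls in between.
struct CmdBuffer { uint32_t* cur; };     // write pointer into mapped GPU memory

inline void cmdSetRegister(CmdBuffer& cb, uint32_t reg, uint32_t value)
{
    *cb.cur++ = (0x68u << 24) | reg;     // made-up "set register" packet header
    *cb.cur++ = value;
}

inline void cmdDrawIndexed(CmdBuffer& cb, uint32_t indexCount)
{
    cmdSetRegister(cb, 0x2102, indexCount);  // made-up index-count register
    *cb.cur++ = (0x2Du << 24);               // made-up "draw indexed" packet header
}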
 
I don't think shader compilation is a bottleneck, so it doesn't matter if the DX runtime has to compile them.
Plus, they can be pre-compiled, like Battlefield 2 does.
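And precompiling really is trivial on the app side: run fxc at build time and just load the stored bytecode at run time. A rough sketch (file name made up, error handling omitted):

Code:
#include <d3d11.h>
#include <fstream>
#include <iterator>
#include <vector>

// Build step (offline): fxc /T ps_5_0 /E main /Fo shader.cso shader.hlsl
// Run time: no HLSL compilation at all - just hand the stored bytecode over.
ID3D11PixelShader* LoadPrecompiledPS(ID3D11Device* device, const char* path)
{
    std::ifstream file(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(file)),
                           std::istreambuf_iterator<char>());
    ID3D11PixelShader* ps = nullptr;
    device->CreatePixelShader(blob.data(), blob.size(), nullptr, &ps);
    return ps;
}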

Recent rulings from courts in the US and EU have established that APIs and computer languages are not copyrightable.
Soz for the OT, but how come you need an x86 license? It's only an instruction set, after all.
 
If you want to provide a design compatible with the x86 of 20 years ago, it might be technically possible, although you might not be allowed to tie any marketing to existing product trademarks.
If you want to make a performant, modern x86, many of the implementation details may be patented or certain design features protected for a period of time.

Back on topic, there is no requirement that you get permission to code in x86 assembly.
 
Soz for the OT, but how come you need an x86 license? It's only an instruction set, after all.
Well, Cyrix for example didn't need Intel's blessing, but they had to do all the legwork to make an x86-compatible architecture from the ground up, carefully avoiding specific patent-protected parts.
Of course, that was during the '80s and '90s. Now the technology is so much more complicated that a square-one start-up like that would be pure madness.
 
Well I think you need to give NVIDIA some credit in terms of what they do in the GL extensions space. One could argue that if you use all of their bindless extensions, all the multi-draw indirect stuff, etc. ubiquitously, you're not really using much of core "OpenGL" at that point. AMD - for whatever reason - chose to call it something different rather than a (set of) GL extensions, but that doesn't mean you have to draw a hard line in terms of how the two IHVs are responding to the developer request.
Looks like AMD will do roughly the same with OpenGL extensions.
According to Graham Sellers, OpenGL guy at AMD, the red team will be supporting this open API with some high-performance extensions that will offer performance close to that of AMD's upcoming API, Mantle.
As Sellers claimed, AMD aims to expose all of the hardware of their GPUs with these upcoming high-performance OpenGL extensions, and gamers will be able to get close to theoretical peak performance.
But I admit, it's made up from a few tweets.
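For reference, multi-draw indirect (core since GL 4.3) is the sort of extension functionality being talked about: one call submits a whole buffer full of draws, so the per-draw CPU cost mostly disappears. A sketch, assuming 'indirectBuf' and 'drawCount' describe an already-filled command buffer:

Code:
// Command layout fixed by GL 4.3 / ARB_multi_draw_indirect:
struct DrawElementsIndirectCommand {
    GLuint count, instanceCount, firstIndex;
    GLint  baseVertex;
    GLuint baseInstance;
};

glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                            nullptr,      // offset 0 into the bound buffer
                            drawCount,    // number of draws submitted at once
                            0);           // 0 = commands are tightly packed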
 
I don't get why you think this is some difficult feat. This is essentially what happens with GLSL (OpenGL) already. I don't think skipping MS' IR is a very hard problem to solve (and as Gipsel has noted, they probably already "solved it" for their tools).
AMD's older tools definitely called the D3D compiler for HLSL. There were options for the optimisation flags to use when calling the D3D compiler.

But that's all rather by the by, since there's essentially no meaningful difference between HLSL, GLSL and OpenCL.

That isn't to say AMD is no longer capable of screwing up compilation. But no just-in-time compiler produces reliably good results.

It really would be nice to stop prioritising JIT compilation. I've been moaning about this for years now, and there's still far too much junk coming out of these compilers.

In fact, I would go so far as to say that after two years, the "easy to build compiler tools" GCN is failing miserably at producing lean code. Simple concepts like holding register allocation below the magic figure of 128 still seem to be utterly alien. My mind boggles: "uh yeah, we don't give a damn if suddenly your code runs at half speed because we tripped over the 128 line".
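For anyone wondering why 128 is the magic figure: each GCN SIMD has 256 VGPRs per lane, shared by all resident wavefronts, so (ignoring allocation granularity and the 10-wave cap) the number of waves you can keep in flight is 256 divided by the VGPRs one wave uses. A quick back-of-the-envelope:

Code:
#include <cstdio>

// GCN: 256 vector registers per SIMD lane, shared by all resident waves.
// Waves in flight = 256 / VGPRs-per-wave (integer division = the cliff).
int wavesPerSimd(int vgprsPerWave) { return 256 / vgprsPerWave; }

int main()
{
    printf("128 VGPRs -> %d waves/SIMD\n", wavesPerSimd(128)); // 2: some latency hiding left
    printf("129 VGPRs -> %d waves/SIMD\n", wavesPerSimd(129)); // 1: hiding gone -> ~half speed
    return 0;
}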
 