AMD Mantle API [updating]

The posited direction of targeting major middleware providers has a possible multiplying effect on the ROI.
There are some rather ironic incentives for doing this, even if it does mean paying money.
Offloading more of the driver's traditional run-time work for most of the game market could mean less work for AMD's frequently-mentioned driver team. If AMD has structured its internal development processes sensibly, some of what developers discover in their low-level work might wend its way back to the low-level work AMD does in its driver.
That would be worth paying someone for.

Sure, there could be gamer complaints that AMD doesn't pay enough attention to niche titles or things that aren't benchmarks. Not like today...

This is more of a business consideration, but what would it cost to get a major developer to pay attention to its back-end coding? 10-20 million dollars towards the development of a marquee title whose tech can be deployed elsewhere in a publisher's portfolio?
There's only a handful of major engines, as noted earlier.
Sadly, AMD has blown enough money in a single year on penalties for not using Globalfoundries that it would have to pay for a decade of this before I would consider the effort ruinous.
 
I still will be surprised if they have to pay major engine devs anything. If Mantle can be used to produce near optimal code on both next-gen consoles (and it seems like that's the idea), wouldn't devs save optimization/coding effort and therefore money by using it for both PS4 and XB1 rendering paths? I don't expect moving between platforms to be completely trivial, given the different memory setups, but on that basis alone I expect decent uptake. Hopefully devs then get working, well performing code on an AMD APU/GPU fitted PC with not too much extra effort.

I'm curious if some of the smaller but still performance conscious game developers will end up writing Mantle code, but passing completely on DirectX/OpenGL because they don't have the resources for it.
 
The posited direction of targeting major middleware providers has a possible multiplying effect on the ROI.
Right, I think the situation these days is potentially a bit different than in the past, since so many people use rendering middleware. If you could use it on the consoles as well, that would further incentivize multi-platform games, but it's unclear what Microsoft's and Sony's response to that will be. It doesn't seem like either would be highly motivated to be supportive of that...

AMD could leave some openings for expanding the API when some future, different GPU architecture arrives. There is no point in creating Mantle if it will become obsolete in 5-6 years.
Sure, they could be thinking about forwards compatibility, but it depends how low-level they went. I don't agree that it becoming obsolete in 5-6 years defeats the purpose of it... games made using it will have already earned their money in that time, which is all anyone really cares about (it's fine if it falls back on DX11 on those future cards, which are expected to be faster anyway). It gets a bit stickier if they start implementing features beyond just performance, because then you don't really want to lose those features, but if it's just performance, it's not a huge deal.
 
There's a bunch of questions I have about the reaction of Microsoft and Sony.

How angry would they be?
The next-gen consoles are coming out a notch or two below gaming systems already. In terms of GPU performance, the high water mark is around the level of the 7850.
Mantle is GCN-only, and it makes the 7900 series even more out of reach, makes the 7800 series at worst a bit faster than the consoles if it isn't already, and it won't magic the 7700 series into more than what it is.
The new AMD release pushes the product stack down a notch, so that even more of the lineup is at 7900 level before Mantle is even factored in.
CPU-wise, anything remotely modern from Intel already beats the consoles soundly.

Is it clear they didn't know about this already? They bought into a platform whose maker was very clear about generalizing.

What are they going to do, pout and not buy APUs for their consoles that are launching in a month or less and need to be manufactured and shrunk for 10 years?
Are they going to take a stand against developers they've been praising, or the publishers they need to count on?
Will they not get an AMD APU in 7 or so years for the next consoles, if there are next consoles?

I guess Microsoft could do something to sabotage the next revision of Windows to somehow break Mantle, or refuse to certify hardware or drivers that support it. That'll further their image of not being a domineering platform holder that will retroactively destroy your games if it nets them a buck.
 
Maybe it's a case of a rock and a hard place. With Valve a possible big player with a massive install/user base already, would they really want to push PC gamers off Windows and onto SteamOS by sabotaging Mantle on Windows?
 
Yup, "Mantle" is not open at all, really. Mantle can be used across different platforms (aka "open" platforms), but it is still an IHV-specific API for GCN-equipped products. Since Mantle requires some extra development work vs. industry-standard APIs, and since Mantle is essentially unusable for NVIDIA/Intel/Qualcomm/ImgTech/ARM GPU-equipped products, Mantle will ultimately fail to gain traction in the marketplace, in my humble opinion. AMD will need to pay developers to use Mantle, and that strategy will only go so far considering their dwindling cash balance and heavy debt load.

AMD is really opening up a can of worms here. If all the top companies decided to pursue IHV-specific graphics APIs, the result would be disastrous and chaotic for PC game development. And the ramifications go beyond just a chaotic environment, because as John Carmack said, Sony and Microsoft are surely going to be very upset about this.

I'm going to guess you are incredibly incensed that Nvidia continues to push their proprietary GPU compute solution in PC gaming instead of embracing an open standard?

But either way, to the task at hand: I don't see the problem as long as this is limited to performance increases. DX isn't going anywhere.

If games start offering effects in Mantle that aren't also available in the DX version, then we're just back to where Nvidia is with CUDA and PhysX. And haven't tons of people said that hardware PhysX is great for gaming since you have to start somewhere? And if you don't have an Nvidia card, tough luck, go buy one? At that point Mantle would be no different than Nvidia's PhysX. If you want the extra eye candy, buy a supported card.

That said, I don't like it when PhysX offers something unavailable to "regular" users, and it's rare when that isn't the case. I can only think of Metro, where the developers went out of their way to make sure anything done with CUDA/PhysX was also available to people without the hardware to support it. So if Mantle goes in that direction, I'm not going to like it either.

But if it's just performance enhancements, then I'm fine with that. I'd still be able to pick my graphics card based purely on price (budget) versus performance without having to wonder if I'll miss out on X feature in game Y if I choose graphics card Z as I currently have to do with hardware PhysX based titles. Thank god hardware accelerated PhysX has been a relative failure with low adoption in games.

Regards,
SB
 
And haven't tons of people said, that hardware PhysX is great for gaming since you have to start somewhere? And if you don't have an Nvidia card, tough luck, go buy one?

Not on this board. The general consensus seems to be that it adds something, but you can live without it. And an IHV-agnostic alternative would be much better.
 
How angry would they be?
The next-gen consoles are coming out a notch or two below gaming systems already. In terms of GPU performance, the high water mark is around the level of the 7850.
Mantle is GCN-only, and it makes the 7900 series even more out of reach, makes the 7800 series at worst a bit faster than the consoles if it isn't already, and it won't magic the 7700 series into more than what it is.
The new AMD release pushes the product stack down a notch, so that even more of the lineup is at 7900 level before Mantle is even factored in.
CPU-wise, anything remotely modern from Intel already beats the consoles soundly.

Is it clear they didn't know about this already? They bought into a platform whose maker was very clear about generalizing.

I don't think it will bother Sony or Microsoft at all. These are big companies who build and price their products to address market segments. They build and sell $400 consoles, which is not the same market as $1000+ gaming PCs.

If anything, Sony/MS should see that more companies making better games, more easily and cheaply regardless of platform will improve the games they get on consoles too. Games, and game companies become more viable if they can easily make and sell games into more markets, full stop.
 
There is another option - Microsoft didn't just know about it, they encouraged it. This scenario ends with MS buying AMD at some point in the nearish future.
 
So can we trust AMD to actually make a good Mantle driver? Crossfire without micro stuttering under Mantle?
 
The next-gen consoles are coming out a notch or two below gaming systems already. In terms of GPU performance, the high water mark is around the level of the 7850.
I think ESRAM in XBone makes that particular comparison foolish. Unless 290X is hiding its own ESRAM, PC space with Mantle is going to struggle against that particular architectural feature.

It's like comparing a CPU without L2 cache to one with. It's a huge differentiator.
CPU-wise, anything remotely modern from Intel already beats the consoles soundly.
Only if the consoles aren't running game code.
 
I think ESRAM in XBone makes that particular comparison foolish. Unless 290X is hiding its own ESRAM, PC space with Mantle is going to struggle against that particular architectural feature.

It still has 300+ GB/s. Brute forcing is a possibility. And since we are talking about a long-term commitment to this API, I expect that future AMD architectures will have some form of on-die SRAM.
 
Yup, "Mantle" is not open at all, really. Mantle can be used across different platforms (aka "open" platforms), but it is still an IHV-specific API for GCN-equipped products. Since Mantle requires some extra development work vs. industry-standard APIs, and since Mantle is essentially unusable for NVIDIA/Intel/Qualcomm/ImgTech/ARM GPU-equipped products, Mantle will ultimately fail to gain traction in the marketplace, in my humble opinion. AMD will need to pay developers to use Mantle, and that strategy will only go so far considering their dwindling cash balance and heavy debt load.

I see it as the opposite. We will have more info soon, but it's safe to believe that Mantle is much more similar to the Xbox One and PS4 APIs than to DirectX 11.1. Once a multiplatform engine is made for those two APIs, I think it will be a small effort to move it to Mantle.
What if AMD starts selling the IP of the GCN architecture? Most games will already be optimized for it. They could start operating the way ARM does, but in the GPU space.
 
I think ESRAM in XBone makes that particular comparison foolish. Unless 290X is hiding its own ESRAM, PC space with Mantle is going to struggle against that particular architectural feature.

Surely you're not suggesting that esram makes XB1 somehow comparable to a 290X in performance terms? I guess Haswell should be up there with Pitcairn then!

Only if the consoles aren't running game code.

In game code or any other type of code. They're just faster, full stop.
 
It still has 300+ GB/s. Brute forcing it's a possibility.
Imagine GPU compute without local store. That's the scale of difference. Sure a load of compute algorithms don't even use LS, but those that do don't even have a meaningful fallback when LS isn't available.

Algorithms that break up into producer-consumer stages are going to change gaming on XBone, because the inter-stage buffering is going to be effectively free.

Haswell/Crystalwell is the only other consumer processor that can do that. And it doesn't have the FLOPs to make any meaningful difference, let alone the penetration.
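The producer-consumer point above can be illustrated with a toy model: intermediate results that fit in a small fast buffer never round-trip through main memory, so the hand-off between stages is effectively free. This is a conceptual sketch only; the function and buffer names are invented, and the "scratchpad" merely stands in for on-chip memory like ESRAM or LDS.

```python
# Toy sketch of staged producer-consumer processing. The scratch
# buffer stands in for on-chip memory (ESRAM/LDS): stage outputs
# small enough to stay "on chip" add nothing to the simulated
# main-memory traffic counter; larger outputs "spill" to RAM.

def run_pipeline(data, stages, scratch_size=64):
    main_memory_traffic = 0
    buf = list(data)
    for stage in stages:
        out = [stage(x) for x in buf]
        if len(out) <= scratch_size:
            buf = out                            # free on-chip hand-off
        else:
            main_memory_traffic += len(out)      # spills to "RAM"
            buf = out
    return buf, main_memory_traffic

stages = [lambda x: x * 2, lambda x: x + 1]
result, traffic = run_pipeline(range(8), stages)
print(result[:3], traffic)  # [1, 3, 5] 0 -- small buffers never spill
```

With a working set larger than the scratchpad (say `run_pipeline(range(100), stages, scratch_size=4)`), every inter-stage buffer is counted against main memory instead, which is the cost the ESRAM-less PC pays.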
And since we are talking about a long term commitment on this API, I expect that future AMD architectures will have some form of on-die SRAM.
I think that's possible. Would probably appear in AMD's APUs first, since Crystalwell points to the near future (2-3 years).

I wonder if Mantle can last that long.
 
Guys... there is a huge difference between being willing to disclose a spec for an architecture-specific API and designing a portable API. All indications are that Mantle is designed quite specifically for GCN, and I doubt AMD made any compromises for portability's sake (that was not the goal of it). Thus saying "well, it's open and we'd let other people support it" is likely just PR nonsense, since they know very well that no one else can support it as it is defined.

That's too extreme a statement. We're not talking about exposing the capability to manipulate SIMD lanes at the machine-code level. The low-level (not directly-on-the-metal) programming paradigm of current GPUs is quite similar across architectures; binary command lists, for example, are entirely incompatible in their content, but every architecture uses them, so offering the ability to manage machine programs and state is still quite general conceptually. And that's something DirectX doesn't support; even compiled shaders are virtual-machine abstractions until they hit the driver.

It's a narrowing of scope: from supporting every chip of the past and future and his dog (DirectX), to supporting a kindred set of high-end architectures in a broad sense (Mantle).
From serving the needs of every program/OS/window-compositor of the past and future and his cat (DirectX), to supporting a kindred set of high-end game engines (Mantle).
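The command-list model discussed above can be sketched abstractly: commands are recorded up front by the application, validated once, and handed to the device in a single submission. Everything here is an invented illustration of the general paradigm, not Mantle's actual API.

```python
# Hypothetical sketch of the command-buffer paradigm: explicit state
# binding, up-front recording, one submission. All names are invented
# for illustration; this is not Mantle's (or any real API's) surface.

class CommandBuffer:
    def __init__(self):
        self.commands = []

    def set_pipeline(self, pipeline):
        # State is bound explicitly by the application, not inferred
        # by a driver at draw time.
        self.commands.append(("bind_pipeline", pipeline))

    def draw(self, vertex_count):
        self.commands.append(("draw", vertex_count))

class Queue:
    def __init__(self):
        self.submitted = []

    def submit(self, cmd_buf):
        # One submission hands the whole pre-built list to the device;
        # per-call driver overhead is paid once, at record time.
        self.submitted.append(list(cmd_buf.commands))

cb = CommandBuffer()
cb.set_pipeline("opaque_pass")
cb.draw(3)
cb.draw(6)

queue = Queue()
queue.submit(cb)
print(len(queue.submitted[0]))  # 3 recorded commands in one submission
```

The point of the sketch is that the *concept* (record, then submit) is architecture-neutral even though the binary contents of real command lists are not.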

AMD also talked about the ability to support exotic features of specific GPUs. That is totally compatible with the idea of a still-abstract Mantle API. For CPUs we have special features available to every programmer (AVX, ABI extensions, etc.); they coexist peacefully with the standard x86 instructions. Whoever chooses to write optimized code paths using one of those special features can do so. And that example is fairly low-level feature exploitation; I don't even expect Mantle to go down that far.
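The AVX analogy can be made concrete with a runtime-dispatch sketch: an optimized path runs when a capability is reported, and a standard fallback runs otherwise, with identical results either way. All names and the capability set here are hypothetical; real code would query CPUID or the driver.

```python
# Hypothetical sketch of coexisting code paths: a "special feature"
# path is taken when the capability is detected, otherwise a baseline
# path runs. Capability names are invented for illustration.

def detect_features():
    # A real implementation would query CPUID / the driver; here we
    # simply hardcode an example capability set.
    return {"fast_path"}

def sum_squares_fast(xs):
    # Stand-in for a vectorized (e.g. AVX-style) implementation.
    return sum(x * x for x in xs)

def sum_squares_baseline(xs):
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_squares(xs, features=None):
    features = detect_features() if features is None else features
    if "fast_path" in features:
        return sum_squares_fast(xs)
    return sum_squares_baseline(xs)

# Both paths produce identical results; only performance differs.
print(sum_squares([1, 2, 3], features={"fast_path"}))  # 14
print(sum_squares([1, 2, 3], features=set()))          # 14
```

This is the sense in which exotic per-GPU features can "peacefully coexist" with a common baseline: the optimized path is opt-in and never changes the answer.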

In general, I don't understand the outcry or criticism about this initiative at all. I doubt AMD charges anyone extra for providing Mantle; that is, it absorbs the cost of its development entirely within its driver team or its dev-rel program. As far as that is concerned, no one could care less whether it's used or not, or whether it's useful or not. Especially end-users.
This API is for [us] programmers, and no one else. We make the decision to play with it, and to use it if it works. And honestly (to put it a bit sharply), I wouldn't want anyone outside my team and the specific context I'm programming in telling me for which otherworldly reasons (they don't know _anything_ at all about my context; they literally live in another world than me) I shouldn't use it.

Cheers
 
Surely you're not suggesting that esram makes XB1 somehow comparable to a 290X in performance terms?
Smarter always beats brute force. If there's a smarter alternative, then 290X is dust. Not everything has a smarter alternative (texture sampling and filtering is a great example - it's a brute force problem that's already as smart as it can get).
I guess Haswell should be up there with Pitcairn then!
Problem for Haswell is who's going to code it to the metal?

In game code or any other type of code. They're just faster full stop.
Games on PC have to be written to run on two 2GHz cores. PC gaming is still a slave to the idea that we'd be way past 10GHz single-core CPUs by now.

That's why fast CPUs on PC have not been the key to the difference that PC gaming has held since XB360/PS3 appeared. Remember PCs were still mostly single core when those consoles appeared.
 
I think ESRAM in XBone makes that particular comparison foolish. Unless 290X is hiding its own ESRAM, PC space with Mantle is going to struggle against that particular architectural feature.

It's like comparing a CPU without L2 cache to one with. It's a huge differentiator.
Only if the consoles aren't running game code.
AFAICT, Mantle only handles CPU side command submission and will not expose GDS, for instance.

OpenCL 2.0 has primitives for queues. I wonder if AMD will map them to RAM or to some on-die SRAM.

Consoles have it easy; they can assume only one game is running at a time. A PC GPU has to be open to the possibility of more than one workload using it, so virtualizing on-die SRAM becomes tricky. It would be great if something like it were exposed, though.
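The virtualization problem can be sketched in miniature: when a second workload needs the scratchpad, the first workload's contents have to be saved to backing memory and restored later, a cost a single-app console never pays. This is a toy illustration with invented names, not how any real driver manages ESRAM.

```python
# Toy sketch: a small scratchpad shared between GPU workloads. On a
# console, one game owns it outright. On a PC, a driver would have to
# save/restore its contents when switching workloads; `swaps` counts
# those extra save/restore events.

class Scratchpad:
    def __init__(self, size):
        self.data = [0] * size
        self.owner = None
        self.swaps = 0              # save/restore events so far

    def acquire(self, workload, backing_store):
        if self.owner is not None and self.owner != workload:
            # Spill the previous owner's state to (simulated) RAM and
            # reload this workload's state, if it had any.
            backing_store[self.owner] = list(self.data)
            self.data = backing_store.get(workload, [0] * len(self.data))
            self.swaps += 1
        self.owner = workload

ram = {}
sp = Scratchpad(4)
sp.acquire("game", ram)
sp.data[0] = 42
sp.acquire("compositor", ram)       # forces a save/restore
sp.acquire("game", ram)             # and another
print(sp.swaps, sp.data[0])         # 2 42
```

A console with a single resident game would show `swaps == 0` forever, which is exactly the "consoles have it easy" point.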
 