AMD Mantle API [updating]

Oh, me too: I have a 7950.

But the point here is that we're talking about an entire API set that goes into competition with DX, not PhysX; it's a potential danger to competition in the industry, and consequently to the end consumer, no doubt.

Mantle doesn't compete with D3D, it just supplements it.
 
Oh, me too: I have a 7950.

But the point here is that we're talking about an entire API set that goes into competition with DX, not PhysX; it's a potential danger to competition in the industry, and consequently to the end consumer, no doubt.

Well all standards have to start somewhere, and it's often from companies that have the money to do the initial work. Who's to say that a low level API for those that want to hit the metal can't exist alongside a high level DirectX? It's of benefit to AMD's customers, regardless of what any competitor does or does not do.
 
the point here is that we're talking about an entire API set that goes into competition with DX,
You have no point, all you need is help parsing the English language
What is Microsoft's reaction to the Mantle move? Some would guess they like it; some would guess they hate you for it...

RK - I can't obviously comment on Microsoft, but the thing I can say is we have a great relationship with Microsoft. We are one of the best partners they have in terms of moving DirectX forward. We move DirectX forward, we work with Microsoft on every version of DirectX and we will continue to do that and obviously we wouldn't surprise them with anything like Mantle so you can read between the lines.

Microsoft has likely known Mantle was coming all along.
 
This is a VI thread, not a "is mantle evil?" thread. I don't think we have a shortage of those. ;)

I'm not sure you can consider the new cards to be simply a chip. They are a whole product that includes drivers (DX, OGL and Mantle), along with Truesound, and the whole eco-system that AMD are trying to build around it and off the back of their console domination.

All these surrounding features make it more attractive than without. It is finally evidence of AMD stepping up and going the extra distance that many customers (including myself) have been asking for.
 
Mantle doesn't compete with D3D, it just supplements it.

Let's see at APU 2013.

You have no point, all you need is help parsing the English language


Microsoft has likely known Mantle was coming all along.

Do you really expect AMD to publicly say: "hey Microsoft, it's war!!"? :D

Then, of course, the impact of Mantle has to be evaluated in perspective; it may go the way of 3dfx's Glide (I hope) or be a huge success. Who knows at this point?

I'm not sure you can consider the new cards to be simply a chip. They are a whole product that includes drivers (DX, OGL and Mantle), along with Truesound, and the whole eco-system that AMD are trying to build around it and off the back of their console domination.

All these surrounding features make it more attractive than without. It is finally evidence of AMD stepping up and going the extra distance that many customers (including myself) have been asking for.

This!
 
I'm not sure you can consider the new cards to be simply a chip. They are a whole product that includes drivers (DX, OGL and Mantle), along with Truesound, and the whole eco-system that AMD are trying to build around it and off the back of their console domination.

All these surrounding features make it more attractive than without. It is finally evidence of AMD stepping up and going the extra distance that many customers (including myself) have been asking for.

I hear you, but we already have two mantle threads. And remember this is the "3D Architectures & Chips" sub-forum. Mantle (or other drivers, software, etc.) don't quite fall under this category. To be clear, I'm not saying we can't discuss mantle, but could we do it in one place in the right sub-forum? I don't feel like I'm asking a lot here...

From here on out, off-topic posts will be deleted. I'll try to move these posts to the right thread when I have more time (my break only lasts so long!).
 
Do you really expect AMD to publicly say: "hey Microsoft, it's war!!"? :D

Then, of course, the impact of Mantle has to be evaluated in perspective; it may go the way of 3dfx's Glide (I hope) or be a huge success. Who knows at this point?

From what I understand, it's absolutely not a problem for MS, let alone Sony, who will even be at APU 2013 for the presentation of Mantle (along with some MS guy, it seems).

But let's leave the Mantle discussion to the software thread.
 
I would like to shed some more light on the issue of draw calls. It was mentioned in AMD's press conference that developers want to increase the number of draw calls on the PC.

After some investigation, it is my understanding that developers chose to have lower numbers of draw calls, with each draw call carrying a larger body of data, so as not to become CPU dependent. I see no real issue here.

On consoles:
Many draw calls, little data for each one.
On PCs:
Few draw calls, large data for each one.

In the end things seem to even out; however, some developers want to increase the number of draw calls and hence delve deeper into the territory of CPU dependency. I can only imagine two reasons for this:

1 - Tap into more CPU performance to increase the number of objects drawn on screen, while at the same time putting low-end and mid-range CPUs at risk.

2 - Increase draw calls at the expense of the data carried in each call, to achieve some kind of flexibility in programming their games.

In either case, I don't see how Mantle's help would come into play.
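The console/PC trade-off above can be sketched with a toy cost model. This is purely illustrative: the function, its parameters, and all the numbers are made up for the sake of the example, not measurements of any real API.

```python
# Toy cost model (made-up numbers): CPU submission time is roughly a fixed
# overhead per draw call plus a cost proportional to the data attached.

def submission_cost(num_calls, data_per_call,
                    per_call_overhead=10.0, per_byte_cost=0.01):
    """Estimated CPU cost of issuing num_calls draw calls."""
    return num_calls * per_call_overhead + num_calls * data_per_call * per_byte_cost

# "Console style": many draw calls, little data each.
console = submission_cost(num_calls=10_000, data_per_call=100)
# "PC style": few draw calls, large data each (same total data).
pc = submission_cost(num_calls=100, data_per_call=10_000)

print(console)  # 110000.0
print(pc)       # 11000.0
```

With the per-call overhead dominating, the many-small-calls style costs roughly ten times as much CPU for the same total data, which is exactly why PC developers batch; lowering that per-call overhead is what would let them stop.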
 
I can only imagine two reasons for this:

1 - Tap into more CPU performance to increase the number of objects drawn on screen, while at the same time putting low-end and mid-range CPUs at risk.

2 - Increase draw calls at the expense of the data carried in each call, to achieve some kind of flexibility in programming their games.

In either case, I don't see how Mantle's help would come into play.
I may misunderstand you, but how can you not see it? One can use more draw calls to achieve more flexibility without overwhelming the CPU. Or use even more draw calls on high-end systems. You fulfill the second point with a reduced effect of the latter part of your first point (running into a CPU limit).
 
I find it particularly interesting the lengths people will go to trying to paint Mantle in a bad light. I hope to see these same people say the same things when/if Nvidia releases their counterpart.
We've had the complete gamut, from nitpicking semantics to selective out-of-context quoting to theoretical examples without knowing the full details. Congratulations, "3D enthusiasts", though the name hardly fits you anymore.
 
I may misunderstand you, but how can you not see it? One can use more draw calls to achieve more flexibility without overwhelming the CPU.
So how can you do more draw calls without overwhelming the CPU? To my humble knowledge, draw calls stem from the CPU; how can you reduce CPU involvement in that matter through Mantle?
 
So how can you do more draw calls without overwhelming the CPU? To my humble knowledge, draw calls stem from the CPU; how can you reduce CPU involvement in that matter through Mantle?
Because one has fewer API requirements and restrictions, and can fill the command buffer however one likes (as long as it results in legal code understandable by the GPU)? Less synchronization is enforced (it is done explicitly only when one needs it, not implicitly for everything on each draw call). That kind of stuff. But I have to admit that I'm not so deep into graphics programming.
But one example I keep hearing is that the D3D deferred contexts don't really work well. They are meant for submitting draw calls from multiple threads, but in practice this is often no faster, or even slower, than having the engine reserve a single thread for all the draw calls (probably because some requirement of the D3D API implies a synchronization point, so it simply doesn't thread well; no idea, but that can be observed in practice). If Mantle gets rid of this requirement, multiple cores could in principle multiply the number of possible draw calls (probably by even more than the core count).
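The pattern being described can be sketched in plain Python: each worker thread records draw commands into its own private list, so recording needs no locking, and synchronization happens once at a single submission point. This is an illustrative toy, not a real graphics API; `record_commands` and `submit` are hypothetical names.

```python
# Sketch of multi-threaded command recording: per-thread private command
# lists, concatenated at a single explicit submission point.
from concurrent.futures import ThreadPoolExecutor

def record_commands(object_ids):
    """Record a command list on one thread; purely local, no shared state."""
    return [("draw", obj) for obj in object_ids]

def submit(command_lists):
    """Single submission point: splice the per-thread lists together in order."""
    queue = []
    for cmds in command_lists:
        queue.extend(cmds)
    return queue

objects = list(range(8))
chunks = [objects[i::4] for i in range(4)]   # split the work across 4 threads
with ThreadPoolExecutor(max_workers=4) as pool:
    lists = list(pool.map(record_commands, chunks))
gpu_queue = submit(lists)
print(len(gpu_queue))  # 8 commands, recorded in parallel without any locks
```

The point of the sketch is that nothing in the recording step shares state, so it scales with core count; an API that forces implicit synchronization during recording throws that property away.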
 
After some investigation, it is my understanding that developers chose to have lower numbers of draw calls, with each draw call carrying a larger body of data, so as not to become CPU dependent. I see no real issue here.

Developers do that because they have to work around the CPU dependency; it adds complexity to coding and is far less flexible. Reducing the CPU workload of draw calls is a simple solution.
 
I find it particularly interesting the lengths people will go to trying to paint Mantle in a bad light. I hope to see these same people say the same things when/if Nvidia releases their counterpart.
Forums would be a better place if people with a victim complex stopped whining about how their favorite platform is being unfairly treated. It happens on both sides, but your confirmation bias just makes you blind to it.

(Don't worry: I'm going to resist the temptation to go through your posting history to see if you've ever blasted PhysX for being proprietary. ;) )
 
But one example I keep hearing is that the D3D deferred contexts don't really work well. They are meant for submitting draw calls from multiple threads, but in practice this is often no faster, or even slower, than having the engine reserve a single thread for all the draw calls (probably because some requirement of the D3D API implies a synchronization point, so it simply doesn't thread well; no idea, but that can be observed in practice).
Sounds like either an example of idiotic programming or a horrible bug! Draw calls can't exploit multiple cores? That is the definition of an epic failure right there!
 
Sounds like either an example of idiotic programming or a horrible bug! Draw calls can't exploit multiple cores? That is the definition of an epic failure right there!
There are spec problems that make it hard, compounded with drivers that try to play all sorts of games to get a leg up over competitors. To be fair, GPU hardware is not exactly helpful in how you have to give it commands (a lot of heavy lifting is still done on the CPU, even on consoles), so while the standard APIs need to improve here, there are improvements that could happen all around, in both hardware and software.

That said, the alternative direction of simply removing the need for the CPU to spoon-feed the GPU by providing more expressive power and increased ability of the GPU to (efficiently) handle pulling more commands/data directly from memory is just as viable. As I've mentioned, stuff like bindless textures and indirect dispatch is moving in this direction to the point that you can draw large parts of the scene with one draw call if you want, without any decrease in flexibility.

It's simply a question of whether the CPU is responsible for going through a "thin" API that creates a "dense" command buffer with lots of state commands, or whether the application basically sets up its own "command" format in memory, does a big dispatch, and has the GPU pull whatever it needs from that memory. Probably as an industry we should pursue making both paths better, but to be honest we only need one way...

So again, it's unfair to claim that the rest of the industry hasn't been addressing this too. Don't you guys remember the threads about how many more draw calls you can get by using bindless textures (GL extension)? And that was even before all the new indirect stuff in GL...
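To make the "GPU pulls commands from memory" direction concrete: OpenGL's indirect drawing takes an array of `DrawArraysIndirectCommand` records (four 32-bit unsigned integers: count, instanceCount, first, baseInstance), and `glMultiDrawArraysIndirect` issues all of them from a single API call. The record layout below is the real GL one; the packing helper and the example draw values are my own illustration of how such a buffer could be built on the CPU (or written by a compute pass on the GPU).

```python
# Pack an array of OpenGL DrawArraysIndirectCommand records into one buffer,
# as would be uploaded for a glMultiDrawArraysIndirect call.
import struct

def pack_indirect_commands(draws):
    """draws: list of (count, instance_count, first, base_instance) tuples."""
    return b"".join(struct.pack("<4I", *d) for d in draws)

# Three logical draws, one submission-side buffer (example values).
buf = pack_indirect_commands([
    (36, 1, 0, 0),      # e.g. one cube (36 vertices)
    (36, 100, 0, 1),    # 100 instances of the same mesh
    (1024, 1, 36, 0),   # a larger mesh starting at vertex 36
])
print(len(buf))  # 48 bytes: 3 commands x 4 uints x 4 bytes each
```

Once the commands live in a buffer like this, adding more draws costs the CPU almost nothing per draw; the per-call API overhead is paid once for the whole batch.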
 
Sounds like either an example of idiotic programming or a horrible bug! Draw calls can't exploit multiple cores? That is the definition of an epic failure right there!

The biggest single problem is that the programming models of DX and GL implicitly synchronize everything. Synchronization is expensive, especially when the sync primitives need to travel across cores.

The problem isn't that the system is programmed badly -- it's that it's *designed* badly. All actions simply should not be implicitly synchronized.
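The implicit-versus-explicit distinction can be shown with a toy Python illustration (assumed structure, not a real driver): if the API takes a lock around every call, lock traffic grows with the call count; if the application synchronizes explicitly only at submission, it pays once per batch.

```python
# Toy model: count lock acquisitions under implicit (per-call) vs explicit
# (per-batch) synchronization.
import threading

class CountingLock:
    """Wraps a lock and counts how often it is acquired."""
    def __init__(self):
        self._lock = threading.Lock()
        self.acquisitions = 0
    def __enter__(self):
        self._lock.acquire()
        self.acquisitions += 1
        return self
    def __exit__(self, *exc):
        self._lock.release()

def implicit_sync(lock, num_calls):
    for _ in range(num_calls):   # the API locks on every single draw call
        with lock:
            pass

def explicit_sync(lock, num_calls):
    with lock:                   # the application locks once per batch
        for _ in range(num_calls):
            pass

a, b = CountingLock(), CountingLock()
implicit_sync(a, 1000)
explicit_sync(b, 1000)
print(a.acquisitions, b.acquisitions)  # 1000 vs 1
```

The lock counts are the whole point: an API designed around implicit synchronization forces the first pattern on every application, whether it needs the safety or not.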
 
Forums would be a better place if people with a victim complex stopped whining about how their favorite platform is being unfairly treated. It happens on both sides, but your confirmation bias just makes you blind to it.

(Don't worry: I'm going to resist the temptation to go through your posting history to see if you've ever blasted PhysX for being proprietary. ;) )
I currently use a PhysX card, but I have to use their 2+ year old system software because Nvidia's anti-consumer bullshit left me no option :LOL:

I don't see what that has to do with anything, though.

This is the first news in 10 years to bring something fresh to 3D graphics, instead of the same old baby steps that keep the status quo and milk consumers, and people bend over backwards to discredit it. Thankfully, actual developers asked for it and someone listened to them.
 
Folks this is a mantle thread. If you want to discuss if one ihv is more "open" than another, I'd suggest you start a thread in the proper sub-forum.
 
If I'm not wrong, Nvidia uses something similar in its drivers: a common base with D3D/OGL on top.

What could Mantle do that isn't possible with OGL extensions?

Will we see DirectX 12 as a high-level library on top of to-the-metal APIs?

Yea, I see Mantle as possible only because of AMD's somewhat novel position in the consoles. I can't imagine a Mantle incompatible with either current high-level API--that would seem highly counterproductive as I don't see developers ever clamoring for a 3rd high-level API, but rather a tool which will make programming for either high-level API that much easier and provide superior performance results at the same time.
 