AMD Mantle API [updating]

That is incorrect. Today, low end discrete graphics make up a relatively small % of market share for NVIDIA, and certainly nothing close to a "majority".

Obviously you have the numbers proving it. Or "my ass".

Intel Iris Pro already outperforms AMD integrated graphics in some games. The performance of Intel integrated graphics is also increasing at a much faster rate than AMD integrated graphics.
I'm sure AMD will be really concerned about Iris Pro the day that games devs start to cater for that 0.00001% of users.

Nope. High-quality gaming would be NVIDIA discrete graphics or AMD discrete graphics, and AMD is nowhere near half the user base. Choice of CPU is somewhat important to gaming too, and Intel is the overwhelmingly preferred choice there as well.
Do you ever give actual numbers backed by sources, ams? Ever?
 
Your 80%, 85% and 90% are still wrong. As of August 2013:

[Attached chart: discrete GPU add-in-board market share figures, August 2013]

Look, this isn't rocket science. For the nth time, what I said is that for pre-built systems that use Intel CPUs AND discrete graphics, NVIDIA GPUs are used the majority of the time (>90%). Even if that percentage is overestimated (I think the 90% number I mentioned earlier is valid for Intel-powered laptops with discrete graphics, but may not be valid for desktops), it is irrelevant because the actual percentage is so overwhelmingly high. What you quoted above is add-in-board market share with zero regard to CPU. The actual percentage of pre-built Intel CPU systems with NVIDIA discrete graphics inside is way higher than the 60+% seen above, because NVIDIA's market share in pre-built systems is disproportionately skewed towards Intel CPUs. This is not a difficult concept to grasp: in pre-built systems that use Intel CPUs + discrete graphics, NVIDIA is by far the overwhelming favorite in this day and age.

Let me give an example. Let's say that 75% of gaming PCs built with discrete graphics have Intel CPUs, and 25% have AMD CPUs (which is probably pretty realistic). If 80% of Intel CPU systems have NVIDIA graphics (the remainder with AMD graphics), while 10% of AMD CPU systems have NVIDIA graphics (the remainder with AMD graphics), then NVIDIA's market share would be 62.5% (which is very close to the AIB market share listed above). So 90% is probably an overestimate, but 80% is pretty realistic.
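Spelling that arithmetic out with the assumed splits from the example (75/25 CPU share, 80% and 10% NVIDIA attach rates):

0.75 × 0.80 + 0.25 × 0.10 = 0.60 + 0.025 = 0.625, i.e. 62.5%.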

Dell Optiplex desktops are 90% Intel-based, and there are no Nvidia discrete options whatsoever across the whole line. The only options when upgrading from the IGP are AMD cards.

Users of Dell Optiplex rely almost exclusively on Intel integrated graphics. How many Optiplex systems do you think are actually sold with an Intel CPU + AMD GPU? Probably not very many. And FWIW, Dell's XPS series is loaded with Intel CPUs + NVIDIA GPUs.
 
Look, this isn't rocket science. For the nth time, what I said is that for pre-built systems that use Intel CPUs AND discrete graphics, NVIDIA GPUs are used the majority of the time (>90%). Even if that percentage is overestimated (I think the 90% number I mentioned earlier is valid for Intel-powered laptops with discrete graphics, but may not be valid for desktops), it is irrelevant because the actual percentage is so overwhelmingly high. What you quoted above is add-in-board market share with zero regard to CPU. The actual percentage of pre-built Intel CPU systems with NVIDIA discrete graphics inside is way higher than the 60+% seen above, because NVIDIA's market share in pre-built systems is disproportionately skewed towards Intel CPUs. This is not a difficult concept to grasp: in pre-built systems that use Intel CPUs + discrete graphics, NVIDIA is by far the overwhelming favorite in this day and age.

And the numbers and links proving these facts are where?
 
Look, this isn't rocket science. For the nth time, what I said is that for pre-built systems that use Intel CPUs AND discrete graphics, NVIDIA GPUs are used the majority of the time (>90%). (...) This is not a difficult concept to grasp: in pre-built systems that use Intel CPUs + discrete graphics, NVIDIA is by far the overwhelming favorite in this day and age.

What everybody is saying is: at most you are saying that NVIDIA "should" or "could" be, not that it "is", "by far the overwhelming favorite in this day and age".
 
I prefer using an AMD CPU + NVIDIA graphics card (now, if only AMD had something worth upgrading to from an Athlon II, LOL!), and lots of people are building PCs with Ivy Bridge or Haswell plus something from the Radeon R9 series.

Why so serious about your made-up idea? And who gives a shit. It's not like OEMs or consumers care anyway, nor is AMD in a position to tell the OEMs "you'd better use our CPUs if you want to use our GPUs, or else.."
 
It is totally logical that most system builders today that offer Intel CPUs would want to pair them with NVIDIA GPUs when offering discrete graphics.
I don't see the logic in this at all. The only reason for this to be true would be if Intel and Nvidia gave better pricing for choosing this combination, and I doubt that happens.
 
Intel CPUs with Intel iGPUs are a total irrelevance. They aren't good enough to be considered part of the gamer market. Intel doesn't have 60%+ of the graphics market because of graphics; it's because their graphics are tied to their CPUs.
You can mess around with the definition of "gamer market", but it's clear from the Steam and Unity hardware surveys that lots of folks are gaming on Intel GPUs. And you can't draw any sort of sensible line that includes AMD APUs (and, TBH, a pile of low-end discrete too) but not Intel ones.

Frankly, your whole "true gaming" line is just an arbitrary line so you can cherry-pick and try to blur reality. Hell, I don't use anything slower than a GTX 680 in any of my PCs currently, so why not just draw the line there, since that means I can probably just reject everything the rest of you guys say, right? How many have Titans or 780s? :p

I don't really see the point in this continued bickering. It's clear that a chunk of games will use Mantle even if only DICE adopts it, and it doesn't really matter who else does, TBH, since Frostbite alone covers a good minimum bar. It remains to be seen how much Mantle itself gets pushed vs. being used as leverage to affect the portable APIs, but most of the rest of that conversation really needs to be put off a few weeks until we get some more information on how GCN-specific it actually is.

In the meantime I really would love to see more discussion on whether or not devs really think draw call overhead is that big a deal with bindless, etc. on the horizon. That's a conversation we can have today. Personally I think we should keep pushing in both directions (lower overhead *and* fancier submission), but somewhere in the marketing, the question of whether 9x as many draw calls is really fundamentally necessary to produce a certain image, or is more just there to ease porting because the consoles can do it, has gotten glossed over.
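To make the "lower overhead vs. fancier submission" tradeoff concrete, here's a rough sketch in plain OpenGL 4.3 of the difference between issuing one draw call per object and batching the same objects into a single multi-draw-indirect submission. It's purely illustrative: the Object struct and bind_material() helper are hypothetical, and the usual setup (context, loader, VAO, program, buffers) is assumed to happen elsewhere.

Code:
// Illustrative only: per-object draw calls vs. one batched indirect submission.
// Assumes a GL 4.3 context with VAO/program/index buffer already bound, and GL
// entry points loaded via your usual loader.
#include <GL/glcorearb.h>

typedef struct {              // layout fixed by GL_ARB_multi_draw_indirect
    GLuint count;             // index count for this sub-draw
    GLuint instanceCount;
    GLuint firstIndex;
    GLuint baseVertex;
    GLuint baseInstance;      // handy for indexing per-object data in shaders
} DrawElementsIndirectCommand;

typedef struct {              // hypothetical per-object record on the CPU side
    GLsizei indexCount;
    GLuint  firstIndex;
    GLint   baseVertex;
    int     material;
} Object;

void bind_material(int material);   // hypothetical: binds textures/uniforms per object

// (a) High CPU overhead: one API call (plus state churn) per object.
void draw_per_object(const Object* objs, int n) {
    for (int i = 0; i < n; ++i) {
        bind_material(objs[i].material);
        glDrawElementsBaseVertex(GL_TRIANGLES, objs[i].indexCount, GL_UNSIGNED_INT,
                                 (void*)(objs[i].firstIndex * sizeof(GLuint)),
                                 objs[i].baseVertex);
    }
}

// (b) "Fancier submission": the same objects in one call; per-object parameters
// come from buffers (bindless texture handles, SSBOs) indexed via baseInstance.
void draw_batched(GLuint indirectBuf, int n) {
    // indirectBuf holds n DrawElementsIndirectCommand structs, pre-filled elsewhere.
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuf);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, n, 0);
}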

As an aside - and I'm pained to have to be the one to say this - we as enthusiasts really need to stop arguing about different IHVs on PCs. At this point it's not AMD vs NVIDIA vs Intel, or even PC vs console... the continued existence of high-end gaming is really PC/consoles vs. mobile. The latter is just too big to be ignored by anyone, even the shops that really want to just put out AAA console games or high-end hardware. We're going to have to band together here, folks, and promote the continued use of high-end platforms (i.e. anything above a tablet) together, in any of their forms. There is a real threat this time...
 
As an aside - and I'm pained to have to be the one to say this - we as enthusiasts really need to stop arguing about different IHVs on PCs. At this point it's not AMD vs NVIDIA vs Intel, or even PC vs console... the continued existence of high-end gaming is really PC/consoles vs. mobile. The latter is just too big to be ignored by anyone, even the shops that really want to just put out AAA console games or high-end hardware. We're going to have to band together here, folks, and promote the continued use of high-end platforms (i.e. anything above a tablet) together, in any of their forms. There is a real threat this time...

Well said.
 
A fallacy? It is totally logical that most system builders today that offer Intel CPUs would want to pair them with NVIDIA GPUs when offering discrete graphics. On the other hand, it is also totally logical that most system builders today that offer AMD CPUs would want to pair them with AMD GPUs when offering discrete graphics.

Saying that something is obviously logical doesn't make it so. Especially when it's absurd.
 
Enough about market shares, pre-built systems and all that stuff...

A much better subject is the performance increase that Mantle will bring. It's a shame that nobody wants to commit to a number, except for a few vague AMD mentions that it won't be a single-digit percentage.
 
Enough about market shares, pre-built systems and all that stuff...

A much better subject is the performance increase that Mantle will bring. It's a shame that nobody wants to commit to a number, except for a few vague AMD mentions that it won't be a single-digit percentage.

Apparently we still have to wait about 3 weeks for that. Certainly looking forward to it, though.
 
Enough about market shares, pre-built systems and all that stuff...

A much better subject is the performance increase that Mantle will bring. It's a shame that nobody wants to commit to a number, except for a few vague AMD mentions that it won't be a single-digit percentage.

There's a more in-depth presentation coming in November; there will be a conference with discussions and presentations about it, developers will be there, and so on. I don't see why AMD should throw numbers into the wild before that conference.

AMD's developers, game developers and others surely have plenty of contact and unofficial discussions going on these days about Mantle, its performance and other aspects of it.
 
They don't have to. Not for the Mantle version of BF4, at least.
They could just implement an "Epic Battle" mode that shows an enormous number of NPCs/vehicles/debris and plays smoothly using Mantle + a dirt-cheap APU, or requires a $1000 overclocked CPU in a $300 motherboard to get the same results on a non-Mantle system.

Yes, in that fantasy scenario you are correct. Fantasy it is, and fantasy it will remain. :p
 
In the meantime I really would love to see more discussion on whether or not devs really think draw call overhead is that big a deal with bindless, etc. on the horizon. That's a conversation we can have today. Personally I think we should keep pushing in both directions (lower overhead *and* fancier submission), but somewhere in the marketing, the question of whether 9x as many draw calls is really fundamentally necessary to produce a certain image, or is more just there to ease porting because the consoles can do it, has gotten glossed over.
You can do practically any image with a single draw call, but that creates overhead in other areas. ;)
I'd frame the question(s) somewhat wider, since a low-level API isn't just about draw calls:
How far away are developers from achieving their vision at the best performance possible with OGL/D3D, and why?
How much closer can developers get with Mantle?
 
I don't believe the Batman MSAA situation is comparable to Mantle.

Yes, I know, but he was fishing for confirmation that Nvidia is the devil, and that AMD will be too in this case. I definitely don't see that; there is a single party responsible for a game, and that's the ISV. It might be the lead or the boss or whoever, but the decision that led to the situation was theirs.
And that's the same with Mantle. The ISV makes the decision.

You can do practically any image with a single draw call, but that creates overhead in other areas. ;)
I'd frame the question(s) somewhat wider, since a low-level API isn't just about draw calls:
How far away are developers from achieving their vision at the best performance possible with OGL/D3D, and why?
How much closer can developers get with Mantle?

My hope is that it would be possible to compile the whole scenegraph traversal into GPU-native form. You have to prepare your data structures for it, but in the end you indeed might have a single draw call. All the data needed to render the scenegraph for a specific frame is always there in graphics memory. Between frames you can do a little bit of resource management, swap out a bit of data here and there because of LOD, occlusion or animation, and then just let it run.
I don't believe you can currently put the/any GPU into a self-running loop, but if your CPU essentially only acts as a clock, it could be very efficient already. It's an interesting question: would it be more efficient to implement updates of GPU-resident resources as delayed transactions which become live when the self-running GPU hits the present call (this can be a simple r/w lock, albeit over PCIe)? In that case the GPU would truly be one participant in a multi-processor setup, instead of just a co-processor which does nothing until it's told to.
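Something like that "delayed transaction" hand-off can already be approximated today with persistently mapped buffers (GL 4.4 / ARB_buffer_storage): the scene data stays resident in a GPU-visible buffer, the CPU patches only what changed for the next frame, and a fence marks when the GPU is done reading the previous state. A minimal sketch under those assumptions; patch_changed_objects(), submit_frame() and the buffer names are hypothetical placeholders, and real code would double-buffer the dirty regions rather than waiting on a single fence.

Code:
// Sketch: keep scene data resident on the GPU and patch it between frames,
// rather than re-uploading or re-submitting it from scratch every frame.
// Requires GL 4.4 (ARB_buffer_storage); entry points loaded via your GL loader.
#include <GL/glcorearb.h>
#include <stdint.h>

void patch_changed_objects(void* scene);   // hypothetical: writes only the dirty ranges
void submit_frame(void);                   // hypothetical: the actual (indirect) draw submission

static GLuint sceneBuf;
static void*  scene;                       // persistently mapped CPU view of the buffer
static GLsync frameFence;

void create_scene_buffer(GLsizeiptr sceneSize) {
    GLbitfield flags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT;
    glGenBuffers(1, &sceneBuf);
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, sceneBuf);
    glBufferStorage(GL_SHADER_STORAGE_BUFFER, sceneSize, NULL, flags);   // immutable, mappable storage
    scene = glMapBufferRange(GL_SHADER_STORAGE_BUFFER, 0, sceneSize, flags);
}

void frame(void) {
    // Wait until the GPU has finished reading the previous state before touching it.
    if (frameFence) {
        glClientWaitSync(frameFence, GL_SYNC_FLUSH_COMMANDS_BIT, UINT64_MAX);
        glDeleteSync(frameFence);
    }
    patch_changed_objects(scene);   // LOD swaps, animation data, a handful of dirty objects...
    submit_frame();
    frameFence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}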
 
Sounds like you want two independent processors instead of the current master/slave setup we have today.
(They could still sync, of course, for data exchange and such.)
 
My hope is that it would be possible to compile the whole scenegraph traversal into GPU-native form.
It's mostly possible with draw indirect, etc., but you still don't want to do anything too much like a tree traversal on a GPU. Turns out they suck at scalar stuff. :) But you can definitely do everything from "render-list onwards" or so on the GPU in GL with bindless and indirect drawing, and reasonably efficiently too.
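For what it's worth, here's a minimal sketch of the "render-list onwards on the GPU" idea in GL 4.3: a compute shader does the frustum culling and writes the multi-draw-indirect commands itself, so the CPU just dispatches the cull and issues one draw. Everything is illustrative; cullProgram, drawCmdBuf, the buffer bindings and the frustum/objectCount uniforms are assumed to be set up elsewhere.

Code:
// GPU-driven submission sketch (GL 4.3): a compute pass writes the draw commands.
#include <GL/glcorearb.h>

static const char* cullCS =
    "#version 430\n"
    "layout(local_size_x = 64) in;\n"
    "struct DrawCmd { uint count, instanceCount, firstIndex, baseVertex, baseInstance; };\n"
    "layout(std430, binding = 0) readonly  buffer Bounds   { vec4  sphere[]; };\n"   // xyz = center, w = radius
    "layout(std430, binding = 1) readonly  buffer MeshInfo { uvec4 mesh[];   };\n"   // x = indexCount, y = firstIndex, z = baseVertex
    "layout(std430, binding = 2) writeonly buffer Draws    { DrawCmd cmd[];  };\n"
    "uniform vec4 frustum[6];\n"
    "uniform uint objectCount;\n"
    "void main() {\n"
    "    uint i = gl_GlobalInvocationID.x;\n"
    "    if (i >= objectCount) return;\n"
    "    bool visible = true;\n"
    "    for (int p = 0; p < 6; ++p)\n"
    "        visible = visible && (dot(frustum[p].xyz, sphere[i].xyz) + frustum[p].w > -sphere[i].w);\n"
    "    cmd[i].count         = visible ? mesh[i].x : 0u;\n"   // culled objects become empty draws
    "    cmd[i].instanceCount = 1u;\n"
    "    cmd[i].firstIndex    = mesh[i].y;\n"
    "    cmd[i].baseVertex    = mesh[i].z;\n"
    "    cmd[i].baseInstance  = i;\n"                          // lets shaders fetch per-object data
    "}\n";

// One culling dispatch plus one draw call per frame; the GPU fills its own commands.
void draw_scene_gpu_driven(GLuint cullProgram, GLuint drawCmdBuf, GLuint objectCount) {
    glUseProgram(cullProgram);                        // compiled & linked from cullCS
    glDispatchCompute((objectCount + 63) / 64, 1, 1);
    glMemoryBarrier(GL_COMMAND_BARRIER_BIT);          // make command writes visible to the indirect draw
    glBindBuffer(GL_DRAW_INDIRECT_BUFFER, drawCmdBuf);
    glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT, 0, (GLsizei)objectCount, 0);
}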

I don't believe you can currently put the/any GPU into a self-running loop, but if your CPU essentially only acts as a clock, it could be very efficient already.
The GPU would need a lot more/better hardware for this sort of paradigm to work. In particular, much better pre-emption, interrupts, etc. These are things that would likely vastly undercut some of the efficiency of GPUs for high throughput. That's not to say I don't expect these features to improve over time, but I don't think the end goal of making the GPU "first class" to the OS in all the ways that the CPU is, is necessarily the way to go. We already have a processor that is really good at that scalar stuff; there's no point in reinventing that in the command streamer on the GPU (as over time I expect more and more pressure for that to become "configurable/programmable"). To put it another way, there's no point in adding another CPU-like core on the front of the GPU pipeline unless it's still fairly special-purpose.
 
...

As an aside - and I'm pained to have to be the one to say this - we as enthusiasts really need to stop arguing about different IHVs on PCs. At this point it's not AMD vs NVIDIA vs Intel, or even PC vs console... the continued existence of high-end gaming is really PC/consoles vs. mobile. The latter is just too big to be ignored by anyone, even the shops that really want to just put out AAA console games or high-end hardware. We're going to have to band together here, folks, and promote the continued use of high-end platforms (i.e. anything above a tablet) together, in any of their forms. There is a real threat this time...

Yes, the mobile cell phone market (and, to an enormously smaller extent, the tablet market) is quite large. But it's a sub-par to mediocre market all around when it comes to state-of-the-art (SOA) gaming hardware and 3D games that take advantage of such hardware. For years we heard the lament that "PC gaming is dead" because of the growing popularity of consoles; never mind the fact that the PC market continued to grow as well. And now look: consoles have literally transformed into PCs. x86 PCs at that! The PC as a platform has won, hands down. Non-x86 console gaming is what is really dead, in the PC vs console vernacular.

Mobile/tablet gaming is very, very different from PC gaming. Mobile hardware isn't capable of running most, if not all, of the games that run with ease on even a medium-powered PC. The other day Apple announced it had sold 170M iPads in three years; 170M+ PCs were sold in the last six months. That's the problem with these mobile versus desktop arguments: most of the time any real perspective gets lost in the haze.

Really, where the mobile vs. desktop argument breaks down completely is in its assumption that stationary desktop PCs, which have no need to be designed around battery life, are going the way of the dinosaur. It's a ridiculous assumption. Most people own a PC *and* a cell phone, and some even own a PC, a tablet, and a cell phone. The PC remains, by a wide margin, the best bang-for-buck proposition going; it is user-serviceable and upgradable, and it runs rings around the fastest ARM tablets. The x86 Windows PC also has a wealth of applications and games unequaled by *any* mobile platform, supports an incredibly more diverse set of hardware (not applicable to mobile, because those are all more or less sealed devices), and a PC is open-ended, meaning it has a wide range of uses, whereas mobile devices are very limited by comparison. And it's in the PC R&D environment that raw performance is pushed constantly by AMD, Intel and even nVidia (most of nVidia's business is *not* Tegra-related; at least 97% of it was PC-related last time I checked). The quest for ever-increasing performance in smaller and more efficient packages is perpetual, and R&D in the PC space doesn't have to concern itself with conserving battery life, whereas the central, driving focus of all mobile R&D is sipping power, battery life, etc. That's only logical, is it not?

What you are calling "high-end" gaming simply isn't possible on any mobile device I can think of. Mobile gaming is all strictly tic-tac-toe, Pac-Man-level gaming and so on, AFAIAC. There are a few older ports of some RPGs on mobile, but gosh, how "high level" is that on a 4"-8"-10" screen where graphical details are so tiny they often can't be seen, and where the raw performance is roughly equivalent to an SOA PC of 10-15 years ago, depending on the mobile device you look at?...;) In 1987 even my Amiga screen was 14", and we all pined for larger monitors even then. I'll put it this way: did the advent of the portable TV abolish demand for giant, non-portable living-room TVs? Of course not! Unless the world moves to a nomadic existence where no one has a "home" anymore and we all constantly roam from place to place, home computers (PCs/x86 consoles) will stay in demand as surely as large-screen televisions for the living room. As long as people have homes that stay in one place...;)

The whole mobile vs. everything story has been badly skewed by a press that sensationalizes the "new" while making no effort to put it into any kind of reasonable context. The ubiquitous PC is very much like a bell that, once rung, cannot be unrung. It's not going anywhere, and neither is AAA-level gaming on PCs and x86 consoles: mobile will proceed under its own impetus and according to its own purposes, capabilities and limitations, and those are decidedly different from those of the "high-end" PC gaming market. IMO, of course.
 