AMD Mantle API [updating]

Discussion in 'Rendering Technology and APIs' started by MarkoIt, Sep 26, 2013.

  1. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    Obviously you have the numbers proving it. Or "my ass".

    I'm sure AMD will be really concerned about Iris Pro the day that games devs start to cater for that 0.00001% of users.

    Do you ever give actual numbers backed by sources ams? Ever?
     
    #481 jimbo75, Oct 23, 2013
    Last edited by a moderator: Oct 23, 2013
  2. ams

    ams
    Regular

    Joined:
    Jul 14, 2012
    Messages:
    914
    Likes Received:
    0
    Look, this isn't rocket science. For the nth time, what I said is that for pre-built systems that use Intel CPUs AND discrete graphics, NVIDIA GPUs are used the majority of the time (>90%). Even if the percentage is overestimated (I think the 90% number I mentioned earlier is valid for Intel-powered laptops with discrete graphics, but may not be valid for desktops), it is irrelevant because the actual percentage is so overwhelmingly high. What you quoted above is add-in-board market share with zero regard to CPU. The actual percentage of pre-built Intel-CPU systems with NVIDIA discrete graphics inside is way higher than the 60+% seen above because NVIDIA's market share is disproportionately skewed towards Intel CPUs when used in pre-built systems. This is not a difficult concept to grasp: in pre-built systems that use Intel CPUs + discrete graphics, NVIDIA is by far the overwhelming favorite in this day and age.

    Let me give an example. Let's say that 75% of gaming PCs built with discrete graphics have Intel CPUs, and 25% have AMD CPUs (which is probably pretty realistic). If 80% of Intel-CPU systems have NVIDIA graphics (the remainder with AMD graphics), while 10% of AMD-CPU systems have NVIDIA graphics (the remainder with AMD graphics), then NVIDIA's overall market share would be 0.75 × 0.80 + 0.25 × 0.10 = 0.625, i.e. 62.5% (which is very close to the AIB market share listed above). So 90% is probably an overestimate, but 80% is pretty realistic.
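
    For anyone who wants to check that arithmetic, here is a quick sketch (the 75/25 and 80/10 splits are the illustrative assumptions above, not measured data):

    ```cpp
    #include <cstdio>

    int main() {
        // Illustrative assumptions from the example above, not measured data.
        const double intelShare = 0.75;  // discrete-GPU gaming PCs with Intel CPUs
        const double amdShare   = 0.25;  // discrete-GPU gaming PCs with AMD CPUs
        const double nvOnIntel  = 0.80;  // fraction of Intel-CPU systems with NVIDIA graphics
        const double nvOnAmd    = 0.10;  // fraction of AMD-CPU systems with NVIDIA graphics

        // NVIDIA's overall share is the weighted average of the two splits.
        const double nvOverall = intelShare * nvOnIntel + amdShare * nvOnAmd;
        std::printf("NVIDIA overall share: %.1f%%\n", nvOverall * 100.0);  // prints 62.5%
        return 0;
    }
    ```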

    Users of Dell Optiplex machines rely almost exclusively on Intel integrated graphics. How many Optiplexes do you think are actually sold with an Intel CPU + AMD GPU? Probably not very many. And FWIW, Dell's XPS series is loaded with Intel CPUs + NVIDIA GPUs.
     
    #482 ams, Oct 23, 2013
    Last edited by a moderator: Oct 24, 2013
  3. BRiT

    BRiT Verified (╯°□°)╯
    Moderator Legend Alpha

    Joined:
    Feb 7, 2002
    Messages:
    15,573
    Likes Received:
    14,164
    Location:
    Cleveland
    ams, stop. Just stop.
     
  4. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
    And the numbers and links proving these facts are where?
     
  5. firstminion

    Newcomer

    Joined:
    Aug 7, 2013
    Messages:
    217
    Likes Received:
    46
    What everybody is saying is: at most you can claim that NVIDIA "should" or "could" be, not that it "is", "by far the overwhelming favorite in this day and age".
     
  6. Blazkowicz

    Legend Veteran

    Joined:
    Dec 24, 2004
    Messages:
    5,607
    Likes Received:
    256
    I prefer using an AMD CPU + NVIDIA graphics card (now, if only AMD had something worth upgrading to from an Athlon II, LOL!), and lots of people are building a PC with Ivy Bridge or Haswell plus something from the Radeon R9 series.

    Why so serious about your made-up idea? And who gives a shit? It's not like OEMs or consumers care anyway, nor is AMD in a position to tell the OEMs "you'd better use our CPUs if you want to use our GPUs, or else..."
     
  7. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,439
    Likes Received:
    280
    I don't see the logic in this at all. The only reason for this to be true is if Intel and Nvidia give better pricing for choosing this combination and I doubt that happens.
     
  8. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Maybe it is. But I can't see why. Can you explain the logic to me?
     
  9. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    You can mess around with the definition of "gamer market", but it's clear from the Steam and Unity hardware surveys that lots of folks are gaming on Intel GPUs. And you can't draw any sort of sensible line that includes AMD APUs (and TBH, a pile of low-end discrete too) but not Intel ones.

    Frankly, your whole "true gaming" line is just an arbitrary line so you can cherry-pick and blur reality. Hell, I don't use anything slower than a GTX 680 in any of my PCs currently, so why not just draw the line there, since that means I can probably just reject everything the rest of you guys say, right? How many have Titans or 780s? :p

    I don't really see the point in this continued bickering. It's clear that a chunk of games will use Mantle even if only DICE adopts it, and TBH it doesn't really matter who else does, since Frostbite alone covers a good minimum bar. It remains to be seen how much Mantle itself gets pushed vs. used as leverage to affect the portable APIs, but most of the rest of that conversation really needs to be put off a few weeks until we get some more information on how GCN-specific it actually is.

    In the meantime I really would love to see more discussion on whether or not devs really think draw call overhead is that big a deal with bindless, etc. on the horizon. That's a conversation we can have today. Personally I think we should keep pushing in both directions (lower overhead *and* fancier submission), but somewhere in the marketing, the question of whether 9x as many draw calls is fundamentally necessary to produce a given image, or is more about easing ports because the consoles can do it, has gotten glossed over.
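
    To make "fancier submission" concrete, here's a minimal sketch of the GL 4.3 multi-draw-indirect path (loader/state setup omitted; not production code): per-draw parameters live in a GPU buffer, and one call replaces thousands of individual draws.

    ```cpp
    #include <GL/glcorearb.h>  // GL 4.3 core declarations; assumes a loaded context

    // Command layout mandated by GL 4.3 / ARB_multi_draw_indirect.
    struct DrawElementsIndirectCommand {
        GLuint count;          // index count for this draw
        GLuint instanceCount;  // 0 skips the draw (handy for GPU-side culling)
        GLuint firstIndex;
        GLuint baseVertex;
        GLuint baseInstance;   // can index per-draw data in the shader
    };

    void submitScene(GLuint indirectBuffer, GLsizei drawCount) {
        // All per-draw parameters already sit in GPU memory; the CPU issues
        // ONE call instead of drawCount separate glDrawElements calls.
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr,    // commands start at offset 0
                                    drawCount,  // number of packed commands
                                    0);         // tightly packed (stride = 0)
    }
    ```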

    As an aside, and I'm pained to have to be the one to say this, we as enthusiasts really need to stop arguing about different IHVs on PCs. At this point it's not AMD vs. NVIDIA vs. Intel, or even PC vs. console... the continued existence of high-end gaming is really PC/consoles vs. mobile. The latter is just too big to be ignored by anyone, even the shops that really want to just put out AAA console games or high-end hardware. We're going to have to band together here, folks, and promote the continued use of high-end platforms (i.e. anything above tablet) together, in any of its forms. There is a real threat this time...
     
    #489 Andrew Lauritzen, Oct 24, 2013
    Last edited by a moderator: Oct 24, 2013
  10. rpg.314

    Veteran

    Joined:
    Jul 21, 2008
    Messages:
    4,298
    Likes Received:
    0
    Location:
    /
    Well said.
     
  11. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,515
    Likes Received:
    934
    Saying that something is obviously logical doesn't make it so. Especially when it's absurd.
     
  12. DieH@rd

    Legend Veteran

    Joined:
    Sep 20, 2006
    Messages:
    6,226
    Likes Received:
    2,178
    Enough about marketshares, pre-built systems and all that stuff...

    A much better subject is the performance increase that Mantle will bring. It's a shame that nobody wants to commit to a number, apart from a few vague AMD mentions that it won't be single-digit percentages.
     
  13. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    18,038
    Likes Received:
    1,627
    Location:
    Maastricht, The Netherlands
    Apparently we still have to wait about 3 weeks for that. Certainly look forward to it though.
     
  14. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    There's a more in-depth presentation coming in November; there will be conferences, discussions, and presentations about it, with developers attending, etc. I don't see why AMD should throw numbers into the wild before this conference.

    AMD's developers, game developers, and others surely have plenty of contact and unofficial discussion going on today about Mantle, its performance, and other aspects that touch on it.
     
    #494 lanek, Oct 24, 2013
    Last edited by a moderator: Oct 24, 2013
  15. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,270
    Likes Received:
    1,038
    Location:
    still camping with a mauler
    Yes, in that fantasy scenario you are correct. Fantasy it is, and fantasy it will remain. :razz:
     
  16. Xmas

    Xmas Porous
    Veteran Subscriber

    Joined:
    Feb 6, 2002
    Messages:
    3,314
    Likes Received:
    140
    Location:
    On the path to wisdom
    You can do practically any image with a single draw call, but that creates overhead in other areas. ;)
    I'd frame the question(s) somewhat wider, since a low-level API isn't just about draw calls:
    How far away are developers from achieving their vision at the best performance possible with OGL/D3D, and why?
    How much closer can developers get with Mantle?
     
  17. Ethatron

    Regular Subscriber

    Joined:
    Jan 24, 2010
    Messages:
    868
    Likes Received:
    276
    Yes I know, but he was fishing for confirmation that NVIDIA is the devil, and that AMD will be too in this case. I definitely don't see that; there is a single party responsible for a game, and that's the ISV. It might be the lead or the boss or whoever, but the decision which led to the situation was theirs.
    And it's the same with Mantle. The ISV makes the decision.

    My hope is that it would be possible to compile the whole scene-graph traversal into GPU-native form. You have to prepare your data structures for it, but in the end you might indeed have a single draw call. All the data needed to render the scene graph for a specific frame is always there in graphics memory. Between frames you can do a little resource management, swap out a bit of data here and there because of LOD, occlusion or animation, and then just let it run.
    I don't believe you can currently put the/any GPU into a self-running loop, but if your CPU essentially only acts as a clock, it could already be very efficient. It's an interesting question: would it be more efficient to implement updates of GPU-resident resources as delayed transactions that become live when the self-running GPU hits the present call (this could be a simple r/w lock, albeit over PCIe)? In that case the GPU would truly be one participant in a multi-processor setup, instead of just a co-processor which does nothing until it's told.
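
    A minimal sketch of that "CPU as a clock" loop, assuming a GL 4.4-style persistently mapped buffer; FrameDelta and frameTick are hypothetical names, just to make the idea concrete:

    ```cpp
    #include <GL/glcorearb.h>
    #include <cstring>

    // Hypothetical per-frame delta from the scene-graph update
    // (LOD switches, streamed resources, animation constants, ...).
    struct FrameDelta { const void* data; size_t size; size_t offset; };

    // 'persistentPtr' maps a buffer created with glBufferStorage(...,
    // GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT | GL_MAP_COHERENT_BIT).
    void frameTick(void* persistentPtr, const FrameDelta& delta,
                   GLuint indirectBuffer, GLsizei drawCount, GLsync& fence) {
        // Wait until the GPU has consumed the previous frame's transactions.
        if (fence) {
            while (glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT,
                                    1000000) == GL_TIMEOUT_EXPIRED) { /* spin */ }
            glDeleteSync(fence);
        }

        // The CPU's only real work: one small transaction into GPU-resident data.
        std::memcpy(static_cast<char*>(persistentPtr) + delta.offset,
                    delta.data, delta.size);

        // One submission; all per-draw parameters already live in graphics memory.
        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr, drawCount, 0);

        fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        // SwapBuffers / present happens in the windowing layer after this.
    }
    ```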
     
  18. Rodéric

    Rodéric a.k.a. Ingenu
    Moderator Veteran

    Joined:
    Feb 6, 2002
    Messages:
    4,030
    Likes Received:
    894
    Location:
    Planet Earth.
    Sounds like you want two independent processors instead of the current master/slave arrangement we have today.
    (They could still sync, of course, for data exchange and such.)
     
  19. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    It's mostly possible with draw indirect, etc. but you still don't want to do anything too much like a tree traversal on a GPU. Turns out they suck at scalar stuff :) But you can definitely do everything from "render-list onwards" or so on the GPU in GL with bindless and indirect drawing, and reasonably efficiently too.
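
    As a concrete illustration of the "render-list onwards" point, here's a sketch assuming a GL 4.3 compute + indirect setup (shader source and setup omitted): a culling compute shader zeroes the instanceCount of rejected commands in the indirect buffer, so the CPU never touches the render list.

    ```cpp
    #include <GL/glcorearb.h>

    // 'cullProgram' is assumed to be a compiled compute shader that reads
    // per-object bounds and writes instanceCount = 0 or 1 into each
    // DrawElementsIndirectCommand stored in 'indirectBuffer'.
    void cullAndDraw(GLuint cullProgram, GLuint indirectBuffer, GLsizei drawCount) {
        glUseProgram(cullProgram);
        glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, indirectBuffer);
        glDispatchCompute((drawCount + 63) / 64, 1, 1);  // one thread per draw

        // Make the compute writes visible to the indirect-command fetch stage.
        glMemoryBarrier(GL_COMMAND_BARRIER_BIT);

        glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirectBuffer);
        glMultiDrawElementsIndirect(GL_TRIANGLES, GL_UNSIGNED_INT,
                                    nullptr, drawCount, 0);
    }
    ```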

    The GPU would need a lot more/better hardware for this sort of paradigm to work. In particular, much better pre-emption, interrupts, etc. These are things that would likely vastly undercut some of the efficiency of GPUs for high throughput. That's not to say I don't expect these features to improve over time, but I don't think the end goal of making the GPU "first class" to the OS in all the ways the CPU is, is necessarily the way to go. We already have a processor that is really good at that scalar stuff; there's no point in reinventing that in the command streamer on the GPU (as over time I expect more and more pressure for that to become "configurable/programmable"). To put it another way, there's no point in adding another CPU-like core on the front of the GPU pipeline unless it's still fairly special-purpose.
     
  20. WaltC

    Veteran

    Joined:
    Jul 22, 2002
    Messages:
    2,710
    Likes Received:
    8
    Location:
    BelleVue Sanatorium, Billary, NY. Patient privile
    Yes, the mobile cell-phone market (and, to a much smaller extent, the tablet market) is quite large. But it's a sub-par to mediocre market all around when it comes to state-of-the-art (SOA) gaming hardware and 3D games that take advantage of that hardware. For years we heard the lament that "PC gaming is dead" because of the growing popularity of consoles, never mind the fact that the PC market continued to grow as well. And now look: consoles have literally transformed into PCs. x86 PCs at that! The PC as platform has won, hands down. Non-x86 console gaming is what is really dead, in the PC-vs.-console vernacular.

    Mobile/tablet gaming is very, very different from PC gaming. Mobile hardware isn't capable of running most, if not all, of the games that run with ease on even a medium-powered PC. The other day Apple announced it had sold 170M iPads in three years; 170M+ PCs were sold in the last six months. That's the problem with these mobile-versus-desktop arguments: most of the time any real perspective gets lost in the haze.

    Really, where the mobile vs. desktop argument breaks down completely is in its assumption that static desktop PCs, which have no need to design around the paradigm of battery life, are going the way of the dinosaur. It's a ridiculous assumption. Most people own a PC *and* a cell phone, and some even own a PC, a tablet, and a cell phone. The PC remains, by a wide margin, the best bang-for-buck proposition going; it is user-serviceable and upgradable, and it runs rings around the fastest ARM tablets. The x86 Windows PC also has a wealth of applications and games unequaled by *any* mobile platform, supports a vastly more diverse set of hardware (this is N/A for mobile because they are all sealed devices, more or less), and a PC is open-ended, meaning it has a wide range of uses, whereas mobile devices are very limited by comparison. And it's in the PC R&D environment that raw performance is pushed constantly by AMD, Intel, and even NVIDIA (most of NVIDIA's business is *not* Tegra-related; at least 97% of it is PC-related, last time I checked). The quest for ever-increasing performance in smaller and more efficient packages is perpetual, and R&D in the PC space doesn't have to concern itself with conserving battery life, whereas the central, driving focus of all mobile R&D is sipping power, battery life, etc. That's only logical, is it not?

    What you are calling "high-end" gaming simply isn't possible on any mobile device I can think of. Mobile gaming is all strictly tic-tac-toe, Pac-Man-level gaming and so on, AFAIAC. There are a few older ports of some RPGs on mobile, but gosh, how "high level" is that on a 4"-8"-10" screen where graphical details are so tiny they often can't be seen, and where raw performance is roughly equivalent to that of a SOA PC from 10-15 years ago, depending on the mobile device you look at?...;) In 1987 even my Amiga screen was 14", and we all pined for larger monitors even then. I'll put it this way: did the advent of the portable TV abolish demand for giant, non-portable living-room TVs? Of course not! Unless the world moves to a nomadic existence where no one has a "home" anymore and we all constantly move around like packs of roaming gypsies, home computers (PCs/x86 consoles) will stay in demand as surely as large-screen televisions for the living room. As long as people have homes that stay in one place...;)

    The whole mobile-vs.-everything story has been badly skewed by a press that sensationalizes the "new" while making no effort to put it into any kind of reasonable context. The ubiquitous PC is very much like a bell that, once rung, cannot be unrung. It's not going anywhere, and neither is AAA-level gaming on PCs and x86 consoles: mobile will proceed under its own impetus and according to its own purposes, capabilities, and limitations, and those are decidedly different from those of the "high-end" PC gaming market. IMO, of course.
     