AMD GPU14 Tech Day Event - Sept 25th

Discussion in 'Architecture and Products' started by Dave Baumann, Sep 20, 2013.

  1. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,929
    Likes Received:
    5,528
    Location:
    Pennsylvania
    Isn't that stating that at 17,000 draw calls it's 41 fps?
     
  2. gkar1

    Regular

    Joined:
    Jul 20, 2002
    Messages:
    614
    Likes Received:
    7
    Caused by the overhead which mantle would eliminate?
     
  3. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    I think it'll be a while before a studio making a PC-only game with their own engine considers Mantle, precisely because of this.

    Which games out there fall precisely into this bracket? One of my favourites does: ArmA. If ever there was an engine that needed a complete re-think then that's it. Not going to happen, either.

    What other performance-sensitive games fall into this bracket?

    Which is any technically high-end "console-first" game.

    It's depressing what happened with one notable console-first game, Rage.
     
  4. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,708
    Likes Received:
    2,132
    Location:
    London
    I think this is a question of rendering-engine architecture. repi says that multi-threading of draw calls in D3D has been a total failure - performance hasn't improved. That seems to imply that to get past this problem requires a deeper change.

    On the consoles, that deeper change is about symbiosis: access to GPU memory, command buffers and GPU state is truly fine-grained, enabling fine-grained draw call usage.

    Constant buffers (from D3D10) are a nice example of a more fine-grained approach to rendering. Changing the constants supplied to a shader became minimal cost. As a side effect it took away draw calls, too.

    The headline might read "fewer draw calls", but what was actually required was a change in the engine to use a simple feature relating to constants.

    In other words, getting to "more efficient and more draw calls" requires fairly deep changes.
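    As a toy illustration of why constant buffers mattered: a sketch of a per-frame CPU-cost model. All the microsecond costs and the function below are invented placeholders chosen to show the shape of the win, not measurements from any real driver.

```python
# Toy CPU-cost model for submitting one frame of draw calls.
# All costs are invented placeholders, not taken from any real driver.
PER_DRAW_US = 20           # assumed fixed CPU cost of one draw call
PER_CONSTANT_SET_US = 15   # assumed cost of one legacy per-constant set call
PER_CBUFFER_UPDATE_US = 5  # assumed cost of one whole constant-buffer update

def frame_cost_us(draws, constants_per_draw, use_cbuffers):
    """CPU microseconds spent submitting one frame."""
    if use_cbuffers:
        # One cheap buffer update replaces many per-constant calls.
        per_draw = PER_DRAW_US + PER_CBUFFER_UPDATE_US
    else:
        per_draw = PER_DRAW_US + constants_per_draw * PER_CONSTANT_SET_US
    return draws * per_draw

legacy = frame_cost_us(2_000, 4, use_cbuffers=False)   # 160,000 us
batched = frame_cost_us(2_000, 4, use_cbuffers=True)   #  50,000 us
print(legacy, batched)
```

    Under these made-up numbers the per-draw submission cost drops from 80 us to 25 us, which is the kind of "engine change around a simple constants feature" described above: the gain comes from restructuring how state reaches the GPU, not from any single headline number.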
     
  5. Malo

    Malo Yak Mechanicum
    Legend Subscriber

    Joined:
    Feb 9, 2002
    Messages:
    8,929
    Likes Received:
    5,528
    Location:
    Pennsylvania
    Well, I doubt it's black and white, and I have no idea from a developer's point of view whether they would prefer to use 20,000 draw calls instead of 2,000. But I'm not sure DICE and AMD would claim up to 9x the draw calls for the same performance if they didn't actually have test cases showing that.
     
  6. gkar1

    Regular

    Joined:
    Jul 20, 2002
    Messages:
    614
    Likes Received:
    7
    November can't come soon enough
     
  7. Dooby

    Regular

    Joined:
    Jul 21, 2003
    Messages:
    478
    Likes Received:
    3
    So, using those numbers, we would expect it to be able to do ~18,000 @ ~100 fps, rather than 17,000 @ 41 fps, in a supposed best-case scenario?
     
  8. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,879
    Likes Received:
    5,330
    How did you arrive at that conclusion ?
     
  9. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,462
    Location:
    Finland
    Shouldn't it be ~36k draw calls @ ~100 FPS if it enables you to get 9 times more draw calls per second, and you could now do 4k draw calls @ 101 FPS?
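    For what it's worth, the arithmetic works out that way, using the figures quoted upthread (4,000 draw calls at 101 FPS, and a claimed "up to 9x" draw-call throughput):

```python
# Draw-call throughput arithmetic from the figures quoted in this thread.
baseline_calls_per_frame = 4_000
baseline_fps = 101

calls_per_second = baseline_calls_per_frame * baseline_fps   # 404,000/s
boosted_calls_per_second = calls_per_second * 9              # 3,636,000/s

# Spent at ~100 FPS, that budget is roughly 36k calls per frame.
calls_per_frame_at_100fps = boosted_calls_per_second / 100
print(round(calls_per_frame_at_100fps))  # 36360
```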
     
  10. Priyadarshi

    Newcomer

    Joined:
    Sep 22, 2012
    Messages:
    57
    Likes Received:
    0
    Location:
    USA
    Interesting! So current drivers are not as 'parallelized' internally as you would expect them to be. It seems like a big task to completely change the driver architecture for modern multi-core CPUs. I also read the 'Batch Batch Batch' presentation your slide mentions, which shows that the CPU can't keep up with the GPU if you keep increasing the number of batches (or draw calls).

    Mantle definitely seems like a positive step towards removing that bottleneck, then. It reminds me of the 'Close to Metal' GPGPU API from AMD not so long ago. :smile:
     
  11. DieH@rd

    Legend

    Joined:
    Sep 20, 2006
    Messages:
    6,387
    Likes Received:
    2,411
    A Gaffer translated a very interesting Dutch interview with an AMD rep:
    http://www.neogaf.com/forum/showpost.php?p=83960129&postcount=471

    :)
     
    #311 DieH@rd, Sep 28, 2013
    Last edited by a moderator: Sep 29, 2013
  12. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,235
    Likes Received:
    4,259
    Location:
    Guess...
    I can't wait for those benchmarks. We'll finally be able to see how much performance PCs are leaving on the table thanks to DX. Really happy with AMD for this.
     
  13. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    I'm looking out for details on what they will do (or not do) about maintaining Mantle backwards compatibility.
    The abstractions kept a lot of design evolution from becoming too disruptive for software development, and that was one less factor holding back experimentation in GPU hardware design.

    For the sake of a thought experiment, what would have happened if the need to comply with the low-level details of earlier architectures had arrived sooner?
    What if the desire for ease of development across platforms had led to Mantle being introduced in the Xenos and R600 time frame?
     
  14. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    17,879
    Likes Received:
    5,330
    I thought shaders would still be programmed in HLSL? That's what I've read.
     
  15. ninelven

    Veteran

    Joined:
    Dec 27, 2002
    Messages:
    1,742
    Likes Received:
    152
    I doubt it would/will have a large impact either way. G80 was 7 years ago. I would guess AMD feels good enough about GCN (especially considering the console wins) to be content to iterate on it for several years.
     
  16. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,579
    Likes Received:
    4,799
    Location:
    Well within 3d
    Depending on how low-level Mantle is, GCN would have needed to maintain design continuity with a Vec4+1 architecture, or a VLIW5 with a memory system incompatible with x86.
    At least I didn't ask what would have happened if they started with R200 or R300.

    As nice as GCN is, there are still things that need improvement, and if enough low-level things become part of Mantle, future changes become defined by all the old things they don't conflict with.
    I'm hoping Mantle isn't so low level that it drops out of DirectX API abstractions and shoots past the level of HSA's virtual ISA, unless there is a clear demarcation between a virtual GPU and portions that are clearly implementation-specific.

    A lot of lower-level things can be done without getting caught up in specifics that hopefully don't persist in the next GCN installments.

    And finally, would anyone be interested in a "content" AMD? They tend to undershoot even when not treading water.
     
  17. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    9,235
    Likes Received:
    4,259
    Location:
    Guess...
    Yeah, this is my main concern about this. As great as I think it is, if it locks AMD into GCN for the next 8 years then that's very bad. NV can probably afford to stick with DX in that case and just rely on architectural advancements to drive its performance forward.

    There's a clear balance that needs to be struck, but Mantle makes that one hell of a lot more interesting.
     
  18. manux

    Veteran

    Joined:
    Sep 7, 2002
    Messages:
    3,034
    Likes Received:
    2,276
    Location:
    Self Imposed Exhile
    It's not like AMD doesn't know what kind of architectures they are going to ship for the next 3-5 years; they can design the Mantle API accordingly. It's unlikely they would be stupid enough not to think about tomorrow.
     
  19. entity279

    Veteran Subscriber

    Joined:
    May 12, 2008
    Messages:
    1,332
    Likes Received:
    500
    Location:
    Romania
    So it's not as easy as building Mantle on top of some nicely chosen or purpose-built intermediate ISA, is it? I have no clue about graphics ISAs, but is their abstraction power somewhat bounded by the data flow inside the GPU, or maybe by other factors?
     
  20. lanek

    Veteran

    Joined:
    Mar 7, 2012
    Messages:
    2,469
    Likes Received:
    315
    Location:
    Switzerland
    I think like you, and I don't think it's much of a problem for AMD to add a new architecture to Mantle and evolve it, while maintaining coherency and easy porting across platforms between old and new architectures. It's not as if the road AMD (and Nvidia) have taken with GPUs (and GPGPU) will change completely tomorrow. Of course, performance may not be the same, and old GCN GPUs will not benefit from all the advances of a new architecture plus Mantle (2, 3, etc.), but isn't that already the case with the current APIs and desktop GPUs?

    The development cycle for a GPU is long, and the next architecture is in the works long before it is released to the public (4-5 years).

    Won't it be easier to implement new features, or even to evolve the GCN1-based consoles, when you have the advantage of low-level API access? And Sony and MS would have the option of upgrading the APU/GPU in three years and releasing a PS4 v2 and an XB One v2 with a better GPU; they have already done that in the past. (Not to mention that with process advances they could get real gains in TDP and performance.)

    Mantle could evolve to include next-architecture features. Old architectures would not benefit from them, but it would still help performance when you port a game from a console GCN 1.0 to a PC GCN "3.0".

    We're talking about Mantle, but even looking at Nvidia's next architectures, Maxwell and Volta, with the possibility that they include ARM-based processors in their GPUs for some tasks, Nvidia will also need something offering rather low-level hardware access if they want programmers to take advantage of it, depending on what road they take.
     
    #320 lanek, Sep 29, 2013
    Last edited by a moderator: Sep 29, 2013