AMD Mantle API [updating]

Discussion in 'Rendering Technology and APIs' started by MarkoIt, Sep 26, 2013.

  1. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
  2. Arwin

    Arwin Now Officially a Top 10 Poster
    Moderator Legend

    Joined:
    May 17, 2006
    Messages:
    18,095
    Likes Received:
    1,698
    Location:
    Maastricht, The Netherlands
    The star swarm benchmarks seem much more interesting, because those guys are taking advantage of the benefits much more efficiently, by not having a core engine that is designed around the limitations of DirectX in the first place. I suspect it would require a full next iteration of Frostbite before those games start to make the most of Mantle.
     
  3. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,112
    Likes Received:
    2,579
    Location:
    Germany
    Not only that, but they haven't yet done the optimizations any sensible developer would do before shipping a game (i.e. guaranteeing basic playability, using a form of anti-aliasing and/or motion blur technique that does not increase the already extreme batch count five-fold).

    It's a tech demo, designed for a single purpose, nothing more (yet), and thus hardly comparable to BF4, which is a shipping game.
     
  4. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,482
    Likes Received:
    649
    Location:
    Finland
    Supersampling should work through an .INI tweak (haven't had time to check it, though).
    Considering that they do not perform shading in screen space, it could be a decent alternative without MSAA.
     
    #1204 jlippo, Feb 5, 2014
    Last edited by a moderator: Feb 5, 2014
  5. CarstenS

    Legend Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    5,112
    Likes Received:
    2,579
    Location:
    Germany
    It works and indeed looks good; I tried it yesterday. You can set Rendering Resolution and Display Resolution independently.
    But for apparent reasons they decided instead to enable motion blur (temporal AA) in this batch-intensive way for the tech showcase that is Star Swarm.
     
  6. snc

    snc
    Newcomer

    Joined:
    Mar 6, 2013
    Messages:
    213
    Likes Received:
    113
    That's what happens when you test in a CPU-demanding scene/game (drivers are responsible for this; Nvidia's DX drivers utilize many cores better, but suck on 2-core CPUs).
     
  7. gkar1

    Regular

    Joined:
    Jul 20, 2002
    Messages:
    614
    Likes Received:
    7
    Sigh, more useless single player numbers. Once again the European (especially the German) hardware sites put all others to shame with the quality of their reviews and testing.
     
  8. gongo

    Regular

    Joined:
    Jan 26, 2008
    Messages:
    602
    Likes Received:
    22
    About the Star Swarm demo... someone posted their GTX 780 scores here without motion blur, and it was about 80 fps. Well, my 290X without motion blur gets about 65 fps in DX... and 90 fps with Mantle. What does that say? That they did not optimize the DX path on AMD as much, and focused on Mantle?
     
  9. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,402
    Likes Received:
    4,111
    Location:
    Well within 3d
    I see a slide that says it is forward compatible, and that it can work for today's modern GPUs.
    That isn't the direction needed or characterization used to describe an AMD VLIW GPU.

    I've already pointed out that a flexible design can be programmed to work with Mantle. The more capable it is of being configured and controlled by software, the more readily it can do so.
    If the necessary features are not exposable or flexible enough in the base architecture, it becomes a performance drag or a liability.


    Possibly that a GTX 780 on some kind of Mantle API would be even faster.
    Mantle removes a set of specific stumbling blocks, but it doesn't hide architectural differences.
    The 780 is a very good graphics architecture, and I think it's disingenuous to blame the API for everything in cross-architectural comparisons.

    At this point, the areas for improvement that a beta version of the API provides are known.
    In my opinion, it goes to show that for all its inefficiency, the full real-world impact of the APIs an industry has spent countless hours and millions to billions of dollars to use effectively is overblown.
    Yes, drivers and APIs can fall down, but the people working on them haven't been so universally incompetent that they don't get the majority of peak performance most of the time.

    On a competent gaming platform, the one trick AMD bangs the drum over is overblown. The API and driver are a heavy burden on a CPU, but not enough to be decisive if you have a good one.
    The black magic of driver development and the black-box model is difficult to wrangle and can have negative effects, but not so much with a good driver, or with what is considered good practice in software development.

    I'm eager to see the ideas espoused by Mantle as a more universal initiative. Things like more consistent performance, options for more flexible feature utilization, and potentially a more regular method for CPU and GPU interaction are good.
    Hoping for BF4-level quality in a graphics layer, for just a subset of AMD's market share, with purposefully sub-optimal engines like Oxide's as the bright future? Eh.
     
  10. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    11,280
    Likes Received:
    5,897
    bit-tech has been especially bad for the last couple of years, and yes, that article is useless through and through.
    I'm still holding out for Anand's real article on Mantle, though.
     
  11. Ryan Smith

    Regular

    Joined:
    Mar 26, 2010
    Messages:
    629
    Likes Received:
    1,131
    Location:
    PCIe x16_1
    It's going to be SP numbers, so it's best to set your expectations accordingly.
     
  12. DSC

    DSC
    Banned

    Joined:
    Jul 12, 2003
    Messages:
    689
    Likes Received:
    3
    That slide is from DICE's Johan Andersson (repi); it is NOT from AMD. repi is wrong about it not being tied to GCN: if it weren't, it would work on HD5000/6000.

    Why would Nvidia and Intel adopt a dead-end, closed, proprietary API from AMD which doesn't work on HD5000/6000, doesn't even work properly on GCN 1.0 hardware at this point in time, and would disadvantage them greatly?

    Even worse, DICE and AMD have been crippling performance on Nvidia hardware. Both are shameless companies.

     
  13. 3dilettante

    Legend Alpha

    Joined:
    Sep 15, 2003
    Messages:
    8,402
    Likes Received:
    4,111
    Location:
    Well within 3d
    The optimal configuration for compute shaders could very well be different between the architectures, and GCN does put a very heavy emphasis on compute.
    Getting something satisfactory between vendors doesn't need to be simultaneous, and I don't think Nvidia was really suffering performance-wise.
    I'm not jumping to malicious intent as the first explanation.
     
  14. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,293
    Location:
    Helsinki, Finland
    Kepler and the 290X are able to draw more than one million separate (animating) objects (each using a different mesh) per frame at 60 fps using bog-standard DirectX. What state changes do you actually need between your draw calls when you are rendering to g-buffers and you use virtual texturing and virtual geometry?

    For the huge majority of the draw calls you don't need any state changes. Of course at some point in the frame you need to enable alpha blending (and switch between the standard, clip and skinning shaders twice per frame), but what other state changes do you actually need?
     
  15. SimBy

    Regular Newcomer

    Joined:
    Jun 21, 2008
    Messages:
    563
    Likes Received:
    201
    :lol: not even worth commenting.
     
  16. ToTTenTranz

    Legend Veteran Subscriber

    Joined:
    Jul 7, 2008
    Messages:
    11,280
    Likes Received:
    5,897
    I'm okay with it, as long as you people test it with a sufficient number of CPUs across several price ranges (not just one top-end Intel + one lower-mid-range AMD) and properly report frame latencies.
     
  17. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    No one outside of DICE has a reliable way of measuring multiplayer reproducibly. I have yet to see any test methodologies or numbers that I would trust even with a pretty large error bar. People are either ignorant or have an agenda if they claim they can reliably benchmark BF4 multiplayer.

    Pretty sure he was speaking more to cross-IHV portability than backwards compatibility there, but he can feel free to clarify.

    Yeah, single player benchmarks are not ideal, but they're less useless than any claim of repeatable multiplayer ones outside of DICE's own tests.

    There are clearly things in Mantle completely unrelated to VLIW (I don't get why folks are so fixated on that with the assumption that nothing else changes between GPU architectures) that can't be supported pre-GCN. Wasn't there even confirmed stuff earlier in this thread? Read back guys.
     
  18. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    I'm obviously on the same page here (see the other thread :)), but whether or not it's useful in your engine, it's still worth noting how much performance is left on the table for folks who want to write code in a more "traditional" CPU-fed way. Whether or not that remains common in the future is up for debate, but "free performance" is never bad really, nor is pursuing both potentially useful methods of rendering in the future.
     
  19. gkar1

    Regular

    Joined:
    Jul 20, 2002
    Messages:
    614
    Likes Received:
    7
    Why get hung up on that? Other sites went and played the game for a good number of minutes and showed their findings. I thought we were all trying to get away from pre-canned repeatable benchmarks which are known to be optimized by IHVs. What I want to see is a 30 minute session in a 40+ player populated map focusing on frametimes. Is that unreasonable?
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.