NVIDIA GameWorks, good or bad?

Discussion in 'Graphics and Semiconductor Industry' started by Kaotik, Dec 31, 2013.

  1. NThibieroz

    Newcomer

    Joined:
    Jun 8, 2013
    Messages:
    31
    Likes Received:
    8
    This comment about developers' wariness of using the Per-Pixel Linked List algorithm is unfounded. While TressFX does have theoretically unbounded memory requirements, in practice they can be controlled by ensuring the initial allocation is large enough for the minimum camera distance, among other tricks. On modern GPUs (or even consoles) with large amounts of memory, allocating a decent amount (e.g. a couple of hundred MB) is a reasonable investment for those devs who want the highest hair effect quality in their game. Nonetheless, AMD also presented two alternatives for memory-constrained situations: tiled mode and mutex (although those tend to have a performance impact compared to the default solution).
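    To make the allocation trade-off concrete, here is a minimal CPU-side Python sketch of a per-pixel linked list (PPLL) fragment store with a fixed, preallocated node budget. This is only an illustration of the scheme being discussed, not TressFX's actual implementation (which runs on the GPU, bumps the node counter atomically, and keeps the head pointers in a UAV); all names and the overflow policy here are hypothetical.

    ```python
    class FragmentStore:
        """Toy per-pixel linked list with a fixed preallocated node budget."""

        def __init__(self, width, height, avg_overdraw=8):
            self.heads = [-1] * (width * height)   # per-pixel head indices
            self.max_nodes = width * height * avg_overdraw
            self.nodes = []                        # (depth, color, next_index)
            self.width = width

        def insert(self, x, y, depth, color):
            """Push a fragment onto pixel (x, y)'s list; drop it when the
            preallocated budget is exhausted (the 'unbounded memory' case)."""
            if len(self.nodes) >= self.max_nodes:
                return False                       # overflow: fragment lost
            pixel = y * self.width + x
            self.nodes.append((depth, color, self.heads[pixel]))
            self.heads[pixel] = len(self.nodes) - 1
            return True

        def resolve(self, x, y):
            """Walk the pixel's list and return colors sorted back-to-front,
            ready for ordered alpha blending."""
            frags, i = [], self.heads[y * self.width + x]
            while i != -1:
                depth, color, nxt = self.nodes[i]
                frags.append((depth, color))
                i = nxt
            return [c for _, c in sorted(frags, reverse=True)]
    ```

    The point Nick is making falls out of `insert`: as long as `max_nodes` is sized for the worst case you expect (closest camera, most hair on screen), the overflow branch is never hit and the algorithm behaves as if memory were unbounded.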

    I wouldn't be surprised if a PixelSync variant ran faster on Intel hardware, but this is a fairly moot point since only Intel supports PixelSync at the moment. Adaptive OIT is also a different algorithm, which makes the comparison difficult (still relevant, though).
    With TressFX 2 we are now at a sub-millisecond cost for a model at medium range in 1080p on an R280X.
     
  2. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Are you seeing interest in TressFX from developers? So far Tomb Raider is the only game that supports it, as far as I'm aware.
     
  3. NThibieroz

    Newcomer

    Joined:
    Jun 8, 2013
    Messages:
    31
    Likes Received:
    8
    Definitely. High-quality hair/fur simulation and rendering is a very active area that's currently being evaluated by developers. I would say that it is in the top 3 areas of active research along with Physically Based Rendering and Global Illumination.
    I can only mention the games that have publicly announced support for TressFX: Tomb Raider of course, plus Lichdom and Star Citizen.
     
  4. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    Thanks. I didn't know about Star Citizen.
     
  5. imaxx

    Newcomer

    Joined:
    Mar 9, 2012
    Messages:
    131
    Likes Received:
    1
    Location:
    cracks
    Just a question: as far as I understand, you do have a virtual memory manager in the GPU on all GCN-class cards.
    What prevents you from just having your CPU interrupt/message/whatever schedule & send the missing page to the GPU? You would not even need to block hundreds of MB, or at least you could block only the maximum reasonable for typical usage.
     
  6. RecessionCone

    Regular Subscriber

    Joined:
    Feb 27, 2010
    Messages:
    499
    Likes Received:
    177

    OIT is about much more than hair. Perhaps the instability issues can be worked around for TressFX, but all the caveats and tricks make the linked-list algorithm for OIT definitely something to be wary of in the general case.

    PixelSync makes more sense - it's simpler, more flexible, and higher performance. I think we will see more of it in the future.
     
  7. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    Even if the hardware supports halting on a page fault like that (it may or may not, I don't know), it's likely to be reeaaaallly slow. Definitely not something you want to be doing in the middle of a frame.

    Also as with most things in games, you really want to handle the worst case. If there's a case where you're going to need a huge amount of data, might as well allocate it up front. As Nick mentioned, it usually makes more sense to just allocate a big buffer and then constrain the game/view in ways that avoid the really bad situations if possible (i.e. hair or other fairly localized effects).
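    Nick's "couple of hundred MB" figure from earlier in the thread is easy to sanity-check with back-of-the-envelope arithmetic. The 12-byte node layout (packed color, depth, next pointer) and the average-overdraw factor below are illustrative assumptions, not TressFX's actual numbers:

    ```python
    def ppll_budget_mb(width, height, avg_overdraw, bytes_per_node=12):
        """Rough size of a preallocated PPLL fragment buffer, in MiB."""
        head_bytes = width * height * 4                 # one 32-bit head per pixel
        node_bytes = width * height * avg_overdraw * bytes_per_node
        return (head_bytes + node_bytes) / (1024 * 1024)

    # Even a fairly generous overdraw budget at 1080p stays in the
    # "couple of hundred MB" range:
    print(round(ppll_budget_mb(1920, 1080, 8)))   # prints 198
    ```

    Which is why preallocating for the worst case, rather than faulting pages in mid-frame, is an affordable strategy on GPUs with several GB of memory.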
     
  8. Davros

    Legend

    Joined:
    Jun 7, 2004
    Messages:
    14,891
    Likes Received:
    2,307
    Don't those two quotes contradict?

    ps:
    I blame Codemasters for calling the exe with the features GRIDAutosport_avx.exe
     
  9. imaxx

    Newcomer

    Joined:
    Mar 9, 2012
    Messages:
    131
    Likes Received:
    1
    Location:
    cracks
    Why would they? One is a function of pixels, the other is not.

    Well, "virtual memory" should support something like that, at least, I suppose.
    As for being slow, I don't think it would be much slower than PRT, no? As long as TressFX is not synchronous in the pipeline (i.e. it's in the compute pipe) it should be OK, I guess. Ah well, just speculating - I didn't check the blogged sources pointed to before yet.
     
  10. Andrew Lauritzen

    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,526
    Likes Received:
    454
    Location:
    British Columbia, Canada
    PRT/tiled resources do not allocate pages on the fly - all the allocating/mapping is still done from the CPU between frames, etc. Hitting real page faults, or otherwise stopping GPU execution to wait on OS/driver handling/remapping and so on, is the larger performance concern.

    It's worth noting that what current GPUs call "virtual memory" is not exactly the same thing as CPU virtual memory as it is typically defined and managed in an OS. The broad strokes in terms of there being hardware page tables that map virtual -> physical are similar but a lot of the details are different.

    The part that does the OIT stuff is standard 3D pipe (i.e. rendering the hair primitives). Only the physics parts are done in compute and even there I'd be concerned about stalling on long-latency stuff like CPU handling of page faults.

    @Davros: I'm not sure what you mean... you may have to clarify your question. And regarding Codemasters, yeah that is confusing :) I guess they are assuming that anything that supports pixel synchronization right now also supports AVX which, while true, are unrelated features.
     
  11. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,983
    Likes Received:
    1,496
    It's funny, but at PAX East I sat through an NVIDIA panel (and won a mouse) and they didn't talk about it once; they talked a lot about playing on that little handheld though.

    But that's what I'm talking about: compete with those types of features, don't just hinder performance for everyone.
     
  12. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    2,773
    Likes Received:
    2,560
    That's irrelevant; it's in a beta stage for a reason: it doesn't support all resolutions, has inconsistent bit rates or frame rates, only works with some games, has quality issues in others, has no desktop recording option/FPS counter, etc. You can read all about it here:
    http://www.anandtech.com/show/8224/hands-on-with-amds-gaming-evolved-client-game-dvr

    It's been available since Oct 2013, and they talked about it a lot in the past.
     
  13. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,798
    Likes Received:
    2,056
    Location:
    Germany
  14. PeterT

    Regular

    Joined:
    May 14, 2002
    Messages:
    702
    Likes Received:
    14
    Location:
    Austria
    Thank you for this post. I've been a bit annoyed with the whole AMD marketing spiel over the past year or so, which has completely rewritten the meaning of "open".
     
  15. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    I know I'm late but wanted to say that was an excellent follow-up article at Extremetech. Mantle and Gameworks tackle very different problems and are so different in scope they probably don't belong in the same conversation.

    Given past discussions on PhysX I don't quite get why folks are angry that GW runs on all DX11 hardware. Isn't that a good thing? I can think of a few alternative scenarios but their merit is questionable.

    1) IHVs stay out of the middleware game completely and let developers fend for themselves. However, that in itself would not guarantee "fairness" or performance parity as developers and middleware providers can also favor one architecture over another. It also doesn't guarantee that developers would include effects of similar quality on their own.

    2) NVIDIA should continue investing millions of dollars in middleware and give it away for free, source code included. That's silly for obvious reasons and there's no precedent for it. As someone mentioned, TressFX may be comparable to HairWorks, but AMD has nothing on the scale of the entire GameWorks library.

    As it stands today there's no proof that GameWorks is anything but good for consumers of all hardware. It frees developer resources to work on things unique to their games instead of reinventing the wheel for commodity effects. In the absence of such proof all the negative spin is just scare tactics.

    Now if we see a pattern of GW titles tanking on AMD hardware then there will be something to shout about...
     
  16. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,496
    Likes Received:
    910
    There are a lot of companies that contribute significantly to various kinds of open source software. Developing good software related to your own products or hardware can be beneficial even if you share it openly.
     
  17. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    10,430
    Likes Received:
    433
    Location:
    New York
    Yes of course. However GW is a full featured product with included tooling and support and millions of dollars in R&D. It's not just a few lines of code on GitHub :)
     
  18. 3dcgi

    Veteran Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    2,435
    Likes Received:
    263
    I didn't go back in the thread far enough to find Andrew's quote, so I may be taking this out of context, but your response is ironic in its timing, considering AMD recently opened up Mantle to Khronos. I agree the term "open" has been abused a bit recently, though, as Mantle wasn't and technically isn't open. A new Khronos API potentially based on Mantle will be open, though.
     
  19. PeterT

    Regular

    Joined:
    May 14, 2002
    Messages:
    702
    Likes Received:
    14
    Location:
    Austria
    Sure, and I have nothing against a new Khronos API being described as "open", because that's what it is. However, I found and continue to find the marketing of Mantle as "open" obnoxious, because it is both highly effective (looking at general internet discourse on the topic) and highly misleading - I see nothing ironic about that.
     
  20. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,183
    Likes Received:
    1,840
    Location:
    Finland
    How is it misleading, when they've made it clear that the "open" part isn't happening before the API is ready? Sure, it's one company controlling the development of the API and not a consortium like Khronos, but still (and others, as far as I've understood it, can create extensions, like AMD now has GCN extensions to Mantle).
     