IHV-specific Enhancements in Games

Discussion in 'Politics & Ethics of Technology' started by OlegSH, Feb 26, 2013.

  1. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    360
    Likes Received:
    252
Nice feature for AMD: in a third-person game the hair is always in the frame, so it will constantly drop performance on NV, like HDAO and the rest of AMD's usual set of effects, while the real IQ improvement is questionable.
    I wonder if NV will reply with its geometry-shader hair, considering that Radeons are weak at geometry shaders.
     
  2. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,147
    Likes Received:
    570
    Location:
    France


TressFX uses DirectCompute, no?

    So it should work on nVidia hardware too...?
     
  3. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    360
    Likes Received:
    252
Yes, it's CS based, and judging by the description it uses the same OIT code as the Mecha demo. The Mecha demo is pretty slow on NVIDIA, like HDAO and all the other AMD effects, and those are always in the frame (unlike tessellation, which is pretty rare). All of those effects are engineered to be slow on NVIDIA cards.
     
  4. gkar1

    Regular

    Joined:
    Jul 20, 2002
    Messages:
    614
    Likes Received:
    7
Yeah, but you'll see the same thing as in Dirt: Showdown, where a GTX 680 can barely match a 7950.
     
  5. Rootax

    Veteran Newcomer

    Joined:
    Jan 2, 2006
    Messages:
    1,147
    Likes Received:
    570
    Location:
    France
Even after a few driver revisions and some optimisation time?
     
  6. gkar1

    Regular

    Joined:
    Jul 20, 2002
    Messages:
    614
    Likes Received:
    7
  7. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    360
    Likes Received:
    252
I wonder if it's even possible to see a difference with OIT enabled vs. disabled; I bet OIT is there just to drop performance significantly.
     
  8. Brad Grenz

    Brad Grenz Philosopher & Poet
    Veteran

    Joined:
    Mar 3, 2005
    Messages:
    2,531
    Likes Received:
    2
    Location:
    Oregon
    Considering how long nVidia banged on about their compute advantage while it was mostly useless, they can hardly complain when they decided to sacrifice that performance right when it was becoming practical.
     
  9. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,796
    Likes Received:
    2,054
    Location:
    Germany
    Yes, even then. 7970 GE even beats Titan. :)
     
  10. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
Yup, if Nvidia cards are going to struggle due to a weakness, then they'll just have to improve performance, much like AMD had to with tessellation.
     
  11. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,141
    Likes Received:
    1,830
    Location:
    Finland
OlegSH, your attitude towards this is quite curious, considering that of the two companies it's not AMD that has a history of screwing over users of the other brand again and again.

    To my knowledge, Forward+ lighting is the only thing nVidia still hasn't been able to get running at proper speed.
     
  12. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    360
    Likes Received:
    252
Actually, AMD has the same history of screwing over users of the other brand again and again; AMD fans just don't notice it.
    GeForces are fast with CS in most cases (look at most CS benchmarks), except for the deliberately engineered ones, and that's a huge difference compared to tessellation being always SLOW (at high tessellation levels) on AMD. The problem is that AMD tends to pick the slowest implementations of certain shaders not for IQ reasons, but because they are slower on NVIDIA; compare that to the fastest shaders, such as FXAA and HBAO, which are used even on consoles. Same with tessellation: the algorithms and shaders are fast, but AMD simply cannot handle high tessellation levels.
     
  13. jimbo75

    Veteran

    Joined:
    Jan 17, 2010
    Messages:
    1,211
    Likes Received:
    0
The problem with these high tessellation levels is that the minuscule IQ gain (some would say it is actually a loss, as it looks less realistic in some cases) is nowhere near worth the performance loss on either AMD's or Nvidia's cards. However, AMD cards are generally hit harder, so it's a net gain for Nvidia to hobble their own cards with artificially high levels of tessellation.

    The difference in this case is clear: there is a very obvious IQ gain with TressFX. Whether or not that's worth whatever performance hit there is will be down to the player to decide.
     
  14. jlippo

    Veteran Regular

    Joined:
    Oct 7, 2004
    Messages:
    1,309
    Likes Received:
    405
    Location:
    Finland
Haven't really seen an nvidia demo with Forward+, but all the Forward+ research done on nvidia hardware seems to work just fine.
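    For reference, here's a minimal CPU-side sketch of the tile-based light culling idea at the heart of Forward+ (as used in the Leo demo and Dirt Showdown). On the GPU this culling runs as a compute shader, one thread group per screen tile; all names, numbers and the 2D screen-space test below are purely illustrative, not AMD's actual code.

    // Illustrative CPU sketch of Forward+ style tile light culling.
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Light { float x, y, radius; };  // screen-space position + radius, for brevity

    int main() {
        const int screenW = 1280, screenH = 720, tile = 16;
        const int tilesX = screenW / tile, tilesY = screenH / tile;

        std::vector<Light> lights = {{100, 100, 64}, {640, 360, 200}, {1200, 700, 32}};

        // Per-tile light lists; built by a compute-shader pass in the real technique.
        std::vector<std::vector<int>> tileLights(tilesX * tilesY);

        for (int ty = 0; ty < tilesY; ++ty) {
            for (int tx = 0; tx < tilesX; ++tx) {
                const float minX = tx * tile, minY = ty * tile;
                const float maxX = minX + tile, maxY = minY + tile;
                for (int i = 0; i < (int)lights.size(); ++i) {
                    // Closest point on the tile rectangle to the light center.
                    const float cx = std::fmax(minX, std::fmin(lights[i].x, maxX));
                    const float cy = std::fmax(minY, std::fmin(lights[i].y, maxY));
                    const float dx = lights[i].x - cx, dy = lights[i].y - cy;
                    if (dx * dx + dy * dy <= lights[i].radius * lights[i].radius)
                        tileLights[ty * tilesX + tx].push_back(i);  // light touches this tile
                }
            }
        }
        // The forward shading pass then loops only over each tile's culled list.
        std::printf("tile (40,22) sees %zu light(s)\n", tileLights[22 * tilesX + 40].size());
        return 0;
    }

    Building and consuming those per-tile lists is what turns the lighting into a compute workload, which is why it stresses exactly the part of the pipeline being argued about here.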
     
  15. Alexko

    Veteran Subscriber

    Joined:
    Aug 31, 2009
    Messages:
    4,489
    Likes Received:
    907
    :?:

Kepler has displayed pretty poor compute performance in a number of compute benchmarks, often falling behind even Fermi. Besides, Dirt Showdown came out just a couple of months after GK104, which means there's no way AMD could have foreseen that it would suck at compute when they started cooperating with Codemasters, especially since the Forward+ lighting technique used was first shown in the Leo demo, which came out even before GK104.

    And AMD never pushed for proprietary libraries (PhysX) with a sabotaged CPU codepath, never removed objects altogether or disabled AA in games when an AMD card was detected (Batman: AA), never tessellated flat or non-rendered surfaces (Crysis 2), sabotaging performance for both AMD and NVIDIA users (just to a larger extent for AMD users), etc.

    They just don't behave the same way when it comes to this kind of stuff.

    I wouldn't say that. Aliens vs. Predator made very good use of tessellation, for example. I don't remember any instance of tessellation making things look less realistic, except maybe the cobblestones in Unigine Heaven, but that was a demo meant to showcase tessellation, so it was understandably pushed to the extreme.
     
    #15 Alexko, Feb 26, 2013
    Last edited by a moderator: Feb 26, 2013
  16. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    8,141
    Likes Received:
    1,830
    Location:
    Finland
I was referring to the Dirt Showdown case, where the advanced lighting is Forward+ and (relatively speaking) kills performance on GeForces.

    Oleg,
    the GTX 680, for example, loses to the GTX 580 in many compute cases; I wouldn't call that "plenty fast".

    I assume you do have some proof of AMD "using the slowest versions because they're slower on nVidia", or of the effects being engineered to be purposely slow?
     
  17. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    360
    Likes Received:
    252
Tessellation in Crysis 2 is a very obvious IQ gain, much more so than TressFX. OIT for hair, however, is the same story as high tessellation levels for surfaces: barely noticeable, but it will drop performance a lot, and unlike tessellation, TressFX will drop performance the whole time Lara is in the frame, i.e. 100% of the time.
     
  18. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,796
    Likes Received:
    2,054
    Location:
    Germany
    It's enough to play to one's own strengths, just as Nvidia did with overtessellation.
     
  19. OlegSH

    Regular Newcomer

    Joined:
    Jan 10, 2010
    Messages:
    360
    Likes Received:
    252
Look at the SDK samples for proof: HDAO is slower than HBAO while the quality is nearly the same, and AMD's DOF and other shaders are very slow without any visible IQ improvement (if we can even consider the screen-space fakes an improvement; look at AMD's HK2207 demo, which contains all of AMD's screen-space shader effects, looks a generation behind, performs very poorly, and is much slower on NV).
     
  20. MfA

    MfA
    Legend

    Joined:
    Feb 6, 2002
    Messages:
    6,705
    Likes Received:
    458
They might not be written to run efficiently on NVIDIA cards, but per-pixel linked lists are IMO going to see a lot of use going forward.
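    For anyone curious, a minimal CPU-side sketch of the per-pixel linked list idea behind this kind of OIT: the GPU version (as in TressFX or the Mecha demo) builds the lists with an atomic counter and a "head pointer" buffer in a DirectCompute/pixel-shader pass; the names and structure below are illustrative only, not the actual AMD code.

    // Illustrative CPU sketch of per-pixel linked lists for order-independent transparency.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct Fragment {
        float    depth;  // view-space depth of this transparent fragment
        uint32_t color;  // packed RGBA
        int32_t  next;   // index of the next fragment for the same pixel, -1 = end of list
    };

    int main() {
        const int width = 4, height = 4;

        // One "head pointer" per pixel, plus a shared fragment pool that the GPU
        // appends to with an atomic counter (a plain push_back here).
        std::vector<int32_t>  head(width * height, -1);
        std::vector<Fragment> pool;

        auto emit = [&](int x, int y, float depth, uint32_t color) {
            Fragment f{depth, color, head[y * width + x]};   // link to previous head
            pool.push_back(f);                               // atomic append on the GPU
            head[y * width + x] = int32_t(pool.size()) - 1;  // this fragment is the new head
        };

        // Two overlapping transparent fragments landing on the same pixel.
        emit(1, 1, 0.7f, 0x80FF0000);
        emit(1, 1, 0.3f, 0x8000FF00);

        // Resolve pass: walk each pixel's list, sort back-to-front, then blend.
        for (int p = 0; p < width * height; ++p) {
            std::vector<Fragment> frags;
            for (int32_t i = head[p]; i != -1; i = pool[i].next)
                frags.push_back(pool[i]);
            std::sort(frags.begin(), frags.end(),
                      [](const Fragment& a, const Fragment& b) { return a.depth > b.depth; });
            // blend frags over the opaque color here (omitted)
        }
        return 0;
    }

    The unordered append plus the per-pixel sort in the resolve pass is where the bandwidth and compute cost comes from, which is why the technique hits different GPUs so differently.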
     