No DX12 Software is Suitable for Benchmarking *spawn*

Discussion in 'Architecture and Products' started by trinibwoy, Jun 3, 2016.

  1. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
  2. Scott_Arm

    Legend

    Joined:
    Jun 16, 2004
    Messages:
    15,134
    Likes Received:
    7,679
    I’d guess if you’re not VRAM limited you won’t see much improvement except maybe from some bandwidth saving if any of the test is bandwidth limited.
     
    BRiT likes this.
  3. Lurkmass

    Regular

    Joined:
    Mar 3, 2020
    Messages:
    565
    Likes Received:
    711
  4. Dampf

    Regular

    Joined:
    Nov 21, 2020
    Messages:
    283
    Likes Received:
    474
    This test has nothing to do with VRAM, as it's using the Texture Space Shading part of Sampler Feedback, not the more interesting streaming technique!

    Performance varies greatly though, depending on how high the benchmark's baseline performance is. I got nearly 20% more performance while my 2060 laptop was on battery.
     
    Scott_Arm and BRiT like this.
  5. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    I'd imagine that the more shading power a GPU has, the lower the gains from using texture space shading will be.
     
  6. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    3090.

    It's not a benchmark of texture space shading vs regular shading. It's a benchmark of how much faster texture space shading is when using hardware sampler feedback. But yeah, the whole point of texture space shading is to reduce the amount of shading in each frame, so if that isn't a bottleneck you won't see a significant benefit. And if their "software solution" is bandwidth heavy, that could also account for some of the difference.

    "In the first pass, we shade using texture space shading without sampler feedback, using a software solution developed in-house to determine the sampled texels along with a matching software sampler in the screen space sampling pass. The second pass uses the sampler feedback feature to determine the MIP levels and MIP regions that will be sampled for each frame, shading only them in texture space and sampling the shading map in screen space to produce the final image."
     
    Scott_Arm, BRiT, Jawed and 1 other person like this.
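    Editor's aside: the "software solution to determine the sampled texels" described in the quote can be approximated in a few lines. Below is a minimal Python sketch (all names are illustrative, not taken from the benchmark) of the standard isotropic mip-LOD estimate, plus a MinMip-style feedback map that records, per tile of the base level, the finest mip level any sample touched:

```python
import math

def mip_level_for_footprint(duvdx, duvdy, base_w, base_h, max_mip):
    # Standard isotropic LOD estimate: log2 of the larger screen-space
    # footprint extent, measured in texels of the base mip level.
    px = math.hypot(duvdx[0] * base_w, duvdx[1] * base_h)
    py = math.hypot(duvdy[0] * base_w, duvdy[1] * base_h)
    rho = max(px, py)
    return min(max_mip, int(math.log2(rho))) if rho > 1.0 else 0

def record_feedback(samples, base_w, base_h, max_mip, tile=8):
    # Software stand-in for a MinMip feedback map: for each tile of the
    # base level, keep the finest (smallest) mip level any sample asked
    # for, so only those regions need to be shaded in texture space.
    feedback = {}
    for (u, v), duvdx, duvdy in samples:
        mip = mip_level_for_footprint(duvdx, duvdy, base_w, base_h, max_mip)
        key = (int(u * base_w) // tile, int(v * base_h) // tile)
        feedback[key] = min(feedback.get(key, max_mip), mip)
    return feedback
```

    Hardware sampler feedback does this bookkeeping in the texture unit during sampling; a software path has to redo the footprint math and scatter the min-mip writes itself, which is one place the extra ALU and bandwidth cost speculated about above could come from.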
  7. Andrew Lauritzen

    Andrew Lauritzen Moderator
    Moderator Veteran

    Joined:
    May 21, 2004
    Messages:
    2,629
    Likes Received:
    1,227
    Location:
    British Columbia, Canada
    Sampler feedback - just like the original tiled resources - has always been in a weird space. Even the theoretical benefit is pretty marginal... it's basically cases where you have an anisotropic kernel walk across the edge of a tile boundary and you're unwilling to do borders. Obviously borders are usually completely reasonable for texture data.

    For texture-space shading you may want to expend a bit of effort to avoid shading border pixels but it's still kinda in the noise IMO. Without more details on what the "software solution" is and why they need a software *sampling* solution in the second pass (presumably because they are doing indirections per tap instead of borders?), I'm not sure if there's a meaningful way to interpret the results of the test to be honest.
     
    Jawed, DavidGraham, Lightman and 5 others like this.
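    A quick back-of-envelope supports the "kinda in the noise" estimate above. This hedged Python sketch (the 64x64 tile size is an assumption for illustration, not something the benchmark documents) computes the redundant shading cost of giving every tile a border instead of doing per-tap indirections:

```python
def border_overhead(tile, border=1):
    # Fraction of extra texels shaded when each tile carries a
    # redundant border of the given width on every side.
    padded = tile + 2 * border
    return padded * padded / (tile * tile) - 1.0
```

    For 64x64 tiles with a 1-texel border this comes out to roughly 6% extra shading work, and it shrinks further with larger tiles, so a borderless scheme with per-tap indirections would have to be very cheap indeed to come out ahead.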
  8. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,055
    Likes Received:
    3,112
    Location:
    New York
    It seems to me that texture space shading is about a decade too late. It attempts to cache and reuse shaded texels across multiple future frames. But this assumes that the lighting applied to those texels is constant over those frames. That seems less and less likely with modern games embracing multiple (many) dynamic direct and bounce light sources.
     
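    The reuse argument above can be made concrete with a toy model. This Python sketch (the class and its bookkeeping are purely illustrative, not any real engine's API) caches shaded texels and invalidates them whenever the lighting they were shaded under changes:

```python
class TexelShadeCache:
    # Toy model of texture-space shading reuse across frames.
    def __init__(self):
        self.cache = {}    # texel id -> lighting version it was shaded under
        self.shaded = 0    # texels actually shaded
        self.reused = 0    # texels served from the cache

    def shade_frame(self, texels, lighting_version):
        for t in texels:
            if self.cache.get(t) == lighting_version:
                self.reused += 1   # cached result is still valid
            else:
                self.shaded += 1   # lighting changed: must re-shade
                self.cache[t] = lighting_version

texels = range(1000)

static = TexelShadeCache()        # lights never move: near-total reuse
for frame in range(10):
    static.shade_frame(texels, lighting_version=0)

dynamic = TexelShadeCache()       # lights move every frame: zero reuse
for frame in range(10):
    dynamic.shade_frame(texels, lighting_version=frame)
```

    With static lighting, 9 of every 10 frames hit the cache; with per-frame dynamic lighting the cache never hits, which is exactly the scenario where texture space shading stops paying for itself.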
  9. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    Nvidia's 496.13 driver seems to add about 10% performance for Ampere in AC Valhalla.
     
  10. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
  11. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    xpea likes this.
  12. Kaotik

    Kaotik Drunk Member
    Legend

    Joined:
    Apr 16, 2003
    Messages:
    10,244
    Likes Received:
    4,465
    Location:
    Finland
    Silent_Buddha likes this.
  13. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
  14. DegustatoR

    Veteran

    Joined:
    Mar 12, 2002
    Messages:
    3,240
    Likes Received:
    3,397
    UE4's source is open to AMD (and other IHVs); they can and do submit into it whatever they need to get better performance.
    It is in fact a much fairer engine for assessing cross-vendor performance than whatever "most developers" do for AMD-based consoles.
     
    OlegSH likes this.
  15. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Indeed, but that is an optimization issue, not an API issue. The developer should go ahead and optimize the DX11 path more, and the issue should go away. Many console games suffered this exact issue at launch, and the developers fixed it later with patches.
     
  16. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    No, the issue of reduced DX12 fps plagued many engines even on AMD hardware. Take a tour through this thread and see for yourself.
     
  17. techuse

    Veteran

    Joined:
    Feb 19, 2013
    Messages:
    1,424
    Likes Received:
    908
    It happens, yes, but more than 50% of the time (i.e. most of the time) DX12 is faster for AMD.
     
  18. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    This is simply not true. From memory, most of the games that featured both APIs ran slower in DX12 on both AMD and NVIDIA.
     
  19. Putas

    Regular

    Joined:
    Nov 7, 2004
    Messages:
    737
    Likes Received:
    354
    When top CPUs were used.
     
  20. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,976
    Likes Received:
    5,213
    Six years ago, DX12 came with the promise of reducing CPU load while also increasing scene complexity; sadly, we achieved little of that. There are still games that suffer fps losses due to DX12 to this day. Even when the game has DXR, the developers can't get DX12 to perform better than DX11 in many cases. I recently tried the game Deliver Us The Moon without DXR, and DX11 gave me 72fps in one scene, but DX12 gave me just 64fps on my 2080Ti. That's a 12% difference in fps I could have used elsewhere. Ghostrunner behaves the same, Control too, Resident Evil and several others.

    This is really bad. In the PC space, we sometimes lose fps due to Hyper-Threading, CPU security patches, and now DirectX 12 as well. Those fps can really come in handy if we enable Ray Tracing, where every little bit helps.

    There are other games where DX12 is equal to DX11, or slightly better: Metro Exodus, Division 2, Hitman 3 and Shadow of the Tomb Raider. This needs to become the norm.

     
    xpea, pharma and Lightman like this.

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.