Nvidia Pascal Reviews [1080XP, 1080ti, 1080, 1070ti, 1070, 1060, 1050, and 1030]

Discussion in 'Architecture and Products' started by Love_In_Rio, May 17, 2016.

  1. spworley

    Newcomer

    Joined:
    Apr 19, 2013
    Messages:
    146
    Likes Received:
    190
Does Pascal's hardware texture compression reduce the actual storage size of textures in RAM? The compression's main utility is to boost effective bandwidth, but if it also reduces RAM use, then Nvidia's superior compression rate may make its 3GB card perform better than expected in RAM-limited situations. Sure, the compression may only average, say, 25% or so (guessing here), but when you're running near the hard RAM limit, such a boost may be significant. We'll see soon when the benchmarks come out.
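Back-of-the-envelope, the bandwidth side of that guess works out as follows (both the 192 GB/s bus rate and the 25% average saving are purely illustrative numbers, not measurements of any real card):

```python
# Effective-bandwidth arithmetic for lossless framebuffer compression.
# Assumptions for illustration only: a 192 GB/s bus and a 25% average
# reduction in bytes moved.
raw_bandwidth_gbs = 192.0
avg_traffic_saved = 0.25

# If compression removes 25% of the traffic, the same bus behaves like
# one that is 1 / (1 - 0.25) ~= 1.33x faster.
effective_bandwidth_gbs = raw_bandwidth_gbs / (1.0 - avg_traffic_saved)
print(f"effective bandwidth: {effective_bandwidth_gbs:.0f} GB/s")
```

The same multiplier would apply to capacity only if the data actually sat in VRAM in compressed form, which is exactly the question raised above.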
     
    pharma likes this.
  2. homerdog

    homerdog donator of the year
    Legend Veteran Subscriber

    Joined:
    Jul 25, 2008
    Messages:
    6,270
    Likes Received:
    1,038
    Location:
    still camping with a mauler
You talking about the delta color compression? My understanding has always been that it only saves bandwidth and not storage space, because the card must allot enough RAM for the uncompressed data, since it doesn't know beforehand whether compression will even do anything. And I'm not even sure the data is stored in compressed format; it could be decompressed before storing.
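A toy sketch of that worst-case-allocation argument (the 25% average saving is a made-up figure; real DCC behaviour is undocumented):

```python
# Conceptual sketch: with lossless DCC the driver must reserve the full
# uncompressed size, because any given tile may turn out incompressible.
# Only the bytes actually transferred shrink, not the VRAM footprint.
def allocation_bytes(width, height, bytes_per_pixel):
    # Reserved in VRAM: always the uncompressed worst case.
    return width * height * bytes_per_pixel

def transferred_bytes(width, height, bytes_per_pixel, avg_saved):
    # Actually moved over the bus, on average (avg_saved is hypothetical).
    return int(width * height * bytes_per_pixel * (1.0 - avg_saved))

alloc = allocation_bytes(1920, 1080, 4)         # bytes reserved
moved = transferred_bytes(1920, 1080, 4, 0.25)  # bytes moved on average
```

So a 1080p RGBA8 surface still reserves ~7.9 MiB even if, on average, only ~5.9 MiB crosses the bus.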
     
  3. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,936
    Likes Received:
    2,272
    Location:
    Germany
That's my understanding as well. There's no texture-specific compression that I'm aware of besides the standardized (DX) formats, which of course already shrink the space textures take up. As for decompression, I don't think that matters for VRAM occupancy, since resources aren't stored there in decompressed form - you want to use the transfer rate efficiently, after all.

I know some publications - more mainstream ones, I guess? - refer to the newly marketed delta color compression as texture compression, but AFAIK that's not correct.

What might come in handy for the 3GB models is that Nvidia probably has some experience optimizing for a lower memory footprint, i.e. smart-yet-aggressive eviction policies, because of the GTX 970 and GTX 780 SKUs.
     
  4. DavidGraham

    Veteran

    Joined:
    Dec 22, 2009
    Messages:
    3,279
    Likes Received:
    3,527
I can testify to that on some level. I had an old mid-tier computer with a 3GB 660 Ti; normally I wouldn't be able to play memory-intensive games such as Shadow of Mordor, AC Unity, or Arkham Knight at the highest texture level, even @720p. The games would stutter and hitch like crazy. When I upgraded the PC to 16GB of RAM, that problem was gone, and I was able to play those games @720p, and 1080p (when feasible), with the highest texture resolution and graphics settings, and with no memory-swap stuttering whatsoever. So in my experience the extra system RAM helped alleviate shortages in VRAM.
     
  5. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
Hardware texture compression does reduce storage space in RAM, however that ONLY applies to pre-compressed textures provided by the developer. Think of it like reading JPEG images directly: a portion is loaded in and then decompressed as needed. These are cooked assets that took a significant amount of time to compress in the first place and get decompressed on the fly by the texture units - ASTC, for instance, which is standardized among all recent hardware. So a dev could choose to make textures more lossy to target a lower memory footprint, in addition to adjusting resolution. This is typically what the texture quality setting in a game does.
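For a sense of scale, the fixed bit rates of the common block formats translate into footprints like this (the bits-per-pixel figures are the standard rates for each format; the 2048x2048 texture is just an example size, and mipmaps, roughly +33%, are ignored):

```python
# Footprint comparison of fixed-rate texture formats in MiB.
def texture_mib(width, height, bits_per_pixel):
    return width * height * bits_per_pixel / 8 / 2**20

FORMATS_BPP = {
    "RGBA8 (uncompressed)": 32,
    "BC1/DXT1": 4,         # 64 bits per 4x4 block
    "BC7": 8,              # 128 bits per 4x4 block
    "ASTC 6x6": 128 / 36,  # 128 bits per 6x6 block, ~3.56 bpp
    "ASTC 8x8": 2,         # 128 bits per 8x8 block
}

for name, bpp in FORMATS_BPP.items():
    print(f"{name}: {texture_mib(2048, 2048, bpp):.2f} MiB")
```

This is also why ASTC is attractive for footprint tuning: unlike the BCn formats, its block size (and thus its bit rate) is a per-texture quality knob.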

DCC, on the other hand, applies to rendered surfaces like framebuffers. It won't shrink the footprint, but it reduces bandwidth requirements - transmission compression (transmitting differences) is a better description. In this case it isn't necessarily Nvidia's DCC that helps, but cache efficiency from the tiling: reading/writing a minimal number of times is more efficient. This is one of the areas where devs frequently seem to run into issues, as different levels of compression affect how readily the resource can be read. Another caveat is postprocessing: pure compute generally doesn't have access to the texture units that handle the de/compression.
     
  6. DeanoC

    DeanoC Trust me, I'm a renderer person!
    Veteran Subscriber

    Joined:
    Feb 6, 2003
    Messages:
    1,469
    Likes Received:
    185
    Location:
    Viking lands
AFAIK the only desktop HW that currently supports any form of ASTC is Skylake. So for desktop, *at the moment*, it's not very useful, as you have to transcode to another compression format that is supported. Though this may change in the future, obviously.

    If the Desktop Pascals do support ASTC, that would be cool but I've not heard it mentioned...
     
    BRiT likes this.
  7. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
It seems there is software support, where ASTC textures are decompressed into a supported format, but they aren't necessarily used directly. Tegra appears to support ASTC in hardware. Polaris dropped ETC2, which I had also assumed was to be replaced by ASTC. This is in addition to both AMD and Nvidia providing tools to compress to ASTC. Details are surprisingly difficult to find. I assumed it was added given the benefits, but it appears I may have been mistaken.

    At the very least I'd have expected AMD to add support for Polaris or Vega just for the console refresh.
     
    #587 Anarchist4000, Aug 20, 2016
    Last edited: Aug 20, 2016
  8. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,541
    Likes Received:
    2,231
  9. I.S.T.

    Veteran

    Joined:
    Feb 21, 2004
    Messages:
    3,174
    Likes Received:
    389
    Why was ASTC dropped?
     
  10. Anarchist4000

    Veteran Regular

    Joined:
    May 8, 2004
    Messages:
    1,439
    Likes Received:
    359
    The issue isn't that it was dropped so much as not yet integrated into the design. Driver support was added so they can decode with software along with optional DX12 support, but hardware support appears lacking. Only the mobile parts (Tegra) and Skylake appear to have full support. Would be nice to see official confirmation on this though. There isn't a whole lot of documentation out there on it. It still seems surprising consoles would have missed it considering their lifecycle.
     
    I.S.T. likes this.
  11. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,541
    Likes Received:
    2,231
    silent_guy and Razor1 like this.
  12. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    hmm 3gb didn't hold it back as much as I thought it would.
     
  13. pharma

    Veteran Regular

    Joined:
    Mar 29, 2004
    Messages:
    3,541
    Likes Received:
    2,231
    Interesting to note that in the Guru3D 1060 3GB review they mentioned the following in the conclusion:
     
  14. I.S.T.

    Veteran

    Joined:
    Feb 21, 2004
    Messages:
    3,174
    Likes Received:
    389
    I must be thinking of a different compression standard, then. Thanks for the info. :)
     
  15. I.S.T.

    Veteran

    Joined:
    Feb 21, 2004
    Messages:
    3,174
    Likes Received:
    389
    I... that's unusual. Anyone know of any other tests or anecdotes repeating this?
     
  16. Silent_Buddha

    Legend

    Joined:
    Mar 13, 2007
    Messages:
    17,044
    Likes Received:
    6,339
    Ah yes, Nvidia and their wonderful drivers. :) I remember back a long time ago having to juggle drivers on Nvidia cards depending on what game I was running to either avoid bugs or avoid performance pitfalls. I'm hoping I don't run into the same situation with my 1070.

    Regards,
    SB
     
  17. CarstenS

    Veteran Subscriber

    Joined:
    May 31, 2002
    Messages:
    4,936
    Likes Received:
    2,272
    Location:
    Germany
Sooner or later, everyone will experience that drivers are still not "finished".
     
    pharma and Razor1 like this.
  18. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
Probably even their driver team gets confused by the differences between the 1060 3GB and the 1060 6GB :D

    Cheers
     
  19. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
let's not forget the different number of shaders ;)
     
    pharma likes this.
  20. CSI PC

    Veteran Newcomer

    Joined:
    Sep 2, 2015
    Messages:
    2,050
    Likes Received:
    844
Well, custom Pascal seems to be holding up reasonably well in the AMD-sponsored Deus Ex: Mankind Divided, apart from the 1060.
I prefer to look at custom AIB comparisons rather than reference models, because that is what most gamers would prefer to buy, and they're more indicative of what is achievable with optimal hardware and profiles (thermals, power settings, etc.).
The custom 1060 seems to be around 6-10% behind a custom 980, while the custom 480 is doing well, being only around 5-15% behind a Fury X depending upon resolution.
    http://www.pcgameshardware.de/Deus-.../Specials/Benchmarks-Test-DirectX-12-1204575/
    They have a nice list of diverse GPUs tested.

    Cheers
     
    Silent_Buddha likes this.