Nvidia Pascal Reviews [1080XP, 1080ti, 1080, 1070ti, 1070, 1060, 1050, and 1030]

I am inclined to believe it, with the caveat, the HUGE caveat, that it's only true as long as the 3 GiB of vidmem suffice. And we know that even 4-GiB SKUs are already struggling in a few titles at 1080p.
Does Pascal's hardware texture compression reduce actual storage size of textures in RAM? The compression's main utility is to boost effective bandwidth, but if it also reduces RAM use then NVidia's superior compression rate may make its 3GB card perform better than expected in RAM limited situations. Sure, the compression may only average say 25% or so (guessing, here), but when you're running near the hard RAM limit, such a boost may be significant. We'll see soon when the benchmarks come out.
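Back-of-the-envelope: if a hypothetical 25% average reduction really did apply to texture storage, 3 GB of physical VRAM would hold what would otherwise need 3 / 0.75 = 4 GB uncompressed, which is why the question matters right at the limit. Whether the compression works that way at all is exactly what gets discussed below.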
 
You talking about the delta color compression? My understanding has always been that it only saves bandwidth and not storage space, because the card must allot enough RAM for the uncompressed data since it doesn't know beforehand if compression will even do anything. And I'm not even sure if data is stored in compressed format; it could be decompressed before storing.
 
That's my understanding as well. There's no texture-specific compression that I'm aware of besides the standardized (DX) formats, which of course already shrink the space textures take. As to decompression, I don't think that matters for VRAM occupancy, since textures aren't stored there in decompressed form - you want to use the transfer rate efficiently, after all.

I know some publications - more mainstream ones I guess? - refer to the newly marketed Delta-C-compression as texture compression, but AFAIK that's not correct.

What might come in handy for the 3-GB models is that Nvidia probably has some experience optimizing for a low(er) memory footprint, i.e. smart-yet-aggressive eviction policies, thanks to the GTX 970 and GTX 780 SKUs.
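As a rough illustration of what such an eviction policy boils down to, here is a minimal sketch of an LRU budget in Python (a toy model, not Nvidia's actual driver logic; the budget and resource names/sizes are made up):

```python
from collections import OrderedDict

class VramBudget:
    """Toy LRU eviction under a fixed VRAM budget (illustrative only)."""

    def __init__(self, budget_bytes):
        self.budget = budget_bytes
        self.resident = OrderedDict()  # name -> size in bytes, oldest first
        self.used = 0

    def touch(self, name, size):
        """Mark a resource as needed now, evicting least-recently-used ones until it fits."""
        if name in self.resident:
            self.resident.move_to_end(name)  # recently used, so keep it around longest
            return
        while self.used + size > self.budget and self.resident:
            victim, victim_size = self.resident.popitem(last=False)
            self.used -= victim_size
            print(f"evict {victim} ({victim_size // 2**20} MiB)")
        self.resident[name] = size
        self.used += size

# Example: a 3 GiB card juggling texture sets that don't all fit at once.
vram = VramBudget(3 * 2**30)
vram.touch("level_textures", 2 * 2**30)
vram.touch("character_textures", 1 * 2**30)
vram.touch("cutscene_textures", 1 * 2**30)  # forces the level textures out
```

The interesting part in a real driver is of course which resources it is willing to evict and when, which is where that GTX 970/780 experience would show.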
 
I can testify to that on some level. I had an old mid-tier computer with a 3GB 660 Ti; normally I wouldn't be able to play memory-intensive games such as Shadow of Mordor, AC Unity or Arkham Knight at the highest texture level, even @720p. The games would stutter and hitch like crazy. When I upgraded the PC to 16GB of RAM, that problem was gone, and I was able to play those games @720p, and 1080p (when feasible), with the highest texture resolution and graphics settings and with no memory-swap stuttering whatsoever. So in my experience the extra system RAM helped alleviate shortages in VRAM.
 
Does Pascal's hardware texture compression reduce actual storage size of textures in RAM? The compression's main utility is to boost effective bandwidth, but if it also reduces RAM use then NVidia's superior compression rate may make its 3GB card perform better than expected in RAM limited situations. Sure, the compression may only average say 25% or so (guessing, here), but when you're running near the hard RAM limit, such a boost may be significant. We'll see soon when the benchmarks come out.
Hardware texture compression does reduce storage space in RAM, however that ONLY applies to pre-compressed textures provided by the developer. Think of it like reading JPEG images directly: a portion is loaded in and then decompressed as needed by the texture units, from cooked assets that took a significant amount of time to compress in the first place. ASTC, for instance, is standardized among all recent hardware. So a dev could choose to make textures more lossy to target a lower memory footprint, in addition to adjusting resolution. This is typically what the texture quality setting in a game would do.
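To put rough numbers on the footprint side, here is a small sketch using the standard bit rates of the common block formats (the 4096x4096 texture and the format selection are just examples):

```python
# Approximate VRAM footprint of a 4096x4096 texture with a full mip chain (~1.33x base level).
BITS_PER_TEXEL = {
    "RGBA8 (uncompressed)": 32,
    "BC1 / DXT1": 4,
    "BC7": 8,
    "ASTC 6x6": 128 / 36,  # 128-bit blocks covering 6x6 texels
}

width = height = 4096
mip_factor = 4 / 3  # geometric series for the full mip chain

for fmt, bits in BITS_PER_TEXEL.items():
    size_mib = width * height * mip_factor * bits / 8 / 2**20
    print(f"{fmt:22s} ~{size_mib:6.1f} MiB")
```

Making textures "more lossy" in this sense means picking a format further down that table (or a larger ASTC block size), which is a footprint knob rather than a bandwidth one.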

DCC on the other hand would apply to rendered objects like framebuffers. It won't shrink the footprint, but it does reduce bandwidth requirements; transmission compression (transmitting differences) is a better description. In this case it isn't necessarily Nvidia's DCC that helps, but cache efficiency from the tiling: reading/writing a minimal number of times is more efficient. This is one of the areas where devs frequently seem to run into issues, as different levels of compression affect how readily the resource can be read. Another caveat is that postprocessing, using true compute, generally doesn't have access to the texture units to handle the de/compression.
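And a toy way to picture the "transmission compression" point, assuming a simple per-row delta scheme rather than the real hardware algorithm:

```python
import numpy as np

# A smooth gradient tile, typical of rendered framebuffer content.
tile = np.linspace(0, 255, 64, dtype=np.uint8).reshape(8, 8)

# Delta-encode each pixel against its left neighbour. Small deltas can be packed
# into fewer bits when the tile moves over the bus, but the allocation in VRAM
# still has to reserve the full-size worst case, since some tiles won't compress.
deltas = np.diff(tile.astype(np.int16), axis=1)
print("full tile bytes:", tile.nbytes)
print("largest per-pixel delta:", int(np.abs(deltas).max()))  # tiny, so it compresses well
```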
 
ASTC, for instance, is standardized among all recent hardware.
AFAIK the only desktop HW that currently supports any form of ASTC is Skylake. So for desktop *at the moment* it's not very useful, as you have to transcode to another compression format that is supported. Though this may change in the future, obviously.

If the Desktop Pascals do support ASTC, that would be cool but I've not heard it mentioned...
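The transcode-or-use-directly decision being described would look roughly like this at load time; the helper names below are hypothetical placeholders, not a real API:

```python
def transcode_astc_to_bc7(blob):
    # Placeholder for whatever CPU-side transcode/re-encode step the asset pipeline uses.
    raise NotImplementedError("depends on the tooling in use")

def pick_texture_upload(gpu_extensions, astc_blob):
    """Hypothetical loader logic: upload ASTC directly only if the GPU decodes it in hardware."""
    if "GL_KHR_texture_compression_astc_ldr" in gpu_extensions:
        return astc_blob  # upload as-is; the texture units decode on sample
    # Otherwise pay for a transcode to a desktop-supported block format (e.g. BC7),
    # or decompress to plain RGBA8 and lose the footprint advantage entirely.
    return transcode_astc_to_bc7(astc_blob)
```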
 
It seems there is software support where ASTC textures are decompressed into a supported format, but they are not necessarily used directly. Tegra appears to support ASTC in hardware. Polaris dropped ETC2, which I also assumed was to be replaced by ASTC. This is in addition to both AMD and Nvidia providing tools to compress to ASTC. Details are surprisingly difficult to find. I assumed it was added given the benefits, but it appears I may have been mistaken.

At the very least I'd have expected AMD to add support for Polaris or Vega just for the console refresh.
 

Why was ASTC dropped?
 
The issue isn't that it was dropped so much as not yet integrated into the design. Driver support was added so they can decode with software along with optional DX12 support, but hardware support appears lacking. Only the mobile parts (Tegra) and Skylake appear to have full support. Would be nice to see official confirmation on this though. There isn't a whole lot of documentation out there on it. It still seems surprising consoles would have missed it considering their lifecycle.
 
Interesting to note that in the Guru3D 1060 3GB review they mentioned the following in the conclusion:
For me it is rare to run into them, however the new 372.54 driver was absolute horse-crap for the 3GB GTX 1060. When we started benchmarking, 3DMark ran fine; then when we hit titles like Doom and Tomb Raider, the performance just crippled at 1080P and 1440P to very weird perf results. For example, Tomb Raider at 2560x1440 returned an average of 34 FPS with this driver, and we started noticing similar behaviour in multiple games; surely that could not be related to 3 GB less memory. There was one other driver that worked on the 3GB edition, and that older driver (368.64) did install on this card. Once we installed that one, the majority of problems vanished, and the same Tomb Raider test now resulted in a 46 FPS average.
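For scale, going from 34 FPS to 46 FPS is roughly a 35% swing from the driver alone, far more than the missing 3 GB by itself would normally explain.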
 
The issue isn't that it was dropped so much as not yet integrated into the design. Driver support was added so they can decode with software along with optional DX12 support, but hardware support appears lacking. Only the mobile parts (Tegra) and Skylake appear to have full support. Would be nice to see official confirmation on this though. There isn't a whole lot of documentation out there on it. It still seems surprising consoles would have missed it considering their lifecycle.

I must be thinking of a different compression standard, then. Thanks for the info. :)
 
Interesting to note that in the Guru3D 1060 3GB review they mentioned the following in the conclusion:

Ah yes, Nvidia and their wonderful drivers. :) I remember back a long time ago having to juggle drivers on Nvidia cards depending on what game I was running to either avoid bugs or avoid performance pitfalls. I'm hoping I don't run into the same situation with my 1070.

Regards,
SB
 
Well, custom Pascal seems to be holding up reasonably well in the AMD-sponsored Deus Ex: Mankind Divided, apart from the 1060.
I prefer to look at custom AIB comparisons rather than reference models, because that is what most gamers would prefer to buy, and it is more indicative of what is achievable with optimal HW and profiles due to thermals, power settings, etc.
The custom 1060 seems to be behind a custom 980 by around 6-10%, while the custom 480 is doing well, being only around 5-15% behind the Fury X depending upon resolution.
http://www.pcgameshardware.de/Deus-.../Specials/Benchmarks-Test-DirectX-12-1204575/
They have a nice list of diverse GPUs tested.

Cheers
 