You don't have to be an ass. I am just asking questions. Or is it that you don't know how performance would be affected with uncompressed textures? It's okay if you don't know, as I don't either. Which is why I originally said
Fine, apologies for being a bit of a dick, but I don't know how else to communicate that you're just adding an additional roadblock to GPUs using textures effectively; this wouldn't affect the problem of VRAM. I just don't see how this solves that problem in any way.
The only situation where I can see this bringing an advantage is cutting out the decompression step, which primarily benefits CPU usage. So maybe you could have the textures decompressed at game install and saved to the GPU's SSD, where they wouldn't have to be decompressed by the CPU during gameplay (?). So maybe for something like TLOU, you wouldn't have those large CPU spikes when traversing between areas. Except in that case, you've now added an I/O bottleneck, because those textures have to be pulled from the SSD in their uncompressed state. It also raises the question: if you're fine with extending game install times so that 200+GB of completely uncompressed textures get written to disk, why would that be an easier ask for the end user just because the SSD is on the GPU rather than on your motherboard? It's still storage space that will be in competition with other games, just like any SSD.
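Rough back-of-envelope on what decompress-at-install costs you. Every number here is an assumption picked for illustration (compression ratio, texture set size, write speed), not a measurement from any specific game:

```python
# What storing textures uncompressed on a GPU-mounted SSD would cost at install time.
# All figures below are illustrative assumptions, not measurements.

compressed_texture_set_gb = 100   # assumed on-disk size with the usual compression
compression_ratio = 2.0           # assumed average ratio (2:1)
ssd_write_speed_gbps = 3.0        # assumed sustained write speed of the on-card SSD

uncompressed_gb = compressed_texture_set_gb * compression_ratio
extra_install_write_s = uncompressed_gb / ssd_write_speed_gbps

print(f"Uncompressed texture set on the GPU's SSD: {uncompressed_gb:.0f} GB")
print(f"Extra time just writing it out at install: ~{extra_install_write_s / 60:.1f} minutes")
```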
DirectStorage makes far more efficient use of SSDs, in no small part by bypassing a lot of the legacy cruft in the Windows filesystem, but the GPU decompression added in 1.1 (and built into the PS5/Xbox) has a twofold benefit: it massively reduces CPU usage for the decompression step, and it massively increases the effective throughput of the SSD, since the effective bandwidth from the SSD can be double or triple its rated GB/s depending on how well the stored texture format compresses. If you store the textures uncompressed on the SSD, you lose that benefit.
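To put numbers on that throughput point (the drive speed and compression ratio here are assumptions for illustration; real ratios vary per asset):

```python
# Effective read bandwidth when the GPU decompresses after the transfer:
# the SSD only moves the *compressed* bytes, so the ratio multiplies throughput.
# Numbers are illustrative assumptions.

ssd_rated_gbps = 7.0      # assumed PCIe 4.0 NVMe sequential read speed
compression_ratio = 2.5   # assumed average compression ratio of the stored textures

effective_compressed = ssd_rated_gbps * compression_ratio  # compressed on disk + GPU decompression
effective_uncompressed = ssd_rated_gbps                     # uncompressed on disk, no multiplier

print(f"Compressed on disk + GPU decompression: ~{effective_compressed:.1f} GB/s of texture data")
print(f"Uncompressed on disk:                    {effective_uncompressed:.1f} GB/s of texture data")
```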
So remember how this whole discussion came about: VRAM limitations. Nvidia's markup of $100 for an extra 8GB is egregious (from most reports I've read, an additional 8GB of GDDR6, especially the standard GDDR6 the 4060 Ti uses rather than GDDR6X, costs $20-$30), yet you instead want to tackle it by having GPU vendors add an SSD mount (on a heat source like a GPU card!), having the user purchase and install an additional NVMe SSD, and having game developers support this and write 200+GB of uncompressed textures to that SSD at every game install?
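Rough cost math on the two routes. The GDDR6 figure is the ballpark BOM cost quoted above; the SSD and mount figures are assumptions I've plugged in for illustration:

```python
# Rough cost comparison: more VRAM vs. a GPU-mounted SSD for textures.
# GDDR6 figure is the ballpark BOM cost mentioned above; the rest are assumptions.

gddr6_cost_per_8gb = 25           # ~$20-30 BOM for 8GB of plain GDDR6
nvme_1tb_cost = 60                # assumed price of a 1TB NVMe drive the user must buy
mount_and_cooling_overhead = 15   # assumed extra board/heatsink cost for an on-card M.2 slot

print(f"Extra 8GB of VRAM (BOM):       ~${gddr6_cost_per_8gb}")
print(f"On-card SSD route (all-in):    ~${nvme_1tb_cost + mount_and_cooling_overhead}")
```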
I think the advantages, as well as the odds of that happening, are basically nil. Even with Nvidia's markup, the cost of just adding 8, 12 or even 16GB of VRAM to cards would likely be more economical, and also more useful: 8-16GB of 200+GB/s VRAM versus a ~5GB/s SSD. We would have DirectStorage 1.1 in wide use and 16GB of VRAM as the entry level long before SSDs on GPUs would be something developers were targeting. And once you have GPU decompression, the point of putting an SSD on the GPU evaporates, since PCIe has more than enough bandwidth to transport the textures to the GPU in their compressed state (and uncompressed too, as the fastest NVMe bandwidth is a fraction of PCIe's).
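To make that bandwidth gap concrete, here's the ladder with rough, commonly cited figures for this class of hardware (all used as assumptions, not specs for any particular card):

```python
# How long does it take to move 8GB of texture data across each link?
# Bandwidth figures are rough/typical values, used as assumptions for illustration.

payload_gb = 8.0

links_gbps = {
    "GDDR6 VRAM (mid-range card)": 250.0,  # hundreds of GB/s on-card
    "PCIe 4.0 x16 (host <-> GPU)":  32.0,  # theoretical max; practical is a bit lower
    "PCIe 4.0 x4 NVMe SSD":          7.0,  # fastest consumer drives, sequential read
}

for link, gbps in links_gbps.items():
    print(f"{link:30s}: {payload_gb / gbps * 1000:7.1f} ms to move {payload_gb:.0f} GB")
```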