So I don't think I've been explicit enough in my posts, so perhaps there's some mix-up about where my arguments are directed, not to mention a poor typo I'm noticing now. To begin with, I want to affirm that I agree that sometime in the future there will be some form of hardware decompression on PC. But I want to caveat that all of my responses have been directed towards the quoted bit about console manufacturers spending money to have better performance. Like I said in an earlier post, both console manufacturers could have chosen to do nothing but include fast SSDs and leave the decompression/check-in model as it has been on consoles and PC for decades. But they chose to spend money to have better performance.
On console:
It costs significantly more to do decompression on the GPU than to run it through the consoles' custom I/O controllers. As per Brit's notes, 1-2 TF of power can be siphoned from compute to run decompression at a high rate. But the challenge with consoles is bandwidth contention, plus the fact that neither GPU (PS5/XSX) really has much compute power to spare. If they wanted the GPUs to do the decompression without dedicated decompression hardware, the GPU would need to get slightly larger in compute and significantly larger in bandwidth, because the CPU and GPU are already using that bandwidth for game code and rendering, and now we'd be layering in decompression as a third consumer. The additional bandwidth needed to absorb that traffic, on top of the losses due to contention, would significantly increase the cost PER console.
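To put rough numbers on that, here's a back-of-envelope sketch: the TF, bandwidth, and SSD figures are the publicly stated console specs, but the read+write traffic model and the 1.5 TF midpoint are my own simplifying assumptions, and real contention penalties would land on top of these raw percentages.

```python
# Back-of-envelope cost of GPU-based decompression on console, using the
# 1-2 TF figure cited above (1.5 TF midpoint assumed). Console specs are the
# publicly stated ones; the traffic model is my own simplification.

consoles = {
    # name: (total TF, memory bandwidth GB/s, raw SSD GB/s, avg compression ratio)
    "PS5": (10.3, 448.0, 5.5, 1.6),  # ~8-9 GB/s typical decompressed output
    "XSX": (12.1, 560.0, 2.4, 2.0),  # 4.8 GB/s stated compressed throughput
}

decomp_tf = 1.5  # assumed midpoint of the 1-2 TF estimate

for name, (tf, bw, ssd, ratio) in consoles.items():
    out = ssd * ratio        # decompressed output rate (GB/s)
    traffic = ssd + out      # decompressor must read input AND write output
    print(f"{name}: {decomp_tf / tf:.0%} of compute siphoned, "
          f"{traffic:.1f} GB/s extra memory traffic "
          f"({traffic / bw:.1%} of raw bandwidth, before contention losses)")
```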
So as I was saying earlier: you noted that it takes time and money to make these decompression solutions; but as I noted earlier, if we assume $100M to make the custom I/O silicon and a fraction of a dollar to place it into each console, the actual cost of that I/O solution is about $1 per console if 100M consoles are sold. Increasing bandwidth to 50% more than it is now would cost way more than $1 per console. So from my perspective, the cheaper way to support high-speed, high-fidelity textures on console was what they built.
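Spelling out that amortization math (a minimal sketch: the $100M and 100M-unit figures are the assumptions from the paragraph above, while the $0.50 placement cost and the $20 memory uplift are purely hypothetical numbers for illustration):

```python
# Amortizing a one-time silicon cost over the install base, per the argument
# above. The $100M NRE and 100M units come from the post; the $0.50 placement
# cost and the $20 memory uplift are hypothetical illustration values.

nre_cost = 100_000_000        # one-time cost to design the custom I/O silicon ($)
consoles_sold = 100_000_000   # assumed lifetime install base
placement_cost = 0.50         # hypothetical marginal cost per console ($)

io_cost_per_console = nre_cost / consoles_sold + placement_cost
print(f"Custom I/O solution: ${io_cost_per_console:.2f} per console")

# The brute-force alternative (50% more memory bandwidth) is a recurring
# per-unit cost on every console built, not a one-time cost that amortizes away:
hypothetical_memory_uplift = 20.00
print(f"Hypothetical wider/faster memory: ${hypothetical_memory_uplift:.2f} per console")
```

The key asymmetry the sketch shows: design cost divides by units sold, while extra DRAM multiplies by units sold.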
IMO, if they had chosen to brute force it like on PC, they would have given the consoles both more compute and more bandwidth. That could result in larger wins, because when a game is not leveraging an insane amount of I/O decompression on the GPU, those compute and bandwidth resources can be put towards graphical rendering instead.
So this is where I'm at odds with the part you underlined: the brute force method provides overall better performance, at least from a flexible-resource perspective. So let's circle back to your statement here:

"both console manufacturers could have chosen to do nothing but include fast SSDs and leave the decompression/check-in model as it has been on consoles and PC for decades"

Quite frankly they didn't because it costs too much. My highlights below. The flash drive route was discussed to save DRAM costs. If you really want to save DRAM costs, you can't use the GPU to decompress the assets off the SSD, because that will take more DRAM or more expensive DRAM. The move to Flash and custom I/O controllers IMO is about saving money while still getting the performance that they want.