So how does this magical compression pipeline work?
Today:
We take our standard textures, DXT1/BC1-5, which are block compressed so that the GPU can retrieve any block/tile it wants on demand. The block is then decompressed on the GPU and sent on for processing.
This works well with streaming virtual texturing.
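To make the "random access" part concrete, here's a minimal sketch (plain C, my own illustration, not from any SDK) of why BC-format data supports it: every 4x4 texel block compresses to a fixed size (8 bytes for BC1/BC4, 16 bytes for BC2/BC3/BC5), so finding any block is pure arithmetic with no decompression involved.

```c
#include <stddef.h>
#include <stdint.h>

/* Byte offset of the BC1 block containing texel (x, y).
 * Fixed block size means any tile can be fetched directly from disk. */
size_t bc1_block_offset(uint32_t tex_width, uint32_t x, uint32_t y)
{
    const size_t BC1_BLOCK_BYTES = 8;            /* one 4x4 block -> 8 bytes */
    size_t blocks_per_row = (tex_width + 3) / 4; /* width rounded up to whole blocks */
    size_t block_x = x / 4;
    size_t block_y = y / 4;
    return (block_y * blocks_per_row + block_x) * BC1_BLOCK_BYTES;
}
```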
Future?
We take our block compressed textures, from which the GPU can randomly access any tile/portion straight off the HDD. We then losslessly compress the entire texture to save space... (Kraken, as I understand it, is not a block compressor; BCPack I have no clue about). Then what?
Do we have to send the entire texture to the GPU, decompress all of it, and only then grab the tiles/blocks we want from it?
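That's what compressing the whole texture as one blob would seem to imply; here's a rough sketch of that worst case (the kraken_decompress call is a hypothetical stand-in, not any real Oodle/BCPack API):

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical decompressor entry point, for illustration only. */
extern size_t kraken_decompress(const void *src, size_t src_size,
                                void *dst, size_t dst_capacity);

/* With one big lossless blob you can't seek to a tile inside the
 * compressed stream: reading a single 8-byte BC1 block means
 * inflating the whole texture first. */
void fetch_one_block(const void *compressed, size_t compressed_size,
                     size_t full_texture_size, size_t block_offset,
                     void *out_block /* 8 bytes for BC1 */)
{
    void *scratch = malloc(full_texture_size);   /* whole texture, not one tile */
    kraken_decompress(compressed, compressed_size, scratch, full_texture_size);
    memcpy(out_block, (const char *)scratch + block_offset, 8);
    free(scratch);
}
```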
The ideal case would be that you could send just the portions covering the blocks/tiles you need, still compressed in Kraken/BCPack, and the GPU then does its normal block decompression.
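One common way to get that behaviour (and roughly what I'd expect a streaming-friendly format to do, though I can't speak for BCPack's internals) is to compress the texture in tile-sized chunks and keep an offset table, so only the chunks covering the tiles you need ever leave the disk. A rough sketch, with the chunk layout and the decompress/read calls all being assumptions of mine:

```c
#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint32_t compressed_offset;   /* where this tile's chunk starts in the file */
    uint32_t compressed_size;     /* size of the chunk on disk */
    uint32_t uncompressed_size;   /* size of the BC block data it expands to */
} TileChunk;

typedef struct {
    uint32_t   tile_count;
    TileChunk *chunks;            /* one entry per tile, read from a header */
} ChunkedTexture;

/* Hypothetical codec entry point (stand-in for Kraken/BCPack). */
extern size_t decompress_chunk(const void *src, size_t src_size,
                               void *dst, size_t dst_capacity);

/* Assumed helper: seek + read a byte range from the source file. */
extern void read_file_range(const char *path, size_t offset, size_t size, void *dst);

/* Fetch and expand exactly one tile; the rest of the texture never
 * leaves the disk. Assumes each compressed chunk fits the staging buffer. */
void stream_tile(const char *path, const ChunkedTexture *tex,
                 uint32_t tile_index, void *dst_bc_blocks)
{
    const TileChunk *c = &tex->chunks[tile_index];
    char staging[64 * 1024];

    read_file_range(path, c->compressed_offset, c->compressed_size, staging);
    decompress_chunk(staging, c->compressed_size,
                     dst_bc_blocks, c->uncompressed_size);
    /* dst_bc_blocks now holds ready-to-use BC1-5 blocks for just this tile;
       the GPU decodes those blocks in hardware as usual. */
}
```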