Does anyone think that ATI's and/or NVidia's color compression is anything more complex than the obvious trick of storing a flag for each pixel that says whether all of the subsamples are identical?
To do 4:1 (best case) lossless compression that can be efficiently randomly accessed, I would bet that they don't even use something as trivial as run-length encoding, but simply optimize multisample reads and writes with a flag bit.
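To make concrete what I mean, here is a minimal software model of that flag-bit scheme. The struct layout and names are purely illustrative (real hardware would presumably keep the flags in a separate tile-granular surface rather than inline with the pixel), but it shows why the compressed case needs only a quarter of the color traffic while staying random-access:

```c
#include <stdint.h>
#include <stdbool.h>

#define SAMPLES 4  /* 4x multisampling */

/* One framebuffer pixel: up to 4 subsample colors plus a flag that
   says "all subsamples are identical" (the speculated compression). */
typedef struct {
    uint32_t samples[SAMPLES]; /* RGBA8, packed */
    bool     all_same;         /* compression flag */
} Pixel;

/* Write path: a fully covered pixel stores one color and sets the
   flag, so only 1/4 of the color data hits memory (best case). */
static void pixel_write(Pixel *p, const uint32_t colors[SAMPLES])
{
    p->all_same = true;
    for (int i = 1; i < SAMPLES; i++) {
        if (colors[i] != colors[0]) { p->all_same = false; break; }
    }

    if (p->all_same) {
        p->samples[0] = colors[0];       /* single write */
    } else {
        for (int i = 0; i < SAMPLES; i++)
            p->samples[i] = colors[i];   /* full 4-sample write */
    }
}

/* Read path: the flag lets a compressed pixel be fetched with a
   single read and fanned out to all samples, which is what keeps
   the scheme efficiently random-accessible. */
static uint32_t pixel_read(const Pixel *p, int sample)
{
    return p->all_same ? p->samples[0] : p->samples[sample];
}
```

Note that edge pixels (partially covered by triangles) fall back to the uncompressed path, which is why 4:1 is only the best case for interior pixels.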
Does anyone have any information to the contrary? Does the compression actually work when multisampling FSAA is turned off?