EasyRaider
Regular
Longer and more math-heavy shaders will reduce the need for bandwidth, but I expect the move to FP16 back buffers will more or less cancel that out in the short term.
The general consensus among the more educated people seems to be that we won't see a 512-bit bus for a long time. But what about 256-bit XDR? That would give >100 GB/s of external bandwidth... A nice surprise from NVidia, perhaps?
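A quick back-of-the-envelope check of that figure, assuming XDR at its initial 3.2 GHz effective data rate per pin (the pairing with a 256-bit GPU interface is purely my speculation, not an announced product):

```python
# Hypothetical 256-bit XDR bandwidth estimate.
# Assumptions: 3.2 GHz effective transfer rate per pin (early XDR spec)
# and a 256-bit wide interface. Both numbers are illustrative.
bus_width_bits = 256
data_rate_gtps = 3.2  # billions of transfers per second, per pin

bandwidth_gb_s = (bus_width_bits / 8) * data_rate_gtps
print(f"{bandwidth_gb_s:.1f} GB/s")  # -> 102.4 GB/s, i.e. just over 100 GB/s
```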
On a related note, I am thinking that a tile-based deferred renderer would make more sense now than ever. Soon we will want to play games with an FP16 back buffer and 4x AA or higher. On-chip depth buffer, on-chip FP16 blending, bandwidth-free AA: the bandwidth savings are gigantic, and any improvements in overdraw efficiency or depth/stencil fillrate are just a bonus. Or is bandwidth usage for textures considerably larger than for the frame buffer? I don't think that's the case (assuming most textures are compressed).
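To put rough numbers on the frame-buffer side, here is a sketch comparing per-frame color/depth traffic for an immediate-mode renderer against a TBDR. The resolution, overdraw factor, and frame rate are all assumptions of mine for illustration, and I'm ignoring depth/color compression, which narrows the gap somewhat on real immediate-mode hardware:

```python
# Back-of-the-envelope frame buffer traffic, immediate-mode vs tile-based deferred.
# All parameters below are illustrative assumptions, not measured numbers.
width, height = 1600, 1200
bytes_color   = 8      # FP16 RGBA = 4 channels * 2 bytes
bytes_depth   = 4      # 32-bit depth/stencil
samples       = 4      # 4x multisampling
overdraw      = 3.0    # average shaded depth complexity
fps           = 60

pixels = width * height

# Immediate-mode: every sample of every overdrawn pixel hits external memory
# for a depth read + write and a color write.
imr_bytes = pixels * samples * overdraw * (2 * bytes_depth + bytes_color)

# TBDR: depth and multisampled color stay on chip; only the final resolved
# FP16 color buffer is written out once per frame.
tbdr_bytes = pixels * bytes_color

print(f"IMR : {imr_bytes * fps / 1e9:5.1f} GB/s")   # ~22.1 GB/s
print(f"TBDR: {tbdr_bytes * fps / 1e9:5.1f} GB/s")  # ~ 0.9 GB/s
```

Even granting compression a big discount on the immediate-mode number, the frame-buffer traffic a TBDR avoids is on the order of the entire external bandwidth budget of current cards.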