Stimulated by SA's post about IMR efficiency, I was thinking about how the NV30 could perform as fast as an R300 even while adopting just a 128-bit data bus.
IMHO nvidia will employ some really smart and efficient compression schemes for the frame and z/stencil buffers, plus an improved version of their memory controller (their compression engine actually sits in the memory controller).
Then I asked myself what the next move in the war to improve rendering efficiency might be.
Maybe this was discussed in the past, but I'm not sure at all.
So... what about the hw extracting information from a sequence of frames and re-using that info on the next frame to improve performance?
I know this is not a novel idea... my question revolves around what kind of info the hw could extract and re-use.
What about keeping track (using internal performance counters) of some functional parameters the hw could change, frame by frame, according to the nature of the data the GPU is processing?
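To make the idea concrete, here is a minimal software sketch of that kind of frame-by-frame feedback loop. Everything here is hypothetical: the counter names (`cache_hits`, `cache_accesses`), the tuned parameter (`prefetch_depth`), and the thresholds are all made up for illustration; real hardware would do this in the memory controller, not in a driver.

```python
# Hypothetical sketch: read last frame's performance counters,
# retune one functional parameter for the next frame.
# All names and thresholds are invented for illustration only.

class FrameFeedback:
    def __init__(self):
        self.prefetch_depth = 4  # tunable parameter, starts at a default

    def end_of_frame(self, counters):
        """Inspect the finished frame's counters and retune for the next one."""
        hit_rate = counters["cache_hits"] / max(1, counters["cache_accesses"])
        if hit_rate < 0.5:
            # Poor locality last frame: fetch deeper ahead next frame.
            self.prefetch_depth = min(16, self.prefetch_depth * 2)
        elif hit_rate > 0.9:
            # Excellent locality: back off to save bandwidth.
            self.prefetch_depth = max(1, self.prefetch_depth // 2)
        return self.prefetch_depth


fb = FrameFeedback()
fb.end_of_frame({"cache_hits": 30, "cache_accesses": 100})  # depth grows
fb.end_of_frame({"cache_hits": 95, "cache_accesses": 100})  # depth shrinks
```

The point is only the shape of the loop: measure one frame, exploit frame-to-frame coherence by assuming the next frame will behave similarly, and adjust.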
A more advanced technique could track screen areas and try to differentiate and predict the work in each given area based on an already-rendered sequence of frames. It could use anything from memory access patterns and statistics on buffer compression to occlusion information (what about tracking occluder movements...).
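The per-area tracking above could be sketched like this: keep a short history of some per-tile statistic across frames and use it as the prediction for the next frame. Again, this is purely illustrative; the tile size, the statistic (overdraw), and the averaging predictor are all assumptions, not anyone's actual design.

```python
# Hypothetical sketch: per-screen-tile, cross-frame work prediction.
# Keep a few frames of history of a per-tile statistic (here, overdraw)
# and predict next frame's value for each tile from that history.

from collections import deque

TILE_HISTORY = 3  # frames of history to keep per tile (arbitrary choice)

class TilePredictor:
    def __init__(self, tiles_x, tiles_y):
        # One bounded history per screen tile.
        self.history = [[deque(maxlen=TILE_HISTORY) for _ in range(tiles_x)]
                        for _ in range(tiles_y)]

    def record_frame(self, overdraw_grid):
        """Store the measured overdraw of each tile for the finished frame."""
        for y, row in enumerate(overdraw_grid):
            for x, value in enumerate(row):
                self.history[y][x].append(value)

    def predicted_overdraw(self, x, y):
        """Naive prediction: average of the last few frames for this tile."""
        h = self.history[y][x]
        return sum(h) / len(h) if h else 0.0
```

A scheduler could then, for example, give tiles with consistently high predicted overdraw more aggressive occlusion testing, or schedule likely-cheap tiles first.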
Speculative rendering?
I'd love some comments...
ciao,
Marco