Two thoughts on why 512MB of RAM:
1) A better texture and shader memory management system. ATi has said their next architecture will implement a virtual memory system, similar to the way an operating system implements paged memory, to improve performance by keeping the in-use code and data blocks resident (assuming nested code) rather than swapping an entire program in and out of RAM. I remember ATi saying that rather than load all the textures for a scene, they will focus on loading what's needed as, or just before, it's needed. More memory simply means more cache hits and fewer cache misses!
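Something like the toy sketch below is what I imagine (purely my own illustration, not ATi's actual scheme: a fixed "VRAM" budget with least-recently-used eviction, and all class and function names invented):

#include <cstdint>
#include <iostream>
#include <list>
#include <unordered_map>

// Paging-style texture residency: textures are pulled into a fixed memory
// budget on demand and the least-recently-used ones are evicted, instead of
// loading every texture in the scene up front.
class TextureCache {
public:
    explicit TextureCache(std::size_t budgetBytes) : budget_(budgetBytes) {}

    // Returns true on a cache hit, false when the texture had to be paged in.
    bool request(int textureId, std::size_t sizeBytes) {
        auto it = index_.find(textureId);
        if (it != index_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);  // hit: move to front
            return true;
        }
        // Miss: evict least-recently-used textures until the new one fits.
        while (used_ + sizeBytes > budget_ && !lru_.empty()) {
            used_ -= lru_.back().size;
            index_.erase(lru_.back().id);
            lru_.pop_back();
        }
        lru_.push_front({textureId, sizeBytes});
        index_[textureId] = lru_.begin();
        used_ += sizeBytes;
        return false;
    }

private:
    struct Entry { int id; std::size_t size; };
    std::size_t budget_;
    std::size_t used_ = 0;
    std::list<Entry> lru_;
    std::unordered_map<int, std::list<Entry>::iterator> index_;
};

int main() {
    TextureCache cache(256);                     // pretend we have 256 units of VRAM
    std::cout << cache.request(1, 100) << '\n';  // 0: miss, paged in
    std::cout << cache.request(2, 100) << '\n';  // 0: miss, paged in
    std::cout << cache.request(1, 100) << '\n';  // 1: hit
    std::cout << cache.request(3, 100) << '\n';  // 0: miss, evicts texture 2
}

The bigger the budget, the less often request() has to evict and re-load, which is exactly the "more cache hits, fewer cache misses" argument.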
2) Indirectly, speed. When you design an algorithm that you want to run very fast, you have a few variables to consider:
i) simplify the results - approximate the real function, or reduce the precision of the data as far as is acceptable
ii) spread the work across multiple processors, so long as the inter-process communication stays manageable
iii) restructure your algorithm to trade extra memory for less work, at the cost of code complexity
This last one is possibly the interesting one. There was (or is) a field of parallel computing that looked at speeding up everything from operations as basic as addition or sorting through to quite complex problems by trading memory for algorithm complexity. If you have both a lot of processors and a lot of memory, you can redesign some algorithms to consume more resources and run a lot quicker. Perhaps this approach could be considered one day for 3D graphics? I'm not sure whether that sub-field of parallel processing research blossomed or died.
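A classic toy version of that trade (again my own illustration, nothing to do with any GPU vendor) is a prefix-sum table: spend O(n) extra memory once, and every range sum afterwards costs O(1) instead of re-adding up to n numbers. Prefix sums (scans) also happen to be one of the textbook parallel primitives.

#include <cstddef>
#include <iostream>
#include <vector>

// Trade memory for speed: precompute running totals so any range sum is a
// single subtraction, instead of looping over the range on every query.
class RangeSums {
public:
    explicit RangeSums(const std::vector<int>& data)
        : prefix_(data.size() + 1, 0) {
        for (std::size_t i = 0; i < data.size(); ++i)
            prefix_[i + 1] = prefix_[i] + data[i];   // one O(n) pass up front
    }

    // Sum of data[lo..hi) in O(1).
    long long sum(std::size_t lo, std::size_t hi) const {
        return prefix_[hi] - prefix_[lo];
    }

private:
    std::vector<long long> prefix_;   // the extra memory we spend
};

int main() {
    RangeSums rs({3, 1, 4, 1, 5, 9, 2, 6});
    std::cout << rs.sum(0, 8) << '\n';   // 31: sum of the whole array
    std::cout << rs.sum(2, 5) << '\n';   // 10: 4 + 1 + 5
}

Scale the same idea up (precomputed tables, caches of intermediate results, copies of data laid out for each access pattern) and extra memory starts to look like a fairly direct way to buy speed, provided you have it.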