Mintmaster
Veteran
One thing I find interesting is that we seem to be reaching a feature saturation point in 3D graphics. ATI went two generations on the R300 architecture, NVidia went two with NV4x, and I could easily see three or four with DX10 hardware. Software has a lot of room for improvement with current hardware, let alone the future.
Could longer GPU design cycles be more feasible? The insanely parallel nature of graphics makes it quite amenable to copy and paste for the vast majority of the die. Get one scalable unit right, and you're good for a long time. I think 2-3 years could be enough to tweak the logic for a shader array to run at 3 GHz+, especially with full access to a fab of AMD's calibre.
The possibilities...