Hyp-X said:
IMHO, it's not the programming model, it's the old cards that we have to get rid of.
I couldn't agree more... but I also think it comes down to performance vs. application as well.
Say you are going to have a fairly large outdoor scene with rolling, lush hills of grass. For the scene not to look flat and uninviting, you are going to want to "frost the peaks" of the grass, so to speak: give them some variation in color depending on the altitude and curvature of the hills.
You can either-
a) use traditional DX6-style MT, with about 3 layers.
b) use a rather simple shader
The problem with b) is that you have just removed all the Radeon 7500 and GF-MX owners from your target sales base, and moreover, the GF3+/8500+ owners may see reduced performance. It gets even deeper when you take different architectures into account: what works well in one pass on one platform may require multiple passes on another, so the relative performance of the two approaches can end up looking very different on the platform you are developing on compared to what your user base is actually running.
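Just for reference, here is roughly what option a) looks like through the D3D9 fixed-function pipeline. This is only a minimal sketch: it assumes an already-initialized device and three pre-loaded textures, and the function and texture names are purely illustrative.

```cpp
#include <d3d9.h>

// Option a): a DX6-style three-layer texture cascade set up through
// fixed-function texture stage states. Cards with only two texture
// units would have to split this across multiple passes.
void SetupGrassCascade(IDirect3DDevice9* dev,
                       IDirect3DTexture9* grassDetail,   // base tiling grass texture
                       IDirect3DTexture9* altitudeTint,  // "frosted peaks" tint map
                       IDirect3DTexture9* lightMap)      // baked lighting
{
    // Stage 0: start with the base grass texture.
    dev->SetTexture(0, grassDetail);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

    // Stage 1: modulate in the altitude-based tint.
    dev->SetTexture(1, altitudeTint);
    dev->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);

    // Stage 2: modulate by the lightmap.
    dev->SetTexture(2, lightMap);
    dev->SetTextureStageState(2, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(2, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(2, D3DTSS_COLORARG2, D3DTA_CURRENT);

    // Terminate the cascade after stage 2.
    dev->SetTextureStageState(3, D3DTSS_COLOROP, D3DTOP_DISABLE);
}
```

Note that the tint layer here is itself a texture, which is exactly where the per-scene artwork cost comes from.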
The shader solution would also likely require less schedule time. After all, the more scenes you have, the more texture artistry you need to create to fit each particular scene and make it look realistic. Simple shaders, on the other hand, can be reused wholesale across "like" scenes, since the effect is the same; there is no need to produce another 15-18 new textures per scene.
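For comparison, option b) can be as small as a few lines of HLSL, here embedded as a C++ string that would be compiled with something like D3DXCompileShader against a ps_2_0 target. Again just a sketch: the sampler and constant names, and the idea of passing a normalized altitude down in a texcoord, are my own assumptions, not any particular engine's setup.

```cpp
// Option b): the same kind of tinting as one simple pixel shader.
// Because the tint is computed from altitude instead of being painted
// into a per-scene map, the same shader (with two color constants)
// can be reused across any "like" scene.
const char* g_grassPixelShader =
    "sampler grassDetail : register(s0);                           \n"
    "sampler lightMap    : register(s1);                           \n"
    "float4  lowTint     : register(c0);   // lush valley color    \n"
    "float4  highTint    : register(c1);   // frosted peak color   \n"
    "                                                              \n"
    "float4 main(float2 uv     : TEXCOORD0,                        \n"
    "            float2 hillUV : TEXCOORD1,                        \n"
    "            float  height : TEXCOORD2) : COLOR                \n"
    "{                                                             \n"
    "    float4 base = tex2D(grassDetail, uv);                     \n"
    "    float4 tint = lerp(lowTint, highTint, saturate(height));  \n"
    "    return base * tint * tex2D(lightMap, hillUV);             \n"
    "}\n";
```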
So, the way I look at it, two things need to happen:
1) More "budget" lines of DX9+ cards need to be released to help phase out older hardware. ATI and NVIDIA are both solid in this committment so we are in good shape here.
2) These "budget" lines of cards need to really improve in shader performance so as to be applicable for developers. After all, a shader that runs fairly well on a 9800 Pro or 5900 may not even be playable on a GF 5200 or Radeon 9600.