nooneyouknow said:
The reality for DX9 is this:
I can't speak for all developers, only for myself.
1 - Developers will be compiling their software, for late summer / xmas 2003 shipping titles.
Yep we have a title in said range.
DX9 support is a sure thing, I've already ported the engine to DX9 - took about half a day.
Vertex Shaders are much better in DX9 even if you use 1.1 only, so it's definitely worth it.
2 - Developers for the most part will NOT have PS 2.0 shaders. The reason is that developers really do not know what to do that fully uses PS 1.4, so it's even harder for them to imagine what to do with 2.0. This makes sense because of the huge install base of ATI DX8.1 parts AND the growing install base of DX9 parts.
The one sure thing we'll have is a PS 2.0 based Shadow Buffer implementation.
That requires rewriting every pixel combiner case in PS 2.0.
I agree that writing PS 2.0-only effects has very little benefit for the user base, especially as it'll be an RTS game, not an FPS.
3 - In saying the above, there WILL be cases where the developer will write a 2.0 Shader that does the exact same as the PS 1.4 version but I really do not see that. Now, if they can go beyond PS 1.4 functionality, then of course they will do PS 2.0.
Implementing exactly the same functionality makes sense when PS 2.0 offers higher precision or speed than the 1.4 version.
We are using PS 1.1 shaders for things that could be done on DX7 hardware, because the pixel shader implementation is much faster.
This will continue, but our minimum requirement is still DX7 hardware, so we cannot rely on too many things that are only possible with pixel shaders.
4 - All the above statements are made with the assumption of developers' limited shader knowledge AND their increasingly shorter development cycles. The grim reality of the current state of the industry. This will improve, knowledge-wise, but not this year.
5 - If ATI and NVIDIA help write 2.0 Shaders, then we will see more REAL DX9 titles this upcoming year, and I really believe that will happen. It has to.
I think you completely missed the point this time!
Prototyping a shader requires a surprisingly short time. It involves a programmer (writing the shader) and an artist (creating some demonstration content). I've never seen it take longer than a few days!
Properly implementing the shader is of course longer work: you have to handle multiple cases and multiple hardware targets, it has to cooperate with the rest of the engine, and there's a lot of optimization to do.
But this is almost never the bottleneck! The workload impact on the artists is always much larger than the impact on the programmers.
For example, we prototyped per-pixel lighting (with bump-mapping).
We considered including it on most objects.
But:
1. It multiplies the work an artist has to spend on creating an object.
2. It can only be justified if the result is so much better that it's worth the extra work.
3. "Worth" actually means you can convince the investors that it will be a selling point, so they invest several times the money for it.
4. Given that that's not going to happen, the effect will likely be used on only a few objects.
5. But then those few objects would likely look out of place.
6. The effort of implementation (programming) is likely not justified in this case.
7. So there goes the feature.
Note that it had nothing to do with:
- The programmers' knowledge
- The amount of help ATI or nVidia provided
So fancy things end up limited to special effects, since those have less strict requirements.
Just my 0.02 HUF