WaltC said:
Why do you need cube maps anyway?
GOOD question! It's the same thing I asked back in 3dfx days when nVidia introduced it...
The only answer I ever got was "Because nVidia says we do." Now that nVidia's dropping support, I guess we don't, right?
No, cube maps WERE needed when hardware was fixed-function. Now that hardware is flexible enough to completely control texture coordinates and how the lookup is performed, they are no longer needed as a "builtin" feature, the same way EMBM isn't needed.
NVidia's statement is that they optimized for 2D (which I would argue is the MOST useful).
Interesting that they (nVidia) didn't hesitate to introduce cube mapping as a 3D feature years ago. Ditto Fast Write support. I agree with you, though, that cube mapping was never a 3D feature that anyone supported to any great degree.
Cube mapping is very useful for doing things like per-pixel normalization. It was definitely a needed feature, which is why most vendors implemented it and why it is in use today.
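To make the per-pixel normalization trick concrete, here's a toy Python sketch (not real shader code; the single +X face, the 64x64 resolution, and the function names are my own illustration). A normalization cube map stores in each texel the unit vector pointing through that texel, so one dependent texture fetch returns roughly v/|v| without a per-pixel sqrt and divide:

```python
import math

RES = 64  # hypothetical per-face resolution, chosen for illustration

def build_pos_x_face(res=RES):
    """Build the +X face of a normalization cube map: each texel
    stores the unit vector pointing through that texel's centre."""
    face = []
    for j in range(res):
        row = []
        for i in range(res):
            # map texel centre into [-1, 1] face coordinates
            s = 2.0 * (i + 0.5) / res - 1.0
            t = 2.0 * (j + 0.5) / res - 1.0
            x, y, z = 1.0, -t, -s          # direction through the +X face
            inv_len = 1.0 / math.sqrt(x * x + y * y + z * z)
            row.append((x * inv_len, y * inv_len, z * inv_len))
        face.append(row)
    return face

def lookup_pos_x(face, v):
    """Fetch the stored unit vector for a direction whose major
    axis is +X (real hardware also selects the face by major axis)."""
    x, y, z = v
    s, t = -z / x, -y / x
    res = len(face)
    i = min(int((s + 1.0) * 0.5 * res), res - 1)
    j = min(int((t + 1.0) * 0.5 * res), res - 1)
    return face[j][i]
```

With the 64x64 face above, the fetched vector matches the exact normalized vector to within roughly one texel's worth of error; the hardware version does one filtered cube-map fetch per pixel instead of math ops.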
Maybe there's something basic here I've missed, but it seems as though for weeks now all I've been hearing is that the nv30 FP pipeline is "better" than the FP support in the Radeon. Well, this looks like a clear instance of that not being the case--apparently ATI was able to build in the complexity that nVidia wasn't able to, and ATI is supporting cube mapping (whether it gets used much or not--just like nVidia did). I think it stands on its own without any need to embellish it out of proportion.
Who's been saying that the NV30 pipeline is "better"? I have been arguing non-stop since the debut of the R300 that both of these cards are killers and that the intersection of their common features is awesome. I have also argued in the past that arbitrary swizzle can be emulated easily, and that the 160-instruction limit of the R300's pixel shaders is "effectively infinite" for games. 1024-instruction pixel shaders would run far, far too slow, and I think you will find that most shaders will come in under 30 instructions.
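For what it's worth, the swizzle-emulation point is easy to see: on hardware with only limited source swizzles, an arbitrary swizzle like .zxyw falls out of a few write-masked MOVs. A toy Python sketch of the idea (the register-as-list and pattern string are my own illustration, not any real shader ISA):

```python
def emulated_swizzle(reg, pattern):
    """Emulate r1 = r0.zxyw on hardware without arbitrary swizzle:
    one write-masked MOV per destination component, each reading a
    single (replicated) source component."""
    comp = {'x': 0, 'y': 1, 'z': 2, 'w': 3}
    out = [0.0] * 4
    for dst, src in enumerate(pattern):
        out[dst] = reg[comp[src]]  # one masked MOV
    return out
```

So it's four MOVs at worst, and often fewer once a compiler folds them into neighbouring instructions.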
If you read my posts from over one year ago I consistently said "both ATI and NVidia's next gen cards will go beyond DX9 features, and both of them will have enhanced features that the other doesn't have"
I'm getting annoyed at all the nitpicking going on here. To me, DirectX9 is such a monumental advancement in the state of the art that I don't really care about pixel or vertex shaders 3.0 right now. IMHO, it will take devs another 2-3 years to figure out all the tricks and stuff they can do with VS/PS2.0 and DX9 features.
Arguing over swizzling, instruction length limits, supported FP formats, etc. at this point seems like splitting hairs on a gorilla who has a million of them. DX9 has so many damn cool new possibilities that the lack of any one of the "beyond DX9" features is basically irrelevant for consumers.