Pete said: Yep, because that's exactly what I believe happened. I believe ATi had engineers dedicated to too many other projects (Xbox 2, Gamecube 2) and was sitting on an already class-leading architecture, so they stuck with SM2.0 for two reasons: one, because they didn't have enough resources to release a SM3.0 part on time, and two, because they wanted to cut the legs out from under SM3.0 by limiting their parts to 2.0, forcing the market to aim for the lowest common denominator and thus rendering SM3.0 mostly moot for this generation.
I'm not sure why my speculation is less valid than yours, though. You're sure ATi didn't see any benefit to SM3.0? Did they see a benefit to hiding trylinear, or to releasing the 8500 only to see it eclipsed by the GF4 in a matter of weeks? ATi is not all-powerful; I believe their hand was somewhat forced, and they played it as well as they could at the time (which is pretty well, considering they held the position of power with regard to mindshare). The fact is that we have gone from ATi being a tech generation ahead in features with the 8500 to nVidia being a tech generation ahead with the 6800. Whether things will play out the same way as 8500->9700 and GF4->GF FX is hard to say, particularly with the seemingly increasing process limitations. But I don't think you can say ATi had the power to implement SM3.0 and chose not to solely based on profit margins. They'd have set themselves up to make even more money if they'd kept the tech lead for two generations in a row, rather than ceding it to nVidia after just one generation on top.
You're just speculating. We don't know whether they were actually low on manpower or not. A cost/benefit decision seems much more likely at this stage. I also don't agree that nVidia is a tech generation ahead with the introduction of the 6800. As I said, SM3 is more of a small update to SM2 than anything else, imo. We won't be seeing anything truly spectacular from it like we have with SM2 and will with SM4.
Radar1200gs said: Why would you want to? Like most other people I've spent a lot of time, money and effort getting as far away from low refresh rates as it's possible to get. I have no intention of returning anytime soon.
While this may not apply to graphics-intensive games like Far Cry, it is a good option to have for most other games. Call of Duty, for instance, gets around 250 fps on an X800, which is more than enough to meet temporal AA's requirements, even with high refresh rates.
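To sketch why the frame rate matters at all (just an illustration of the idea, not ATI's actual driver logic, and the sample positions below are made up): temporal AA alternates the AA sample pattern from frame to frame, so the trick only holds up while the game delivers a new frame for every refresh. Drop below the refresh rate and the alternation reads as flicker, so the sensible fallback is an ordinary fixed pattern.

// Illustrative sketch only. Temporal AA alternates between two 2x sample
// patterns on successive frames; if the frame rate falls below the refresh
// rate, the alternation would show up as flicker, so fall back to a fixed
// 2x pattern instead of the "looks like 4x" alternating mode.
#include <cstdio>

struct SamplePattern { float x[2], y[2]; };   // two subpixel offsets

// Hypothetical sample positions, purely for demonstration.
const SamplePattern PATTERN_A = { { 0.25f, 0.75f }, { 0.75f, 0.25f } };
const SamplePattern PATTERN_B = { { 0.75f, 0.25f }, { 0.25f, 0.75f } };

SamplePattern pickPattern(int frameIndex, float fps, float refreshHz)
{
    if (fps < refreshHz)                               // can't sustain the alternation:
        return PATTERN_A;                              // stick with a fixed 2x pattern
    return (frameIndex & 1) ? PATTERN_B : PATTERN_A;   // alternate -> perceived ~4x
}

int main()
{
    // Call of Duty at ~250 fps on a 100 Hz monitor: the alternation holds.
    SamplePattern p = pickPattern(1, 250.0f, 100.0f);
    std::printf("first offset: (%.2f, %.2f)\n", p.x[0], p.y[0]);
}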
SM3.0 offers (almost) everything ATi brought to the table with R420 pixel shader and vertex shader wise, and more (considerably more on the vertex side) when it comes to flow control and looping. Isn't it funny how nVidia picked up some of ATi's OpenGL extensions with no fuss, yet ATi can't bring themselves to just use SM2.0a; no, they have to have 2.0b, just to be different from nVidia.
ATI did a better job of implementing it with realtime performance in mind. nVidia's idea was just to support content creation, where framerates don't matter. And maybe ATI didn't use SM2.0a because their hardware is incapable of supporting all of its features?
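For anyone wondering what actually separates the 2.0a, 2.0b and 3.0 profiles in practice, here is a minimal sketch of how an application can tell the tiers apart through the standard Direct3D 9 caps structure. The caps fields and flag names are real D3D9; the classification thresholds and the tier labels are my own simplification, and it assumes you already have a created IDirect3D9 interface.

// Minimal sketch: distinguish ps_3_0-class, ps_2_a-style and ps_2_b-style
// hardware from the D3D9 device caps. Thresholds are illustrative.
#include <d3d9.h>
#include <cstdio>

void reportPixelShaderTier(IDirect3D9* d3d)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return;

    if (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0)) {
        std::printf("ps_3_0 class hardware (dynamic flow control in the pixel shader)\n");
    } else if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0)) {
        const D3DPSHADERCAPS2_0& ps = caps.PS20Caps;
        // The 2.0a profile asks for extras such as predication and gradient
        // instructions; 2.0b mainly raises instruction slots and temp registers.
        bool looks2a = (ps.Caps & D3DPS20CAPS_PREDICATION) &&
                       (ps.Caps & D3DPS20CAPS_GRADIENTINSTRUCTIONS);
        bool looks2b = ps.NumInstructionSlots >= 512 && ps.NumTemps >= 32;
        std::printf("ps_2_0 class; 2_a-style caps: %d, 2_b-style caps: %d\n",
                    looks2a, looks2b);
    }
}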
FP16 is good enough for George Lucas and ILM. We still aren't anywhere near cinema-quality games yet. And the consumer certainly hasn't benefited from the high full-precision requirement in DX9; in fact, this requirement has arguably delayed the mass uptake of DX9-featured games by two years.
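To put rough numbers on what the precision argument is about (my own illustration, not from the thread): relative precision of a float format is roughly 2 to the power of minus the mantissa width, so FP16 (the s10e5 "half" ILM standardised in OpenEXR) resolves about 1 part in 1024, ATi's FP24 (s16e7, the DX9 full-precision minimum) about 1 in 65536, and IEEE FP32 (s23e8) about 1 in 8 million.

// Rough precision comparison of the formats argued about above.
#include <cmath>
#include <cstdio>

int main()
{
    const int   mantissaBits[] = { 10, 16, 23 };
    const char* names[]        = { "FP16 (s10e5)", "FP24 (s16e7)", "FP32 (s23e8)" };
    for (int i = 0; i < 3; ++i)
        std::printf("%s: relative precision ~ %.3g\n",
                    names[i], std::ldexp(1.0, -mantissaBits[i]));
}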
Better precision at playable rates did not delay DX9 uptake; you can blame nVidia and its NV3x for that.
Don't worry, ATi will be seeing the benefits of SM3.0 all too clearly real soon now...
Yeah, you keep telling yourself that.