Vince said:
I'm in a state of shock. So we're going to forget how much IP reuse and building on established architectures there is in "New Architectures"?
What does that matter NOW?
Sometimes you are SO SILLY. Who CARES if, x years down the road, Nvidia's "sea of functional units" works better than ATI's dedicated traditional pipelines or whatever other approach they may have developed for an upcoming iteration of products, if the chips we have NOW clearly show that ATI's approach totally and completely KICKS Nvidia's ASS all over the place??? Your NV3x series will never exceed R3xx series performance at pixel shading. Never!
You are spouting absolute, utter NONSENSE here. If you base your purchase today on how you predict an architecture will look or perform in the future, you're crazy. Or at least half crazy, hehe.
Speaking of which... what state is the R400 in? heh.
The true answer is that nobody outside ATI knows for sure, so why speculate about it? They haven't announced anything publicly.
As for ATI going all-out FP24, I can't see how it's a smart idea. Read the developers B3D interviewed; they all want control over the precision.
Not all. You could argue a majority wants it, but then again you hardly have a statistically representative selection in that article, so it would mean nothing anyway. Those guys are mainly "enthusiast" coders aiming at "enthusiast" players. Ask people at more mainstream code shops like EA whether they're really that interested in special-casing rendering paths in their engines for quirky architectures like NV3x.
Besides, why would you need or want control if there is no speed difference from one format to the next? It just complicates things. The only reason you'd want control over the pixel format is that going flat-out full precision on current Nvidia products makes them run dog-slow.
It's not as if pixel formats are completely analogous to the integer sizes used in a microprocessor. They're not, and you should know that.
With PP in the API, why wouldn't you want total control so you can be more efficient? Why use FP24 everywhere if FX12 is acceptable?
Why not turn that silly argument on its head and ask: why go through the extra work of writing another shader just for the technically inferior FX12 format, when FP24 comes for free from a speed POV and doesn't require twice the effort? You know that DX9 requires FP24 minimum. FX12 is not compliant with the API spec.
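To put rough numbers on "technically inferior", here's a back-of-the-envelope sketch in Python. The bit layouts are my assumptions from the usual public chatter, not vendor specs: FX12 as 12-bit fixed point over [-2, 2), and 10/16/23 mantissa bits for FP16/FP24/FP32.

[code]
# Smallest representable step of each format near 1.0, next to the
# quantisation step of an ordinary 8-bit-per-channel framebuffer.
# Bit layouts are assumed, not taken from vendor docs.
formats = {
    "FX12": 4.0 / 2**12,  # fixed point: range width of [-2,2) / 2^12 codes
    "FP16": 2.0 ** -10,   # 10-bit mantissa
    "FP24": 2.0 ** -16,   # 16-bit mantissa
    "FP32": 2.0 ** -23,   # 23-bit mantissa
}
display_step = 1.0 / 255  # one 8-bit output step, ~3.9e-3

for name, step in sorted(formats.items(), key=lambda kv: -kv[1]):
    print(f"{name}: step ~{step:.1e}, "
          f"~{display_step / step:.0f}x finer than the 8-bit output")
[/code]

Under those assumptions FX12 and FP16 both sit only about 4x below what the monitor can even show, so there's basically no headroom once results feed into further math; FP24 has a couple of hundred steps of slack.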
Defending ATI's bad decision that yields them better performance at the developers' expense?
WHAT expense? You're calling FP24 at full speed, versus FP32 at half speed, "a bad decision"? You're stark raving mad if you seriously believe that. Nvidia is wasting transistors on features that break DX9 compatibility; you shouldn't be the one talking about bad decisions.
I think Tim Sweeney's comments are best: support legacy and transition via PP, then make the leap to full IEEE-32.
Sweeney's never really been up there with the big ones; he's said and done too many stupid things to really be considered an authority. It's totally unreasonable to expect an engine in the foreseeable future to REQUIRE FP32 internal precision. Shaders complex enough to give noticeable artefacting on FP24 hardware won't be used to any large extent for a long time, certainly not until ATI has developed FP32-capable hardware. Sweeney's certainly smoking some heavy weed if he thinks the next Unreal engine will look unacceptable on current R3xx FP24 hardware yet remain playable on NV3x FP32 hardware...
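If you want a feel for how long a shader has to get before round-off shows, here's a crude simulation (same assumed mantissa widths as above; the helper names are mine). It rounds after every dependent op and counts ops until the low-precision chain drifts a full 8-bit output step away from a double-precision reference. It models accumulated round-off only, not any real shader workload.

[code]
import math, random

def rnd(x, bits):
    # Round x to `bits` mantissa bits: a crude model of one ALU
    # result at that precision.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)          # x = m * 2**e, 0.5 <= |m| < 1
    s = 2.0 ** bits
    return math.ldexp(round(m * s) / s, e)

def ops_until_visible(bits, cap=50000, step=1.0 / 255, seed=7):
    # Accumulate tiny increments into a colour-sized value and count
    # dependent ops until the rounded chain differs from the exact
    # (double-precision) chain by one 8-bit output step.
    rng = random.Random(seed)
    exact = approx = 0.5
    for n in range(1, cap + 1):
        b = rng.uniform(-1e-4, 1e-4)
        exact += b
        approx = rnd(approx + rnd(b, bits), bits)
        if abs(exact - approx) >= step:
            return n
    return None  # no visible drift within `cap` ops

for name, bits in [("FP16", 10), ("FP24", 16), ("FP32", 23)]:
    n = ops_until_visible(bits)
    print(f"{name}: {n or 'no visible drift within 50000'} dependent ops")
[/code]

On a run like this FP16 drifts a visible amount within a few thousand dependent ops (the tiny increments simply get absorbed below its step size), while FP24 and FP32 never do within the 50,000-op cap. Today's pixel shaders are a few dozen instructions.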
A huge LOL @ you and Tim for that one, dude.
This is the best part. So taking a sub-standard with no industry support and running it through your entire pipeline, in a part that could very well fail to run any new apps in the post-2004 timeframe when IEEE-32 is the norm, is somehow better than making a transitional part that can run the widely supported legacy standards quickly, transition through to FP via DX9+'s PP, and then run IEEE-32 in the future?
What are you talking about? Again you make no sense. ATI hardware supports full FP32 frame buffers; no software will fail to run on it. The internal pipes calculate with 24-bit precision, yeah, but since that is within API spec it hardly matters. It's totally invisible to software anyway; software only sees the output, which is FP32.
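The invisible-to-software part is easy to convince yourself of: anything FP24 can represent (assuming the 16-bit mantissa from above) also exists exactly in IEEE FP32 with its 23-bit mantissa and wider exponent range, so an FP24 result written to an FP32 buffer comes out bit-exact. A quick sanity check, ignoring exponent-range clamping (the helper names are mine):

[code]
import math, random, struct

def to_fp24(x):
    # Quantise x to an assumed FP24 layout: 16 mantissa bits.
    if x == 0.0:
        return 0.0
    m, e = math.frexp(x)
    s = 2.0 ** 16
    return math.ldexp(round(m * s) / s, e)

def fp32_store(x):
    # Round-trip through a 4-byte IEEE single, like a write to an
    # FP32 framebuffer.
    return struct.unpack("<f", struct.pack("<f", x))[0]

rng = random.Random(0)
vals = [to_fp24(rng.uniform(-64.0, 64.0)) for _ in range(100000)]
assert all(fp32_store(v) == v for v in vals)
print("every FP24-quantised value survived the FP32 store bit-exactly")
[/code]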
You seriously think an NV3x chip is going to run your FP32-requiring "post-2004" software at a playable speed at anything bigger than a postage-stamp-sized screen resolution? You've got to be joking!
Also, there is no such thing as "DX9+". You need to stop reading those Nvidia PR press releases, heh heh.
Oh yes, makes sense to me.
So, what are you going to say when a game based on Unreal Engine 3 just won't run on an R300?
I'd say the game's buggy, and what else is new? Sweeney-engined games always are. Unreal was a total disaster from a stability standpoint when it came out in 1998; I've never played a game that crashed that much. Same thing again with Unreal 2, the worst p.o.s. in years (and what's up with those load times, anyway? Did the game secretly convert my hard drive into a C64 tape deck?).
FP24 internal precision is specced in the API; FP16 (and FX12, for that matter) aren't. Get over it, pal: your team lost the match, so stop bitching and whining.
*G*