When on Tuesday does the G70 NDA expire?

Some quick observations from the HardSpell review:

* RSX and G70 are more or less the same thing (in terms of ALUs, operations per clock, etc.)
* Lots of games seem to be CPU-bound these days. Performance increases range from non-existent (HL2, UT2004) to pretty significant (Colin McRae 2005), but depend very much on the game and resolution.
* Antialiasing on alpha textures seems to work pretty well, though I've seen no benchmarks. This could be the killer feature this generation, if performance is okay. Check out these screenshots:
This is from a 7800 GTX
This is from the X800

:oops:
 
Thx Demi

One more question: 165.0648 GFLOPS! Is this for real? If so... why the low 3DMark05 score? It's got 3x more raw pixel-shader ALU power for MAD instructions with vector4 data than the GF6800 Ultra... so why the bad score?

Bad Score = 7700
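For what it's worth, here's a quick sanity check of where a figure around 165 GFLOPS could come from. The pipe and ALU counts below are my assumptions (24 pixel pipes, two vec4 MAD-capable ALUs each, 430 MHz core clock), not something stated in the article, and the result lands near but not exactly on the quoted 165.0648, so they may be counting slightly differently:

```python
# Hypothetical G70 pixel-shader throughput estimate (all figures assumed).
PIPES = 24          # pixel pipelines (assumed)
ALUS_PER_PIPE = 2   # vec4 MAD-capable ALUs per pipe (assumed)
COMPONENTS = 4      # vector4: four components per ALU op
FLOPS_PER_MAD = 2   # a multiply-add counts as two floating-point ops
CLOCK_HZ = 430e6    # core clock (assumed 430 MHz)

gflops = PIPES * ALUS_PER_PIPE * COMPONENTS * FLOPS_PER_MAD * CLOCK_HZ / 1e9
print(gflops)  # 165.12 -- close to, but not exactly, the quoted 165.0648
```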

Also... no official 3Dc support?? Or does that fall under DirectX and S3TC texture compression?
 
The article seems to say G70 supports 64-bit and 128-bit through NV's typical register combining? Though apparently you'd want to be very judicious in your 128-bit use. :)
 
4xFP16 = 64-bit, 4xFP32 = 128-bit. It's the same marketing stuff as always. Of course you can get much higher precision manually (just like a CPU can do "infinite"-precision operations).
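Those bit widths are just the four channels added up. A quick way to see it with Python's `struct` module (the `'e'` format is an IEEE 754 half-precision float):

```python
import struct

# Four FP16 channels (e.g. RGBA) pack into 64 bits, four FP32 into 128 bits.
fp16_pixel = struct.pack('4e', 1.0, 0.5, 0.25, 1.0)  # 4 x 16-bit halves
fp32_pixel = struct.pack('4f', 1.0, 0.5, 0.25, 1.0)  # 4 x 32-bit floats

print(len(fp16_pixel) * 8)  # 64
print(len(fp32_pixel) * 8)  # 128
```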

The AA might have a nice little surprise yet to show...
 
Err... AFR2?? Damn Babelfish for being just good enough to frustrate the hell out of me! :LOL:
 
from the pdf
- Full 128-bit studio-quality floating point precision through the entire rendering pipeline with native hardware support for 32bpp, 64bpp, and 128bpp rendering modes
- 64-Bit Texture Filtering and Blending
- Full floating point support throughout entire pipeline
- Floating point filtering improves the quality of images in motion
- Floating point texturing drives new levels of clarity and image detail
- Floating point frame buffer blending gives detail to special effects like motion blur and explosions
 
16-bit precision: 2 × 8-bit channels, or 1 × 16-bit channel
32-bit precision: 4 × 8-bit channels, 2 × 16-bit channels, or 1 × 32-bit channel
64-bit precision: 4 × 16-bit channels, or 2 × 32-bit channels
128-bit precision: 4 × 32-bit channels
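If I'm reading the translation right, each mode is just channel count × channel width. A sketch checking the combinations (this list is my reconstruction of the garbled text, so treat it as an assumption):

```python
# (total bits, channel count, bits per channel) for each claimed combination
combos = [
    (16, 2, 8), (16, 1, 16),
    (32, 4, 8), (32, 2, 16), (32, 1, 32),
    (64, 4, 16), (64, 2, 32),
    (128, 4, 32),
]
for total, count, width in combos:
    assert total == count * width, (total, count, width)
print("all combinations add up")
```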

Does that support your read, Xmas?
 
PatrickL said:
One thing comes to mind after going through the graphs: is the X800 XL really that good?

Hey, no kidding. My first reaction was to be annoyed that the only ATI card they compared with was the XL... and as I went on I found myself saying, "Hey, not bad!" :LOL:
 
Since there are 24 pixel ALUs and 8 vertex ALUs... I take it they're not amalgamated like before? Is this the reason why 3DMark05 doesn't work well with the GF 7800?
 
Where it's not CPU-bound, we seem to be getting 40-50%, which is reasonable. Very excited about the overclocking results, though: 500/1380 on that one, which is impressive.

And AA on alpha textures is 8)
 