BFG spec of the GFFX

olivier

• Controller: NVIDIA GeForce FX
• Bus type: AGP
• Memory: 128MB DDR-II
• Memory Bandwidth: 16 GB/second (32 GB/second with compression)
• Core clock: 500MHz
• Memory clock: 500MHz (1000MHz effective DDR-II)
• RAMDAC: 2 @ 400MHz each
• Connectors: VGA, VIVO, DVI-I
• 200 million triangles/sec


Where is the 48 GB/sec with compression, and why is the triangle rate now only 200 million??? :?:
 
updated....

Specifications
• Controller: NVIDIA GeForce FX
• Bus type: AGP
• Memory: 128MB DDR-II
• Memory Bandwidth: 16 GB/second
• Core clock: ~500MHz
• Memory clock: 500MHz (1000MHz effective DDR-II)
• RAMDAC: 2 @ 400MHz each
• Connectors: VGA, VIVO, DVI-I
• 125 million transistors
• 350 million triangles/sec

Minimum System Requirements
• Intel Pentium® III, AMD® Duron™ or Athlon™ class processor or higher
• 128MB of RAM
• 350-watt power supply minimum
• An available AGP 2.0 slot
• Empty PCI slot adjacent to AGP slot
• Available 4-pin power connector from internal power supply
• CD-ROM Drive
• 10MB available hard disk space (50 MB for full installation)
• Windows® 95 OSR2, 98 or higher, ME, 2000, XP, NT 4.0 with Service Pack 5 or 6, or Linux
 
Very interesting. You would think that the card could push a poly per clock cycle like the Radeon 9700. Something does not add up at all.
 
The tri rate is limited by T&L and by tri-setup speed. In the theoretical case where each additional tri requires only 1 new vertex (e.g. a long triangle strip), you can count vertex shader throughput and take MIN(vertex throughput, tri-setup rate).

A homogeneous transform requires 4 dot products, so peak vertex rate is limited by how many dp4s you can do per cycle. R300 can do 4; it appears the NV30 can do 3. So at 300MHz, R300 can do 300M peak. At 500MHz, NV30 can do 500/4 * 3 = 375M. (Not sure how the 325M is calculated.)
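For what it's worth, here is that arithmetic as a minimal sketch (the per-cycle dp4 counts are the speculative figures from above, and it assumes one new vertex per additional triangle):

```python
# Back-of-the-envelope peak triangle throughput. Assumes a homogeneous
# transform costs 4 dp4s and each additional triangle adds 1 new vertex.

def peak_vertex_rate_m(clock_mhz, dp4_per_cycle, dp4_per_vertex=4):
    """Peak transformed vertices per second, in millions."""
    return clock_mhz * dp4_per_cycle / dp4_per_vertex

def peak_tri_rate_m(vertex_rate_m, setup_rate_m):
    """Triangle rate is MIN(vertex throughput, tri-setup rate)."""
    return min(vertex_rate_m, setup_rate_m)

# Speculative figures from the discussion above:
print(peak_vertex_rate_m(300, 4))  # R300 at 300MHz -> 300.0M/sec
print(peak_vertex_rate_m(500, 3))  # NV30 at 500MHz -> 375.0M/sec
```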

Whether you care about per-cycle efficiency depends on whether you care about vertex or pixel shader performance. It also depends on how you are going to ramp your architecture: more units vs. clock speed. AMD vs. Intel.
 
DemoCoder said:
Whether you care about per-cycle efficiency depends on whether you care about vertex or pixel shader performance. It also depends on how you are going to ramp your architecture: more units vs. clock speed. AMD vs. Intel.

I read somewhere that the NV30 has only two vertex units: two "huge" ones -- unlike the four "smaller" ones that the R300 has...
 
Ostsol said:
I read somewhere that the NV30 has only two vertex units: two "huge" ones -- unlike the four "smaller" ones that the R300 has...

Well, nVidia has stated that with the NV30 they don't have multiple vertex units, but instead one large "cluster." I think it will be very interesting to find out what this means in terms of performance. Will this "cluster" have higher performance with many lights than the individual vertex units of the past? Will it perform better with point lights? Or is the motivation for this type of processor purely focused on transistor savings? I think this will be a very interesting question to attempt to answer in the next few months (assuming nVidia won't just tell us... which would also be nice).
 
Chalnoth said:
(assuming nVidia won't just tell us... which would also be nice).
Not that you should believe anything you are told by any company that wishes to sell you products.
We'd still want to verify :)
 
The cluster approach yields superior performance if your shader contains a good mixture of scalar, 2-, and 3-component ops. If your shader is dominated by dp4 ops, it will be less efficient, since those specialized 4-component units will be idle.
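As a toy illustration of that trade-off (every unit count and cost below is made up for the example, not a real NV30 or R300 configuration):

```python
import math

# Toy cycle counts for a shader given as a list of op widths:
# 1 = scalar, 2/3 = narrow vector, 4 = dp4.

def cycles_vec4_units(ops, units=4):
    # Each op occupies a whole vec4 unit for a cycle, so narrow ops
    # waste the unused lanes.
    return math.ceil(len(ops) / units)

def cycles_scalar_cluster(ops, lanes=12):
    # Scalar lanes can pack components from different ops together, but
    # a dp4 is charged 7 slots (4 muls + 3 reduction adds) as a crude
    # stand-in for how poorly it maps onto independent scalar lanes.
    slots = sum(7 if w == 4 else w for w in ops)
    return math.ceil(slots / lanes)

mixed = [1, 2, 3, 1, 3, 2]  # good mixture of scalar/2/3-component ops
dp4s = [4, 4, 4, 4]         # dominated by dp4 ops

for name, shader in (("mixed", mixed), ("dp4-heavy", dp4s)):
    print(name, cycles_vec4_units(shader), cycles_scalar_cluster(shader))
# mixed:     2 cycles on vec4 units vs 1 on the cluster -> cluster wins
# dp4-heavy: 1 cycle on vec4 units vs 3 on the cluster -> vec4 units win
```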
 
DemoCoder said:
The cluster approach yields superior performance if your shader contains a good mixture of scalar, 2-, and 3-component ops. If your shader is dominated by dp4 ops, it will be less efficient, since those specialized 4-component units will be idle.

In the end, what wins out? The architectural flexibility of the cluster approach is much more elegant, and in theory should be better suited -- but in the real world of development, I often have my doubts.
 
Vince said:
In the end, what wins out? The architectural flexibility of the cluster approach is much more elegant, and in theory should be better suited -- but in the real world of development, I often have my doubts.

With the adoption of HLSLs, the mix of machine instructions in a shader will become more of an issue for the compiler and less for the application developer.
 
First ATI releases the 9700 and recommends a 300W PSU; now Nvidia is saying the minimum requirement is 350W! We are talking minimum here. What next? Voodoo Volts-style power connectors?

At least Nvidia is playing it safe and stating the minimum requirement, which I am sure will save many users hours of headaches.
 
Fuz said:
At least Nvidia is playing it safe and stating the minimum requirement, which I am sure will save many users hours of headaches.

As long as they factor in a PSU upgrade when they buy it. I guess most FX buyers would have 400W+ PSUs by now.
 
What I find most interesting is that BFG calls this product just the "GFFX"... without any "Ultra" appendage.

Could this be an indication that we won't see a lower-clocked version any time soon? (Since the lower version would be outperformed by a 9700 Pro? :) )
 
2B-Maverick said:
Could this be an indication that we won't see a lower-clocked version any time soon? (Since the lower version would be outperformed by a 9700 Pro? :) )
Or an indication that we will see an even higher-clocked GFFX marketed as 'Ultra'.
ciao,
Marco
 
nAo said:
2B-Maverick said:
Could this be an indication that we won't see a lower-clocked version any time soon? (Since the lower version would be outperformed by a 9700 Pro? :) )
Or an indication that we will see an even higher-clocked GFFX marketed as 'Ultra'.
ciao,
Marco

hehe... the more... the better...

At least that would drive the price down on the lower model... I don't want to spend more than 300 Euros on my next card, and the 9700 non-Pro just entered that region barely a week ago.
But my next purchase is still 2 to 4 months off... so there's time for NV to bring prices down and make an NVidiot out of a FanATIc :LOL:
 
nAo said:
Or an indication that we will see an even higher-clocked GFFX marketed as 'Ultra'.

Actually, some of the more recent rumors I've heard say that there won't be a higher- or a lower-clocked version: only one version, with 500MHz DDR-II and an "approximately" 500MHz core clock.

However, there will be one version with 256MB of memory (about $499 U.S. retail) and one with 128MB ($399 U.S. retail). Again, just rumors.
 