GeForce FX at 64 GB/s Bandwidth???!!

It's the chip's internal bandwidth figure.
Nvidia suggested that the internal bandwidth is actually 64 GB/s...

Still, the closest I can come up with is:
((512/8) * 500000000 / 1024 / 1024 / 1024) * 2 = 59.604645 GB/s
which would mean they have a 512-bit internal DDR bus running at 500MHz.
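A quick sanity check of that arithmetic in Python (the 512-bit width, 500MHz clock, and DDR doubling are the guesses above, not confirmed specs) shows the ~59.6 vs. 64 gap is just binary versus decimal gigabytes:

```python
# Speculative 512-bit internal DDR bus at 500 MHz (guesses from the post above).
bus_width_bits = 512
clock_hz = 500_000_000
ddr_factor = 2  # DDR transfers data twice per clock

bytes_per_second = (bus_width_bits // 8) * clock_hz * ddr_factor

gib_per_s = bytes_per_second / 1024**3  # binary gigabytes (GiB/s)
gb_per_s = bytes_per_second / 1000**3   # decimal gigabytes (GB/s)

print(f"{gib_per_s:.6f} GiB/s")  # 59.604645 -- the figure computed above
print(f"{gb_per_s:.1f} GB/s")    # 64.0 -- the marketing number
```

So the same hypothetical bus yields exactly 64 GB/s if you divide by powers of 1000 instead of 1024.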

Well, it wouldn't be the first time the Inquirer proudly reported something that doesn't quite add up...
 
Or it could be a best-case figure, where the 4:1 lossless colour compression works like a charm...

16GB/s*4=64GB/s
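As a sketch, that best-case effective-bandwidth claim (the 16 GB/s raw figure and 4:1 ratio are taken from the posts above; real workloads won't compress perfectly everywhere):

```python
raw_bandwidth_gb_s = 16.0  # NV30 raw memory bandwidth cited above
compression_ratio = 4.0    # 4:1 lossless colour compression, best case only

effective_gb_s = raw_bandwidth_gb_s * compression_ratio
print(effective_gb_s)  # 64.0 -- only when every access compresses at 4:1
```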

I wonder more about how they came up with the 48 GB/s number...


With Regards
Kjetil
 
Nappe1 said:
It's the chip's internal bandwidth figure.
Nvidia suggested that the internal bandwidth is actually 64 GB/s...

Still, the closest I can come up with is:
((512/8) * 500000000 / 1024 / 1024 / 1024) * 2 = 59.604645 GB/s
which would mean they have a 512-bit internal DDR bus running at 500MHz.
Possibly. But a 512-bit DDR (effectively 1024-bit) bus at 500MHz means a plain 64 GB/s to me ;)
 
I seem to remember reading on a tech hardware site that both the Radeon 9700 Pro and the NV30, strangely enough, have the same 64 GB/sec THEORETICAL bandwidth under IDEAL conditions. So what? You need to ask what bandwidth will be available under normal conditions.

Most reviewers seem to think the NV30, with its 16 GB/sec of raw bandwidth, will beat the R300's 19.8 GB/sec of raw bandwidth by around 30%, and that's BEFORE a clear analysis of LMA 3 is evident.

Why don't we just wait and see?
 
You know what, I'm starting to believe The Inquirer. If they say it, it has to be true. I mean, pretty much everything they said about the NV30 was true. They were the only ones to say it would have a 128-bit bus; they said it would have DDR2 at 1 GHz; all of that came true, plus much more. So if they say it will have 64 GB, it will have 64 GB.
 
g__day said:
Most reviewers seem to think the NV30, with its 16 GB/sec of raw bandwidth, will beat the R300's 19.8 GB/sec of raw bandwidth by around 30%, and that's BEFORE a clear analysis of LMA 3 is evident.

Why don't we just wait and see?

Or is that because that's all NVIDIA is telling them, seeing as how no one but NVIDIA actually has working boards? Hmmm 8)
 
Maybe...

8 pixel shaders generating 128-bit output (16 bytes) at 500MHz gives 64 GBytes/second. Maybe a coincidence...
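Crazyace's figure checks out (pipe count, output width, and clock are the speculated NV30 numbers from this thread, not confirmed specs):

```python
pipes = 8                    # speculated NV30 pixel shader pipes
bytes_per_output = 128 // 8  # 128-bit output per pipe = 16 bytes
clock_hz = 500_000_000

output_bytes_per_s = pipes * bytes_per_output * clock_hz
print(output_bytes_per_s / 1000**3)  # 64.0 GB/s (decimal gigabytes)
```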
 
I would tend to think Crazyace's formula is how NVIDIA would come up with such a number. Kind of like how they got the GeForce 256 to have an internal 256-bit bus, isn't it (4 pixel pipes, 2 textures per pipe, 32-bit textures)?
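The GeForce 256 naming arithmetic mentioned here works the same way (pipe, texture, and bit-depth figures as stated in the post):

```python
pipes = 4              # GeForce 256 pixel pipelines
textures_per_pipe = 2  # textures applied per pipe per clock
bits_per_texel = 32    # 32-bit textures

marketing_bits = pipes * textures_per_pipe * bits_per_texel
print(marketing_bits)  # 256 -- the "256" in the product name
```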
 
Joe DeFuria said:
You know what, I'm starting to believe The Inquirer. If they say it, it has to be true.

:eek:
LOL! :D
sorry, that actually was funny. ;)

sancheuz: are you sure that you can see the difference between wanting to believe and believing? ;)
 
I think the 512-bit number with Parhelia is a reference to the memory bus (256-bit DDR).
 
Tagrineth said:
Randell said:
Nope, it's (32-bit colour + 24-bit Z + 8-bit stencil) x 4 pipes.

So how is Parhelia 512-bit?


Heh-heh... the nVidia description of "256" has nothing to do with current conventions in naming things like this--it was entirely an nVidia invention, designed primarily to foster the impression of a "256-bit internal bus" when in fact none existed. At least, that's what I think. This all takes me wa-a-a-a-ay back in the way-back machine....

nVidia marketers have been clever in the past. They never "lie" outright (usually). What they do is mislead, then sit back and wait for the person they are trying to manipulate to reach an erroneous conclusion, and then they don't bother to correct it...;)

We saw a smidgen of that in Dave's interview with Geoff. Dave did not allow himself to be led. Anyway, Dave asked Geoff whether or not the FX still took only 8 samples for AF. Geoff did not say, "yes." Instead, he said:

"Eight trilinear filtered samples." (Working from memory here, so pardon me if I'm not exact on the wording.) I thought that was a very strange answer considering that Dave hadn't asked anything about trilinear samples....;) Then it hit me: the hope was that Dave would say something like, "So you mean the ATI 9700 Pro does 16 bilinear samples?" (Because what else could Dave's question really concern, since only the 9700 does > 8 AF samples?) If Dave had said this, I think Geoff would just have shrugged, grunted, or winked--and let Dave merrily go on to repeat the error to the whole world. Dave's not that shallow--thank God!....;)

Seriously, I don't know how many times I've seen nVidia PR people work that same kind of trick on less knowledgeable people--and my stomach churned as the people fell for it and glibly repeated whatever fundamental error the nVidia PR person had led them into. It was amazing to watch them do this sort of thing when 3dfx was trying to tell everybody that AGP bus speeds weren't a big deal for 3D performance--and some of the "pundits" were screeching about AGP 4x as though it was a critical feature. The beauty of it is that nVidia never directly repeats or states the errors--it's the people they deign to grant interviews to who do all of that (except for Dave B....;)) All nVidia PR does is skillfully lead them into the errors--they do all the rest.

(Sorry if this is a bit strong for some--but I've honestly seen it many times. Perez was a master.)
 