Ok, everyone has the feeling that they aren't telling us something. I think I know what it is.
nVidia is claiming an effective 48 GB/s bandwidth, and everyone has been assuming that it's because of their 4:1 colour compression. Remember much about Mojo? It was going to be based on GigaPixel's deferred renderer. With the ability to cull TO (not with) a depth complexity of one, then:
uncullable info (shader progs, geometry, etc) = UI
bandwidth = 16
avg depth complexity = 4
inefficiency = 20% (?)
((16 - (0.2 * 16)) * 4) - UI = effective bandwidth
((16 - 3.2) * 4) - UI = effective bandwidth
(12.8 * 4) - UI = effective bandwidth
51.2 - UI = effective bandwidth
51.2 - ~3.2 ≈ 48 = estimated effective bandwidth
NV30 is a deferred renderer!!!
And THAT is how they are able to make almost ludicrous performance claims! It all makes sense now!!!
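Here's that same back-of-the-envelope math as a quick Python sketch, just so the arithmetic is explicit. The 20% inefficiency and the ~3.2 GB/s of uncullable traffic are my guesses, not anything nVidia has published:

Code:
def effective_bandwidth(raw_gbps=16.0,            # physical memory bandwidth (GB/s)
                        avg_depth_complexity=4.0, # overdraw a deferred renderer would avoid
                        inefficiency=0.20,        # guessed fraction of wasted bandwidth
                        uncullable_gbps=3.2):     # guessed shader progs, geometry, etc.
    usable = raw_gbps * (1.0 - inefficiency)      # 16 - 3.2 = 12.8
    boosted = usable * avg_depth_complexity       # 12.8 * 4 = 51.2
    return boosted - uncullable_gbps              # 51.2 - 3.2 = 48.0

print(effective_bandwidth())  # prints 48.0, right on the claimed number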
http://www.beyond3d.com/forum/viewtopic.php?t=3209
Ascended Saiyan Wrote:
Quote:
One last interesting tid-bit, 3dfx fans that are still out there will appreciate, is a statement Dan Vivoli, NVIDIA's Vice President of Marketing, made in the "GeForce FX Story" video that was sent out to the media. He stated that the name GeForce FX was chosen for two reasons. First, was because they are trying to create effects (FX) that are much closer to what you'd see in cinema. Then he goes on to say, "This is the very first product where the combined efforts of what was 3dfx and is now NVIDIA came together to create this product, and the combined "mojo" of 3dfx and NVIDIA are what make the heart and soul of GeForce FX."
Heh, quite funny indeed, but according to the preview from Hot Hardware it features MOJO tech from 3dfx, whatever the hardware capabilities of that were supposed to be.
MokeC Wrote:
Vince wrote:
The GeForce FX's occlusion culling algorithm has the capability to cull objects with a depth complexity of 1. - nV news
Explain. I'm serious: is that cull objects to a depth complexity of one? Because if an object has a DC = 1, how can you cull an object if it's the only thing there? This wording is confusing me.
I was under the impression that individual objects in a scene could be associated with a depth complexity value, which is based on the order they are rendered.
For example, if object A is occluded by object B, which is occluded by object C, the depth complexity of the scene would be 3. What I was visualizing from the preview is that the parts of A being occluded by B and C would be culled.
In this case, I had associated object A with a depth complexity of 1, B with 2, and C with 3.
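A toy sketch of the example MokeC describes, with made-up fragment data (this is purely illustrative and has nothing to do with how NV30 is actually implemented): culling TO a depth complexity of 1 would mean that per pixel, fragments from A, B, and C may all arrive, but only the nearest one survives to be shaded and written, so the shaded work sees a depth complexity of 1 no matter how high the scene's overdraw is.

Code:
# Per-pixel fragments: (pixel, depth, object). Overdraw at (10, 10) is 3.
fragments = [
    ((10, 10), 0.9, "A"),   # A is behind B and C at this pixel
    ((10, 10), 0.5, "B"),
    ((10, 10), 0.2, "C"),   # C is nearest, so only C gets shaded here
    ((11, 10), 0.7, "A"),   # A is the only thing at this pixel
]

nearest = {}
for pixel, depth, obj in fragments:
    if pixel not in nearest or depth < nearest[pixel][0]:
        nearest[pixel] = (depth, obj)

for pixel, (depth, obj) in sorted(nearest.items()):
    print(f"pixel {pixel}: shade {obj} once -> shaded depth complexity = 1")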
http://www.beyond3d.com/forum/viewtopic.php?t=3196&start=140
Joe Defuria Wrote:
ACK! Let's see nVidia live up to this claim (emphasis added by me):
Quote:
What's more, the GeForce FX's innovative new architecture includes an advanced and completely transparent form of lossless depth Z-buffer and color compression technology. The result? Essentially all modes of antialiasing are available at all resolutions without any performance hit. Greatly improved image quality, with no drop in frame rate!
I guess the validity of that statement will depend on what the nVidia definition of "essentially all" means...