my GOD i think i figured out something about NV30....

Sage

ok, everyone has the feeling that they aren't telling us something. I think I know what it is.
Ascended Saiyan Wrote:
Quote:
One last interesting tid-bit, 3dfx fans that are still out there will appreciate, is a statement Dan Vivoli, NVIDIA's Vice President of Marketing, made in the "GeForce FX Story" video that was sent out to the media. He stated that the name GeForce FX was chosen for two reasons. First, was because they are trying to create effects (FX) that are much closer to what you'd see in cinema. Then he goes on to say, "This is the very first product where the combined efforts of what was 3dfx and is now NVIDIA came together to create this product, and the combined "mojo" of 3dfx and NVIDIA are what make the heart and soul of GeForce FX."


Heh, quite funny indeed, but according to the preview from Hot Hardware it features MOJO tech from 3dfx, whatever the hardware capabilities of that were supposed to be.
http://www.beyond3d.com/forum/viewtopic.php?t=3209
MokeC Wrote:
Vince wrote:
The GeForce FX's occlusion culling algorithm has the capability to cull objects with a depth complexity of 1. - nV news

Explain. I'm serious. Is that cull objects to a depth complexity of one? Because if an object has a DC = 1, how can you cull an object if it's the only thing there? This wording is confusing me.


I was under the impression that individual objects in a scene could be associated with a depth complexity value, which is based on the order they are rendered.

For example if object A is occluded by object B, which is occluded by object C, the depth complexity of the scene would be 3. What I was visualizing in the preview is that the parts of A being occluded by B and C would be culled.

In this case, I had associated object A with a depth complexity of 1, B with 2, and C with 3.
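
To make that concrete, here's a rough sketch (in Python) of what culling a pixel down to a depth complexity of one would mean. This is purely illustrative -- the objects A/B/C follow the example above, and nothing here reflects NVIDIA's actual ZCULL hardware:

# Three objects cover the same pixel at different depths (smaller = nearer),
# drawn in the order A, B, C as in the example above.
scene = [("A", 0.9), ("B", 0.5), ("C", 0.1)]

depth_complexity = len(scene)  # 3 fragments land on this pixel

# Without culling, every fragment gets shaded and written.
shaded_without_cull = depth_complexity

# An ideal occlusion cull shades only the nearest fragment per pixel,
# effectively reducing depth complexity to 1.
winner = min(scene, key=lambda obj: obj[1])
shaded_with_cull = 1

print("fragments shaded without culling:", shaded_without_cull)  # 3
print("fragments shaded with ideal culling:", shaded_with_cull,  # 1 ("C")
      "(only", winner[0] + ")")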
Joe Defuria Wrote:
ACK! Let's see nVidia live up to this claim (emphasis added by me):

Quote:
What's more, the GeForce FX's innovative new architecture includes an advanced and completely transparent form of lossless depth Z-buffer and color compression technology. The result? Essentially all modes of antialiasing are available at all resolutions without any performance hit. Greatly improved image quality, with no drop in frame rate!


I guess the validity of that statement will depend on what the nVidia definition of "essentially all" means...
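
For what it's worth, here's one plausible reading of how lossless colour compression and multisampling fit together -- a minimal sketch under my own assumptions, not NVIDIA's documented scheme. With 4x MSAA, any pixel fully inside a triangle holds one shaded colour replicated across all four samples, which compresses 4:1 losslessly; only edge pixels need every sample kept:

def compress_msaa_pixel(samples):
    """Losslessly compress one pixel's MSAA samples when they all match."""
    if all(s == samples[0] for s in samples):
        return ("compressed", samples[0])  # store 1 colour instead of 4
    return ("raw", list(samples))          # edge pixel: keep every sample

interior = [(255, 0, 0)] * 4                # fully covered: 4 identical samples
edge = [(255, 0, 0)] * 3 + [(0, 0, 255)]    # partially covered edge pixel

print(compress_msaa_pixel(interior))  # ('compressed', (255, 0, 0)) -> 4:1
print(compress_msaa_pixel(edge))      # ('raw', [...]) -> no savings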
http://www.beyond3d.com/forum/viewtopic.php?t=3196&start=140

nVidia is claiming an effective 48GB/s bandwidth and everyone has been assuming that it's because of their 4:1 colour compression. Remember much about Mojo? It was going to be based on GigaPixel's deferred renderer. With the ability to cull TO (not with) a depth complexity of one then:

uncullable info (shader progs, geometry, etc) = UI
bandwidth = 16
avg depth complexity = 4
inefficiency = 20% (?)

((16 - (0.2 * 16)) * 4) - UI = effective bandwidth
((16 - 3.2) * 4) - UI = effective bandwidth
(12.8 * 4) - UI = effective bandwidth
51.2 - UI = effective bandwidth
48 = estimated effective bandwidth (implying UI = 3.2)


NV30 is a deferred renderer!!!!!!!!!
and THAT is how they are able to make almost ludicrous performance claims! it all makes sense now!!!!
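
Redoing that arithmetic as a quick script makes the implied numbers explicit. Every input here is Sage's assumption from the post above, not an NVIDIA figure: 16 GB/s of physical bandwidth, 20% inefficiency, an average depth complexity of 4, and UI (uncullable traffic) falling out as the slack:

# A minimal sketch of the effective-bandwidth estimate above; all
# values are assumptions from the post, not NVIDIA numbers.
physical_bw = 16.0        # GB/s of physical memory bandwidth (assumed)
inefficiency = 0.20       # assumed fraction lost to overhead
avg_depth_complexity = 4  # assumed average overdraw per pixel
claimed_effective = 48.0  # GB/s, the figure circulating in previews

usable_bw = physical_bw * (1 - inefficiency)        # 12.8 GB/s
gross_effective = usable_bw * avg_depth_complexity  # 51.2 GB/s
ui = gross_effective - claimed_effective            # implied UI = 3.2 GB/s

print(f"usable: {usable_bw} GB/s, gross effective: {gross_effective} GB/s")
print(f"implied uncullable traffic (UI): {ui} GB/s")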
 
DemoCoder said:
Josh @ Penstar asked about Deferred Rendering and T-Buffer. Gary said any effect done by T-Buffer can be done by pixel shaders and multisample buffer in NV30.

Someone else answered that the NV30 is NOT a deferred renderer or tiler, but instead uses 3rd generation hierarchical Z, etc

:D :LOL:
 
NV30 is a deferred renderer!!!!!!!!!

a.) I've already posted this in another thread saying that it's not a deferred renderer; I asked this specific question and the answer was no - more in an upcoming interview...

b.) If you stopped to think about it, if it was a deferred renderer why bother wasting transistors on compression at all? NVIDIA themselves admit that the main benefit for this is with AA, but a deferred renderer can have MSAA for free anyway, which would make compression a total waste.

The fact that it has compression was the indication that NV30 was definitely not a full deferred renderer for me.
 
a nice thought

but I don't think it's a deferred renderer; something that big they would have documented in the information sent out about the card...

a nice thought though, maybe NV40 :p
 
ok, even if it isn't a (full) deferred renderer, reducing the depth complexity to one would make the algorithm still apply
 
http://www.beyond3d.com/forum/viewtopic.php?p=55697&highlight=#55697

DaveBaumann


I've just got back from the launch - I've not had a look at all this thread but here are 4 things:

1.) It's a 128-bit bus.

2.) It's 8x1, not 8x2.

3.) There are no GigaPixel deferred rendering / geometry caching elements. ZCULL can reject more pixels per clock than GF4.

4.) FSAA is similar to GF4 and there's no programmable grids etc.

I interviewed one of the tech guys and assuming my tape caught it all I'll have it up as soon as I can.
 
nAo said:
Umh.. I won't say anything at this time, I would like to double-check my sources, but if what I was told is true I do think nv30 will have no prob with bandwidth at all..
 
perhaps Nvidia has used elements of the GigaPixel technology
-the elements of GP tech that make sense, while excluding the elements that don't make sense.

I'm surprised, though, that GeForce FX doesn't have a better method of FSAA than the NV25, just more samples of it. (right?)


Perhaps we won't see a true deferred renderer from Nvidia until NV40 or NV50.
(thus XBox 2 as well)
 
And perhaps it gets a speed boost from magical pixie dust; that is more likely than there being anything from GigaPixel which would help them with bandwidth use as long as they don't build a tiler.
 
Sage said:
nVidia is claiming an effective 48GB/s bandwidth and everyone has been assuming that it's because of their 4:1 colour compression. Remember much about Mojo? It was going to be based on GigaPixel's deferred renderer. With the ability to cull TO (not with) a depth complexity of one then:

uncullable info (shader progs, geometry, etc) = UI
bandwidth = 16
avg depth complexity = 4
inefficiency = 20% (?)

((16 - (0.2 * 16)) * 4) - UI = effective bandwidth
((16 - 3.2) * 4) - UI = effective bandwidth
(12.8 * 4) - UI = effective bandwidth
51.2 - UI = effective bandwidth
48 = estimated effective bandwidth (implying UI = 3.2)


NV30 is a deferred renderer!!!!!!!!!
and THAT is how they are able to make almost ludicrous performance claims! it all makes sense now!!!!


Yea, I'm wondering how on earth nVidia can make the claim that its "effective" memory bandwidth is 3x their physical bandwidth. It's easy to see how the 48GB/sec "effective" has to be a very limited scenario occurring only under choice conditions, and it's also easy to see that there's no deferred rendering involved--because if either statement were true then nV30 would not need 500MHz ram delivering 16 gigs/sec--heck, they could get by with "crappy old" DDR1 or maybe even SDR ram if the product did DR with that kind of "effective" bandwidth. Kind of a contradiction in terms, actually, IMO.
 
Another conclusion would be, of course, that NV30 contains technology from the original Rampage only, which was not a deferred renderer at all.
No GigaPixel technology at all. 3DFX Rampage was a very potent conventional/IMR renderer, IIRC.
 
nVidia is claiming an effective 48Gb/s bandwidth...

Question: in any of the PR material released today, or at Comdex, did nVidia actually assert this 48 GB/Sec number?

Or is everyone (including some web reviewers) carrying it over from prior PR/CineFX docs that may have mentioned it?
 
the key to its not needing as much bandwidth pretty much has to be in their LMAIII. I suspect that it may use some GigaPixel tech to detect which objects need to be culled, but is not actually a deferred renderer. However, I'm a little at a loss as to what exactly z-culling is if it's not deferred rendering. Wouldn't the GeForce3 even be considered to have some degree of deferred rendering?
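
As a rough answer to that question: hierarchical Z rejects occluded pixels during an ordinary immediate-mode pass, with no binning or deferral of geometry. A toy sketch of the idea follows -- my own illustration, not NVIDIA's actual ZCULL/LMA implementation:

FAR_PLANE = 1.0
coarse_z = {}  # tile coordinate -> farthest (max) depth written in that tile

def hierarchical_z_test(tile, incoming_min_z):
    """True if incoming geometry could still pass the per-pixel depth test."""
    stored_max = coarse_z.get(tile, FAR_PLANE)
    # If even the nearest incoming depth is behind everything already in
    # the tile, every pixel would fail, so the whole tile is rejected
    # before any shading happens -- in submission order, nothing deferred.
    return incoming_min_z < stored_max

coarse_z[(0, 0)] = 0.3                   # a wall at z = 0.3 fills tile (0, 0)
print(hierarchical_z_test((0, 0), 0.7))  # False: box behind the wall, culled
print(hierarchical_z_test((0, 0), 0.1))  # True: nearer geometry still drawn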
 
megadrive0088 said:
Another conclusion would be, of course, that NV30 contains technology from the original Rampage only, which was not a deferred renderer at all.
No GigaPixel technology at all. 3DFX Rampage was a very potent conventional/IMR renderer, IIRC.

It's dumb comments like this that get me pissed off - nVidia is so far beyond Rampage it's not even funny. Time to let it go... That being said, I still think we need to wait and see what exactly LMA III is composed of before making any performance estimates.

Overall, I never expected a full-out region-based deferred renderer, but am surprised that they seem to have taken such a 'traditional' route. I was expecting something more, from a deferred shader to utilizing temporal and spatial coherence. Perhaps their ambitions of using the NV3x/CineFX core in offline, high-end rendering (SGI or Pixar-like renderfarms, as Carmack spoke of) influenced them. <shrug>
 
I still think we need to wait and see what exactly LMA III is composed of before making any preformance estimates.

Vince, wysiwyg.

We've seen all there is to see about LMAIII because it no longer exists as a piece of marketing they are pushing. LMAIII is now 'Intellisample' and there is nothing more to say.

Keep an eye open tomorrow.
 
Joe DeFuria said:
nVidia is claiming an effective 48Gb/s bandwidth...

Question: in any of the PR material released today, or at Comdex, did nVidia actually assert this 48 GB/Sec number?

Or is everyone (including some web reviewers) carrying it over from prior PR/CineFX docs that may have mentioned it?

Good question and well worth asking. If nVidia hasn't asserted it, it really doesn't deserve any comment. I confess to seeing it elsewhere, and since people were bringing it up here and there today, I was assuming it had been asserted by nVidia during the presentation.
 
DaveBaumann said:
I still think we need to wait and see what exactly LMA III is composed of before making any preformance estimates.

Vince, wysiwyg.

We've seen all there is to see about LMAIII because it no longer exists as a piece of marketing they are pushing. LMAIII is now 'Intellisample' and there is nothing more to say.

Keep an eye open tomorrow.

Thanks bud... Where was the NV30 presentation?

PS. What does 'wysiwyg' stand for?
 
The 48G number came from a document called 'Sneak Peak', which is where I got the details of NV30 that I posted on the news page some months back. '48GB' has not been mentioned in any docs since, IIRC.
 