NVIDIA CineFX Architecture (NV30)

Gollum said:
because just like NV30 isn't gonna be the "end all" graphics chip, R300 isn't either ...

Heh, what chip ever was or will be the end of graphics chips? ;) As long as comps are around, advancements will always be around.

NV30, meh... whatever. Getting worked up over some advanced programmability and features. Big whoop. Where are the actual specs for the card? Uh, nowhere.

I'm gonna wait till some actual specs come out; I get tired of speculation.
 
Where is the evidence for a programmable tessellator, btw? I haven't seen anything remotely official about it. Did I miss it somewhere in the SIGGRAPH papers?
 
Hellbinder[CE] said:
Just like the 1.1/1.3 PS approach was taken by most developers, they may also code for the common DX9 denominator: what the DX9 spec lists as "look, these are the instructions and this is DX9". Clearly Nvidia has overstepped the boundaries of what 98% of developers will be willing to do. However, it is also clear that the next couple of generational increments will trend more towards the open-ended nature of the NV30.

Now I think this is kind of funny. These are the exact same arguments that I made back when the Radeon 8500 was released (in different words, of course).

As I said before, however, I'd really like to see some developer comments on the NV30's featureset. For example, it might turn out that developers don't think that the R300's features are enough of a step over DX8 hardware to be worth their time to program for (granted, I find this prospect highly unlikely). But the point is, I don't know of a single full-time game developer who frequents this board, and I'd really like to know what they think of the NV30.

Yes, it is very, very true that the NV30's additional programmability will only be useful if it's used in actual games. But who decides how these features are put into those games?

As a side note on PS 1.4, it appears that most developers are merely going to use the spec for supposedly increased performance on 8500 cards (I say supposedly because the GeForce4 still tends to outperform the 8500, according to JC and the UT2k3 benches). However, DOOM3 does have engine support for one feature that the GF3/4 cards do not have: increased precision for certain color operations (I believe it's used for specular lighting in this scenario). So, while PS 1.4 doesn't get you much, it can get you something. Me, I'm still glad I got my GeForce4 Ti 4200, for a number of different reasons.

So, it is almost certain that at least a few games in the future will use the increased programmability of the NV30 over the R300. The questions we have to ask are: when will they be out, how much of an improvement will we see, and how many games will use these features?
 
Nagorak said:
I'm not sure why everyone is getting so excited about these "features". The fact of the matter is, unless you run a render farm, this isn't going to affect you at all. Nvidia might sell a few cards to people wanting to render movies, but it's not really going to make one damn bit of difference in the gaming market.

As far as games go, this is just going to be another PS 1.4. What's going to sell the card is being faster than the R300, and nothing in this paper suggests anything to support it being faster. Come release, we'll see.

I don't mean to rain on anyone's parade, but it seems like a lot of people are blowing this silly paper way out of proportion.
Democoder said that this particular forum is about 3D technology, not about games. Wavey has said that Beyond3D reviews will be technical in nature, not about gameplay. This "paper" states, to me, NVIDIA's intentions, which are broader than giving gamers what they want and/or appreciate.

This "paper" is not "silly" for those that think this forum is not simply about games. It is not even "silly" to the forum visitors of 3DGPU or NVNews... just incomprehensible and hence leads to a "don't care" attutude.

To me, the R300 and this NVIDIA paper suggest the way forward, the way you will be gaming... not the way you game.

Those that like technology will like this forum; those that like gaming may not.

But you echoed what I'd said: speed sells, since that is the basis on which new video cards are reviewed.

For more than a year now, new video cards have really been aimed at one "market": the developers. The gaming public does benefit from these new video cards, but that benefit is essentially speed... a speedier "out-of-the-box" experience in terms of aniso and AA. The gaming public should not care about new features that need developer support at the initial release of such new video cards... they should only care about speed and any extra "out-of-the-box" features. That's where most websites are relevant.

Hopefully, Beyond3D is not like the other privileged websites for the audience it appears to target.
 
Chalnoth said:
As I said before, however, I'd really like to see some developer comments on the NV30's featureset. For example, it might turn out that developers don't think that the R300's features are enough of a step over DX8 hardware to be worth their time to program for (granted, I find this prospect highly unlikely). But the point is, I don't know of a single full-time game developer who frequents this board, and I'd really like to know what they think of the NV30.

Yes, it is very, very true that the NV30's additional programmability will only be useful if it's used in actual games. But who decides how these features are put into those games?

The very idea that you are still propagandizing the notion that the R300's specs are *not a big enough leap over DX8* is, well... I can't say what it is, as there may be children present...

Secondly, the NV30's additional programmability is NOT usable in games. Coding shader effects that lengthy would cripple any game now or in the near future. Have you looked at them? Or read ANY of the other technical comments on this subject???

Please look at the 9700 demos again and tell me that the effects are not a quantum leap beyond what current shader tech offers...
 
From Tom's Hardware...

Each pixel rendering pipeline of the Radeon 9700 is a separate pixel shader. Following the new PS 2.0 spec, those shaders can run programs of up to 160 instructions. Each pixel shader program can do up to 32 texture sampling operations on up to 16 different texture maps, plus an additional 64 color operations per pass. The number of clock cycles per pass is, of course, variable and can certainly reach rather high numbers as well, especially when anisotropic filtering is used at the same time. Should the 160-instruction limit turn out to be too small, the result can be fed into the pixel shader for another pass without losing any precision, since the result can be handled in 64- or 128-bit floating point precision.

We learn a few things here:

First, the R300 follows the PS 2.0 spec... of which there IS NO OTHER OFFICIAL SPEC HIGHER, nor WILL THERE BE LATER WITH DX9. So, being the new spec... IT IS ENOUGH FOR DEVELOPERS... so please, enough with trying to say that the very PS 2.0 spec that everyone has been anticipating for MONTHS is suddenly too limited to use before DX9 even ships.

Secondly, it can do more than 160 instructions...

Thirdly, it does indeed support 64-bit as well as 128-bit operations.
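
To put hypothetical numbers on that second point (my arithmetic, not from the article): an effect needing, say, 300 instructions could be split across passes within the 160-per-pass limit:

    pass 1: 160 instructions, intermediate result stored at 64- or 128-bit float
    pass 2: the remaining 140 instructions, reading that intermediate back in

with no precision lost between passes, per the Tom's Hardware description above.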
 
This "paper" is not "silly" for those that think this forum is not simply about games. It is not even "silly" to the forum visitors of 3DGPU or NVNews... just incomprehensible and hence leads to a "don't care" attutude.

I will call you on this quote: from now on, please refrain from using games in your reviews... something I followed from your old VoodooExtreme articles. Your words, not mine.
The "silly" part of this equation is that another IHV besides Nvidia developed a better shader version (PS 1.4) and hardware tessellation, and nobody batted an eyelash; Nvidia releases some PR about their next GPU and everyone thinks it's the second coming of Christ... how hypocritical. Very.

You are correct with one statement, though... whoever scores better in three-year-old Quake 3 will win :rolleyes:
 
Originally posted by DemoCoder:
ATI went above the limit with 160

I don't think so (sometimes ATI refers to their limit as 64 color + 32 texture, sometimes they refer to the DX9 limit as 160 and the DX8.1 limit as 28). I think it all depends on how you count instructions:

R300 does 64 RGB + 64 alpha and 32 fetch according to all documentation released so far. Whether you count this as 96 or 160 depends on how optimistic you are that scalar and 3-vector operations can be paired in a meaningful way. In the same sense, PS1.1 supported 20 operations (4 fetch, 8 RGB+8 alpha).
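
To make the two countings concrete, here's a trivial sketch in C (my arithmetic, using the documented per-pass limits above):

    /* Two ways of counting R300 per-pass pixel shader slots: co-issued
       (a 3-vector RGB op pairs with a scalar alpha op in one slot)
       versus counting RGB and alpha separately. */
    #include <stdio.h>

    int main(void)
    {
        int rgb = 64, alpha = 64, fetch = 32;    /* documented R300 per-pass limits */
        printf("R300, paired:   %d\n", rgb + fetch);          /* 96  */
        printf("R300, separate: %d\n", rgb + alpha + fetch);  /* 160 */

        int p_rgb = 8, p_alpha = 8, p_fetch = 4; /* PS 1.1, same logic */
        printf("PS1.1, paired:   %d\n", p_rgb + p_fetch);             /* 12 */
        printf("PS1.1, separate: %d\n", p_rgb + p_alpha + p_fetch);   /* 20 */
        return 0;
    }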
 
Hi Chalnoth,
Chalnoth said:
Oh, and the nVidia CineFX doc did state pretty clearly that CineFX fully supported 1024 static instructions, with 64 loops for the 65536 number.
. . . whereas the latest NV Siggraph paper (found here) "only" implies 256 static ops.

More vertex program instructions (128 today, 256 soon)
[...]
Branching & call-return in vertex programs, 64K dynamic vertex instructions executed per vertex

edit:

Speculation: the CineFX paper speaks of NV3x and "1024 static instructions max". The overview paper above doesn't mention NV3x at all, just hardware "soon" to be released, which should feature 256 static instructions in the VS. Hence the first NV3x part (NV30) may feature 256 static instructions, whereas later NV3x parts (NV31, NV35) may aim at 1024 static instructions. /edit

ta,
-Sascha.rb
 
I am still trying to figure out how we can go from hardware meeting industry standards like PS 1.0, 1.1, 1.3, 1.4, etc., to one company blowing off industry standardization entirely, and somehow still being praised by the masses. If you had suggested PS 1.1+ a year ago, you would have been laughed to scorn...

It's ironic that they have used "not following industry standards" as a bludgeon against others for many years...

Is it not now perfectly clear that they developed Cg and the NV30 to thumb their nose at the entire industry? How can anyone still suggest that Cg was meant to benefit anyone but themselves? RenderMonkey, on the other hand, is a tool set that any company adhering to the DX9 spec can use.

Don't even play that "it's a language" game. It is crystal clear what Cg's real purpose is: it is nothing more than a modernization of the Glide concept, designed to bind as many developers as possible to Nvidia hardware. The *plug-in* thing is bogus. Follow the damn industry specs or get off the damn truck.
 
First, the R300 follows the PS 2.0 spec... of which there IS NO OTHER OFFICIAL SPEC HIGHER, nor WILL THERE BE LATER WITH DX9.

Don't do a Teasy and bet body parts on that. Last I heard, DX9 has been delayed again. Now think about why...
 
Doomtrooper said:
This "paper" is not "silly" for those that think this forum is not simply about games. It is not even "silly" to the forum visitors of 3DGPU or NVNews... just incomprehensible and hence leads to a "don't care" attutude.

I will call you on this quote: from now on, please refrain from using games in your reviews... something I followed from your old VoodooExtreme articles. Your words, not mine.
The "silly" part of this equation is that another IHV besides Nvidia developed a better shader version (PS 1.4) and hardware tessellation, and nobody batted an eyelash; Nvidia releases some PR about their next GPU and everyone thinks it's the second coming of Christ... how hypocritical. Very.

You are correct with one statement, though... whoever scores better in three-year-old Quake 3 will win :rolleyes:

Scott, I think you completely missed Rev's point. This forum is about much more than gaming; therefore, dismissing a rendering technology just because it does not necessarily benefit gaming (as some have done) is silly.
 
Games are 90% of the reason why we are all here. PC gaming is what I thought Beyond3D is about, and was about years ago... with Dave Barren.

PC games used to drive technology forward; now it's the other way around. Comments along the lines of "people think it's silly because they don't understand it" are all I need to read.
You don't need to be a rocket scientist to understand this:
The DX9 specification is 96 instructions...

928 extra instructions above DX9 -----> Nvidia Cg -----> NV30
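
Presumably that 928 is simple subtraction, using the CineFX paper's 1024-instruction static limit:

    1024 (NV30 static instructions) - 96 (DX9 PS 2.0 baseline) = 928 extra instructions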

If I took Reverend's post out of context, then I apologize, but that is how I read it.
 
Follow the damn industry specs or get off the damn truck.

That's a pretty myopic view, especially when the industry spec is closely aligned with a particular architecture's peculiarities.

Hardware is being designed long before industry specs are proposed, let alone finalized. If R300 and NV30 entered the design process after Microsoft decided what features it would allow GPU manufacturers to include, you wouldn't see any DX9 hardware for another 18 months.

Of course, nobody is saying you can't write Taylor series for your SIN and COS needs, ignore nested subroutines, branch only on constants, and limit your dependent texture fetches to a maximum depth of 4 on next-gen NVIDIA hardware.
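
For anyone who hasn't hand-rolled one, a minimal C sketch of what "writing a Taylor series for your SIN needs" looks like on hardware with no native SIN instruction (hypothetical illustration; assumes the input is already range-reduced to roughly [-pi, pi]):

    #include <stdio.h>

    /* sin(x) ~ x - x^3/3! + x^5/5! - x^7/7!, evaluated in Horner form,
       which is how a shader without a SIN instruction would grind it out. */
    static float taylor_sin(float x)
    {
        float x2 = x * x;
        return x * (1.0f - x2 * (1.0f/6.0f
                        - x2 * (1.0f/120.0f
                        - x2 * (1.0f/5040.0f))));
    }

    int main(void)
    {
        /* taylor_sin(1.0) ~ 0.841468 versus the true sin(1) ~ 0.841471 */
        printf("%f\n", taylor_sin(1.0f));
        return 0;
    }

Four terms of straight-line math where native hardware would use one instruction; that's the cost of coding strictly to the baseline.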
 
DaveBaumann said:
Don't do a Teasy and bet body parts on that. Last I heard, DX9 has been delayed again. Now think about why...

This is just what I have been wondering ever since the first DX9 beta slipped this winter.

This is what I think happened during DX9 development:

1) Microsoft gets upset with nVidia over DX8 IP issues in regard to the Xbox (was it the NV pixel shader implementation? I can't remember...). Microsoft also feels that nVidia charges too much for the Xbox chip.

2) Microsoft decides to teach nVidia a lesson about who can strongarm whom: so ATI's R300 gets to be the development target/platform for DX9.

3) Microsoft and nVidia get on better terms. Somewhere down the line nVidia comes up with Cg; then nVidia and Microsoft decide to work out the HLSL together.

4) Microsoft is worried about the potential success of OpenGL 2.0, while nVidia has to watch 3Dlabs (and, to a certain extent, ATI) shape the OpenGL 2.0 spec. Microsoft and nVidia again find themselves in bed together.

5) Microsoft can see the possibilities in the prospect of NV3x + DX9 taking the film industry by storm down the line (and countering OpenGL 2.0), so they decide to include more of the NV3x features in DX9 (although they stick with their minimum specs).

6) DX9 is delayed until November.

Voila! :eek:

Edit: I don't know if #4 should come before #3... :p
 