NVIDIA CineFX Architecture (NV30)

I still think Nvidia took a good approach with the GF4 by implementing a layer of supersampling over their multisampling in high-quality modes.

If they do indeed put a high-performance edge-detection algorithm in their next approach, I wonder if they could also do a quick supersample on top of it for alpha problems, and for certain boundary conditions where depth SS would be important for quality.

As far as AF goes, when are we going to get a better filter kernel, say Gaussian or B-spline?
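
For anyone who hasn't seen the difference: bilinear amounts to a tent filter, while a Gaussian kernel weights each tap smoothly by its distance from the sample point. A quick C sketch of normalized Gaussian weights over a hypothetical 4-tap footprint (the sigma and tap spacing are arbitrary, purely for illustration):

/* Sketch: Gaussian filter weights for a 4-tap 1-D footprint.
   Compile with -lm. Sigma and tap positions are illustrative. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double sigma = 1.0, sum = 0.0, w[4];
    /* taps at distances -1.5, -0.5, +0.5, +1.5 from the sample point */
    for (int i = 0; i < 4; i++) {
        double d = i - 1.5;
        w[i] = exp(-(d * d) / (2.0 * sigma * sigma));
        sum += w[i];
    }
    for (int i = 0; i < 4; i++)
        printf("tap %d: %.4f\n", i, w[i] / sum); /* normalized weight */
    return 0;
}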
 
Well, I still don't think any supersampling is good for real-time rendering, though it may have its advantages for offline rendering.
 
For all those asking about differences between R300 and NV30 pipelines, the SIGGRAPH course notes on hardware shading provide about all the information you'd want (available at www.ati.com/developer and http://developer.nvidia.com/view.asp?IO=IO_20020721_4874).

R300 pixel shader:
inputs: 16 textures, 32 constants, 8 tex coords, 2 vertex colors
12 temporary 128-bit registers
64 ALU instructions, 32 texture fetches, up to 4 dependent reads
ALU instructions are ADD, MOV, MUL, MAD, DP3/DP4, FRAC, RCP, RSQ, EXP, LOG, and CMP
Texture instructions are texld (load), texldp (proj. load), and texldb (biased load)
fragment kill
Outputs to up to 4 buffers
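
Since "dependent reads" seems to trip people up every time: a dependent read is just a texture fetch whose coordinates come from the result of a previous fetch. A rough C sketch of a 2-deep chain (tex_fetch() is my own stand-in for the hardware sampler, not anything from the notes):

/* Dependent texture read chain in C terms. R300 allows chains
   up to 4 levels deep. Assumes a square width x width texture. */
typedef struct { float x, y; } vec2;
typedef struct { float r, g, b, a; } vec4;

/* hypothetical sampler: nearest texel at (u, v), with wrapping */
static vec4 tex_fetch(const vec4 *tex, int width, float u, float v)
{
    int ix = (int)(u * (float)(width - 1)) % width;
    int iy = (int)(v * (float)(width - 1)) % width;
    if (ix < 0) ix += width;
    if (iy < 0) iy += width;
    return tex[iy * width + ix];
}

vec4 shade(const vec4 *offset_map, const vec4 *color_map,
           int width, vec2 tc)
{
    /* level 1: ordinary fetch with interpolated coordinates */
    vec4 offs = tex_fetch(offset_map, width, tc.x, tc.y);

    /* level 2: dependent fetch -- coordinates come from the
       previous result (this is how EMBM-style effects work) */
    return tex_fetch(color_map, width, tc.x + offs.r, tc.y + offs.g);
}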

NV30 pixel shader:
inputs: 16 textures, 512 constants, 8 tex coords, 2 vertex colors, screen pos
32 temporary 128-bit registers
any combination of up to 1024 ALU or texture instructions
ALU instructions are R300 set + SIN, COS, DDX, DDY, MIN, MAX, condition codes (SLT, SGT, etc.), and pack/unpack
Texture instructions are TEX (load), TXP (proj. load), and TXD (derivative-based load)
fragment kill
Outputs to Z or color buffers, pack/unpack instructions allow 4 32-bit buffers (or 2 64-bit) to be stored in a 128-bit RGBA buffer
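
If you're wondering what pack/unpack actually buys you, it's plain old bit packing: several narrow values sharing one wide register or buffer, which is how four 32-bit outputs can live in one 128-bit RGBA target. A C analogy (the function names are mine, not the NV30 opcodes):

/* C analogy for pack/unpack: two 16-bit values in one 32-bit word. */
#include <stdint.h>
#include <stdio.h>

static uint32_t pack2x16(uint16_t lo, uint16_t hi)
{
    return (uint32_t)lo | ((uint32_t)hi << 16);
}

static void unpack2x16(uint32_t packed, uint16_t *lo, uint16_t *hi)
{
    *lo = (uint16_t)(packed & 0xFFFFu);
    *hi = (uint16_t)(packed >> 16);
}

int main(void)
{
    uint16_t a, b;
    uint32_t p = pack2x16(0x1234, 0xABCD);
    unpack2x16(p, &a, &b);
    printf("%04X %04X\n", a, b); /* prints 1234 ABCD */
    return 0;
}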

R300 vertex shader:
256 static instructions, up to 1024 executed instructions
256 constants
1 address vector
12 temporary vectors
ALU includes ADD, DP3/DP4, FRAC, LOG, MAD, MAX, MIN, MOV, MUL, RCP, RSQ, SGE/SLT
constant-based loops and branching

NV30 vertex shader:
256 static instructions, up to 65,536 executed instructions
256 constants
2 address vectors
16 temporary vectors
up to 6 clip-planes output
ALU is R300 + SIN, COS
data-dependent loops and branching, subroutines nested up to 4 deep
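
The looping difference is probably the biggest item in these two lists. In C terms, the R300's constant-based loops fix the trip count from a constant register before the draw call, while the NV30 can branch on per-vertex data. A rough sketch (illustrative names, obviously not real shader code):

/* Constant-based vs. data-dependent looping, in C terms. */
typedef struct { float bone_count; /* ... */ } vertex_t;

/* R300 style: the trip count comes from a constant register, so
   every vertex in the draw call runs the same number of iterations */
void skin_r300(const float *constants, vertex_t v)
{
    (void)v; /* the loop count can't depend on the vertex */
    int n = (int)constants[0];
    for (int i = 0; i < n; i++) {
        /* ... blend bone matrix i ... */
    }
}

/* NV30 style: the trip count can depend on the vertex itself, so
   lightly skinned vertices can exit early */
void skin_nv30(vertex_t v)
{
    int n = (int)v.bone_count; /* data-dependent */
    for (int i = 0; i < n; i++) {
        /* ... blend bone matrix i ... */
    }
}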
 
Well, it should be pretty clear that the NV30 is vastly superior for at least some circumstances, then.

Whether those circumstances will be seen in games anytime soon is another story. I'd like to see some developer comments on the differences in the limitations of the two architectures.

Oh, and the nVidia CineFX doc did state pretty clearly that CineFX fully supports 1024 static instructions, with 64 loops accounting for the 65,536 number (1024 × 64 = 65,536).
 
Chalnoth said:
Well, it should be pretty clear that the NV30 is vastly superior for at least some circumstances, then.
did you ever think that the reason some people think you are such a <bleep> is your overuse of "miracle"-type terms and hype such as "vastly superior", etc.?
 
Althornin said:
Chalnoth said:
Well, it should be pretty clear that the NV30 is vastly superior for at least some circumstances, then.
did you ever think that the reason some people think you are such a <bleep> is your overuse of "miracle"-type terms and hype such as "vastly superior", etc.?

Well, numbers like 512 vs. 32 look like "vastly superior" to me.

Whether or not this manifests itself in games is another story...
 
Oh, and the nVidia CineFX doc did state pretty clearly that CineFX fully supports 1024 static instructions, with 64 loops accounting for the 65,536 number (1024 × 64 = 65,536).

Yep, there's a disparity.

According to the ATi docs, they also have SIN and COS functions.

Hmmm... they're not listed in the SIGGRAPH notes... Are they implemented as ALU instructions, or as a macro-expanded Taylor series?
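
If it is a macro expansion, the usual trick is a truncated Taylor series after range reduction, with each term costing a MUL/MAD. Something like this in C (my own sketch; the coefficients are just the standard 1/n! factors, which a driver would keep in constant registers):

/* Truncated Taylor series for sin(x), four terms in Horner form:
   x * (1 - x^2/3! + x^4/5! - x^6/7!). Good near |x| <= pi/2;
   a real expansion would range-reduce the angle first. */
#include <stdio.h>

static float taylor_sin(float x)
{
    float x2 = x * x;
    return x * (1.0f + x2 * (-1.0f / 6.0f
                + x2 * (1.0f / 120.0f
                + x2 * (-1.0f / 5040.0f))));
}

int main(void)
{
    printf("%f\n", taylor_sin(1.0f)); /* ~0.841468 vs. sin(1) = 0.841471 */
    return 0;
}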
 
NV30's success will depend on pricing and on how its IQ stacks up against ATI's offering. Unfortunately, I suspect the former will be up in the $400-500 range, judging by the level of advancement this thing looks to have.

Performance is pretty much a moot point at this level of the game, since once again hardware is surpassing current and even future games by nearly a generation and a half. (Witness the R300's dismantling of UT2003 on immature drivers at high res with the eye candy on.)

While it might be a nice developer research vessel, consumers are once again dissed by the lack of high-tier PC developers.

Here's to hoping Microsoft hurries up with their AAA DX8 Xbox ports for at least some semblance of advancement outside of the Carmack niche.

We need a value DX8 part really badly! ;x
 
Chalnoth said:
Well, numbers like 512 vs. 32 look like "vastly superior" to me.

Whether or not this manifests itself in games is another story...

The issue is, you only apply this kind of hyperbole to one IHV.
Look: right now you are hyping nVidia's part (NV30) for going above and beyond ATI's (9700).
Last week you were dissing ATI's part (8500) for going above and beyond nVidia's (4600).
Now which is it?
You can't have it both ways. Is advancement good, regardless of whether games use it? Because that's not what you said about the 8500...
I personally think, from the info here, that NV30 will be more advanced, sure. Will it show in games? I dunno.
But your flip-flopping is extremely irritating. If you could just admit to having some bias, it would be nice, but you deny that. I'm kinda tired of it, frankly.
 
Sure, I have lots of bias. Anybody who says they don't is a liar. I have never denied having bias, either.

Last week you were dissing ATI's part (8500) for going above and beyond nVidia's (4600).

Now here I think you're showing your bias. I never dissed the 8500 for going "above and beyond" the Ti 4600. You, along with a few other people here, always seem to twist my words into something I never meant them to be.

You can't have it both ways. Is advancement good, regardless of whether games use it? Because that's not what you said about the 8500...

Let's just say that I've chosen my point of view, and there are many arguments you can lay down in favor of or against any technology. I'll leave it up to you people who like ATI to put up the counter-arguments (I'd rather not argue against myself here... could get confusing :p )
 
I think that this time it's important to remember there are two more DX9 cards coming early next year: the next Xabre card and the S3 Columbia.

This clearly changes the face of the market a bit, as developers will have to make more careful decisions before they jump on one company's boat. It is likely that a more practical, attainable DX9 feature set like the one ATi has taken will be embraced by all.

Just as the 1.1/1.3 PS approach was taken by most developers, they may also code for the common DX9 denominator: what the DX9 spec lists as *look, these are the instructions and this is DX9*. Clearly Nvidia has overstepped the boundaries of what 98% of developers will be willing to do. However, it is also clear that the next couple of generational increments will trend more towards the open-ended nature of the NV30.

ATi's COO did say he was more excited about the R400. It seems very likely that the R300 was looked at as an intermediary product that delivered top-notch FSAA and aniso. The R400 will in all probability have even more complexity and open-endedness than the NV30/35, while perhaps delivering the performance to actually use it.
 
I thought this year was supposed to be the debut of their DirectX 9 parts; guess not. Surely next year will be interesting if you include the following contestants: S3's Columbia, Matrox's updated Parhelia, Nvidia's NV30, and the updated R300 featuring a die shrink and DDR-II.


BTW, is it just me, or did anyone else notice that with every new core from ATI we see double the pipelines (R100: 2, R200: 4, R300: 8)? I wonder if they'll go with 16 pipelines in their R400 cores. :eek:
 
I'm not sure why everyone is getting so excited about these "features". The fact of the matter is, unless you run a render farm, this isn't going to affect you at all. Nvidia might sell a few cards to people wanting to render movies, but it's not really going to make one damn bit of difference in the gaming market.

As far as games go, this is just going to be another PS 1.4. What's going to sell the card is being faster than the R300, and nothing in this paper suggests anything to support it being faster. Come release, we'll see.

I don't mean to rain on anyone's parade, but it seems like a lot of people are blowing this silly paper way out of proportion.
 
SteveG said:
- Nvidia continues to state to investors (under threat of SEC fines and/or investor lawsuits if they willingly present misleading information) that NV30 is their "fall part". So our best information is that we will see NV30 by September at the earliest, mid-December at the latest. Most well-informed speculation (Anand, others) is that it will be November.

...and Nvidia is risking lawsuits by outright lying to investors...

Two words: "Safe Harbour".

If anyone doesn't know what I mean, then look it up in the context of publicly-traded companies. No offense intended, SteveG, but I find it amusing that otherwise seemingly well-informed individuals do not understand what "forward-looking statements" really are.

Forward-looking statements are nothing more than management's best guesses/predictions about the future. Nothing is engraved in stone until the final press release is issued.

Considering Safe Harbour, the SEC would only be concerned with "material misrepresentations". Minor delays (i.e., a few months relative to a forward-looking statement) would not likely be considered material misrepresentations.

Investor lawsuits are a different matter, but the vast majority of these are thrown out of court as frivolous. FYI, Safe Harbour is more or less the automatic defense against this type of lawsuit, although other defenses are also put forward as warranted by the situation.

As for NV30 being their fall part, that only means that they plan to introduce it in the fall (paper launch?) and does not guarantee that any significant volume shipments will accompany the introduction. Given the difficulties that TSMC is experiencing ramping their .13 micron process, I'm starting to have doubts as to when actual availability would occur.
 

I'm not sure why everyone is getting so excited about these "features". The fact of the matter is, unless you run a render farm, this isn't going to affect you at all. Nvidia might sell a few cards to people wanting to render movies, but it's not really going to make one damn bit of difference in the gaming market.


That's absolutely true, but I think the renderfarm business is exactly what nVidia is targeting here. I think that DemoCoder hit the nail on the head, given the degree to which they surpass the DX9 spec in terms of program length and the new high-precision operations, as well as their recent acquisition and marketing positioning (i.e., calling it "CineFX").

There's no question that discrete graphics chips are a shrinking business, and if these companies want to get to the next billion dollars they need to expand into new areas. ATI has a huge lead in mobile, set-top/video, and PDA/cellphone, whereas nVidia has high-end integrated and console. If accelerated rendering is indeed a $2B pie, then nVidia could take a substantial chunk out of it by having the fastest RenderMan accelerators.

As for the other DX9 chips yet to come out, I can't see either of them going beyond the minimum DX9 spec. Based on the design decisions for the Xabre, there is no way they are going to commit the extra gates.
 
I'm not sure why everyone is getting so excited about these "features". The fact of the matter is, unless you run a render farm, this isn't going to affect you at all. Nvidia might sell a few cards to people wanting to render movies, but it's not really going to make one damn bit of difference in the gaming market.

Exactly, but I'll carry it a step further. These DX9 features from both ATI and Nvidia are more proof of concept than anything. Nothing will take advantage of them to any significance until 2-3 years down the road, and by that time new cards will be out that make these cards look slow by comparison, and the cycle will reconstruct itself. Do your wallet a favor and stay a generation or two behind.

Incidentally, I don't see where Chalnoth has taken the steps to be considered offensive whereas someone like Doomtrooper hasn't...
Are these people for real, or some type of automated posting bots?
 