NVIDIA CineFX Architecture (NV30)

*Ben watches a productive discussion on new pixel shader instructions and a new architecture turn into an NVIDIA versus ATI fight and sighs heavily*

Going back to lurk mode later
 
Doomtrooper said:
Nahhh, not really. I have friends and contacts that work at ATI and have a preference... I make NO BONES ABOUT IT.
We know that and appreciate it... :)

Doomtrooper said:
What I don't like is the thinking that certain individuals on this board seem to feel is also appropriate, to claim a winner off some paper specs without even looking at hard data. Not to mention you won't be able to buy this card for another 5-6 months.
The same could've been said about advocating R300 based on leaked specs and rumors 3-4 months ago. What's the difference? Matter of perspective ...

Doomtrooper said:
I'm also disappointed in going outside DX9 compliance and going with their own standard. If you think about it... what would be the reason, unless Cg will be what Nvidia is hoping for to expose this... as DX9 certainly won't.
DX9 isn't even out yet, and we know nothing about the possibility of a DX9.1 or anything of the like. Sorry, but to me this looks as if it could be DX8.1 all over again with the roles reversed! Yet I didn't see you faulting ATI for improving upon the DX8.0 specs and introducing PS1.4, so why is it suddenly so bad to improve over existing specs, merely because the standard definition comes from ATI this time, not from Nvidia? All of this is speculation; you're trying to paint a negative picture [about Cg and NV30 too] based on rumors and unfounded expectations. Try to listen to your own advice for once and don't apply double-standard thinking all the time...


Doomtrooper said:
I simply state facts: the R300 is fast, you can pre-order now, it will be a great card to own, and it is certainly not vaporware.
Like I said in my previous post, I also find the sudden NV30 hyping distracting, even annoying, plus I think the R300 is a great chip and a marvelous product to buy once it hits the shelves (pre-ordering doesn't mean jack to me; I can also pre-order DVDs that won't be released till Christmas!), which will most likely be a good 3 months before NV30 (and it will remain a good purchase well beyond that time). But honestly, do you expect things to be that much different for NV30 3 months from now? Do you really think NV30 is not gonna be fast or well featured, or do you even doubt that it's going to hit stores at all? If you want to use only the "here and now" as the basis for arguing because it suits your view of the topic best, then you can't even include the R300 in all your speculations, because not even the press has their hands on real review samples, so it hardly qualifies as a real product yet, pre-order or not.
 
*Sascha hands Ben some pop-corn*

OK, I'm off this thread. Sensible discussion seems to be impossible.

later,
.rb
 
alexsok said:
How can you say that it lost when it's MUCH more programmable than R300?!

Because nobody will use this programmability until the majority of cards offer it as well. Right now, the DX7 technology level is the "gold standard" to write games for. In 2003 it will be DX8.0 shaders; maybe in 2004 games will build on a DX9 (R300) feature set.

Besides, I bet NV30 will have higher clock speeds anyway (since it's built on a 0.13µm process), which alone makes it a winner!

The Pentium 4 was clocked higher than the Athlon XP, but it was not faster. Further, it still isn't certain that the NV30 will have a 256-bit memory interface.

Nevertheless, I personally think NV30 WILL BE faster, but coming half a year later, it had better be.
 
I actually knew quite a bit about the R300 but of course could not say much. Mufu, on the other hand, did an excellent job from just looking at engineering samples (I must say Mufu knows his $hit :) )...

All I'm saying is I'm disappointed at the reaction here. ATI releases a card 2-2.5 times faster than a Ti4600, not to mention all the other advanced things it can do (some of which people have not even seen yet)... and I expected a lot more reaction... I guess I expected too much from the community (plus the Barrysworld-type posts are most aggravating)...

Oh well.. :-?
 
I find it interesting that people are already claiming NV30 a dead product even before any official announcement has been made. Basing conclusions on data that isn't officially confirmed as NV30-related is premature. People have their opinions, and fanboys will be fanboys. Keep in mind, NV30 is (per the tentative specs) a 120-million-transistor part at 0.13 microns. From a technology standpoint, API aside, there is something to be said about that many transistors and that die size (i.e., clock, thermal output, DDR RAM specs). Until hard benchmarks are out there, to say ATI is ahead of NVIDIA is, again, premature. And then there's the image-quality-versus-performance argument, which isn't even being brought up here. With even less data on that aspect, I feel no one here is an authority to say which company has "won" the "race". People will have opinions. People will base those opinions without fully knowing the other side. My take: wait until we know all the facts.

Joe G.
(www.pcrave.com)
 
Mephisto said:
NV30 lost.

Now this is getting ridiculous. NV30 is not yet released, yet somehow it has lost the race. Can we please wait to see performance numbers before we pass such an ultimate judgement?

E.g. if NV30 does 4X FSAA for free and is otherwise similar in performance to R300, then will it have lost?

Mephisto said:
NVIDIA added complexity to the part that nobody except Maya/3DStudio/SoftImage users will benefit from. For 3D games, CineFX with its instruction-count overkill is basically useless during the lifetime of this part and its refresh.

And I really can't believe some of you are actually criticizing Nvidia for adding features. So shortsighted! Yes, it is true that developers will not widely use new features until several years after they are introduced. So would you have Nvidia introduce these new features a year from now? Two years from now? Then we would have to wait one or two MORE years for the features to be widely used by developers. Doh! We should have nothing but praise when a chip company raises the bar. It forces other companies to respond accordingly. It accelerates the adoption of new features and technologies. It's called progress. It's a good thing. Don't knock it.
 
Doomtrooper said:
Gollum, the GeForce 3 Ti was released in Oct 2001, then the GeForce 4 was released in Q1 2002. One year will end in Oct 2002.

I am well aware of how time works, Doomtrooper. You said "this year", which to me means 2002. Even assuming you didn't mean "this year" but "one year", I fail to see the point. Nvidia has a 6-month product cycle; they've been releasing 3 products within one year (product 1 -> 6 months -> product 2 -> 6 months -> product 3) for what, 4-5 years now? I fail to see what the great surprise is supposed to be...
 
SteveG said:
E.g. if NV30 does 4X FSAA for free and is otherwise similar in performance to R300, then will it have lost?

Yes, because it came half a year late due to its higher and useless complexity, which required a process not available right now. R200 also lost to NV20, although it had more features. But it was later to market and not much faster.

And I really can't believe some of you are actually criticizing Nvidia for adding features

Adding features is OK with me, but NV30 has too many features to be released on time. Feature creep.

Because NVIDIA added so many features to the part, we have to wait to get our hands on innovations like faster and better FSAA/filtering.
 
ATI raised the bar with higher precision, improved pixel shader support, HydraVision, TruForm... did ATI get any praise for such things? Nope. They lost in Quake 3 by 15 fps; that's all that seems to matter.

I have no problem with raising the bar within DX9 compliance: more support is fine. But having specific features outside DX9 only means proprietary... and if people think proprietary is good for the PC industry, then they certainly don't like competition in the marketplace. Proprietary hardware/software is a sign of a company trying to OWN their market segment; this particular market segment is PC gaming, and I'm sorry, I don't want to go to my favorite gaming store, pick up a game, and see "Nvidia Eclipse optimized: advanced effects like no other"... I'd much rather see "DX9 3D accelerator required" ;)
 
Another point: if NVIDIA decides to exceed the specifications of DX9, more power to them. This isn't the first time NVIDIA has done that. This isn't the first time any graphics company has done that. From a business perspective, it is better to outdo your competitor's product. Makes sense, no? Is it fair? No, but this is business, and as long as you meet the DX9 standard, that's all that matters. And who is to say that the extended functionality won't be in OpenGL? The beauty there is that programmers can take specific code paths to take advantage of feature X. This is at the programmer's discretion, and it's theirs to take. NVIDIA is putting it out there as an option. Matrox did that with EMBM, and now it's in DX.

My point is: is it bad to overshoot what Microsoft sets as the standard? Does Microsoft dictate what NVIDIA does? More likely, NVIDIA suggests features to Microsoft and they meet on common ground.
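To make the "specific code paths" idea concrete, here is a minimal sketch of the classic GL 1.x idiom for picking a vendor path at runtime. The extension name GL_NV_fragment_program is an assumption about what an NV30-class driver might expose, and the function names are hypothetical; it assumes a current GL context and omits error handling:

```c
#include <stdio.h>
#include <string.h>
#include <GL/gl.h>

/* Substring search of the driver's extension string -- the usual
   (if slightly sloppy) way GL 1.x apps detect vendor extensions. */
static int has_extension(const char *name)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext != NULL && strstr(ext, name) != NULL;
}

void select_fragment_path(void)
{
    /* GL_NV_fragment_program: assumed NV30-specific extension.
       The fallback is whatever generic path the app ships. */
    if (has_extension("GL_NV_fragment_program")) {
        printf("Taking the NVIDIA-specific fragment path\n");
        /* ...bind the NV-specific fragment program here... */
    } else {
        printf("Taking the generic fragment path\n");
        /* ...bind the baseline program here... */
    }
}
```

The point being: the extended functionality sits behind an optional branch, so code that ignores it still runs everywhere.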
 
Ascended Saiyan said:
Maybe we won't have to worry about banding any more in any games, past or present. :)
I am not so sure about this. The textures for all your past games are saved in 16 or 32 bits, so I would think that unless the company releases some 128-bit textures for the game, there will be at least some banding in the textures.

As for in-game calculations (lighting and particles), it depends on the word size ATI's registers use. If they use 32-bit words, then no, they probably wouldn't use more registers than they have to. But if they are 128-bit words, then all 32-bit data sent to the GPU is stored as 128 bits and then truncated back down to 32 bits for the game to use.
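As an aside on why limited precision shows up as banding at all, here is a minimal sketch (plain C, made-up ramp values): a slow, dark gradient quantized to an 8-bit channel collapses many distinct ideal intensities onto the same stored level, which on screen reads as discrete bands, whereas a higher-precision pipeline keeps them distinct until final output:

```c
#include <stdio.h>

/* Quantize a slow dark intensity ramp to an 8-bit channel (256
   levels). Neighbouring ideal values collapse onto the same stored
   level -- the visible "steps" we call banding. */
int main(void)
{
    const int levels = 256;                 /* 8-bit channel */
    for (int i = 0; i <= 10; i++) {
        double ideal = 0.001 * i;           /* ramp: 0.000 .. 0.010 */
        int stored = (int)(ideal * (levels - 1) + 0.5);
        printf("ideal %.4f -> level %3d (reads back as %.4f)\n",
               ideal, stored, (double)stored / (levels - 1));
    }
    return 0;
}
```

Running it shows several distinct ideal intensities landing on the same 8-bit level, which is exactly the step pattern the eye picks out in dark gradients.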
 
Ummm...

So yes, this does appear (as someone else mentioned) to be the R-200 vs. NV-20 architectures all over again... except the exact opposite in terms of market position and parts:

1) ATI has the DX9 part out first.
2) nVidia may have a more "flexible / advanced" part.

Now, unless Microsoft comes up with a DX9.1 to include the apparent advanced flexibility of NV30, the extra features won't mean much at all for consumer gamers. And even if DX9.1 does happen, nVidia is likely to have the same "problem" ATI did with R-200: a general lack of support for their more "advanced" shaders.

So, the bottom line is this: The feature sets (in terms of vertex / pixel shaders) are close enough such that they don't make any difference.

The "winner" (best product) will be the one that puts out the best performance. More flexible / advanced does not mean better performing. (Or price / performance depending on your persepective). So, saying one way or another that "R-300 is better" or "NV30" is better is pointless....we don't even have any indication of the clock speed, pipeline config, AA implementation, or memory interface of the NV30. All we know for sure is that R-300 will be the best until NV30 arrives.

Perhaps once the actual "specs" (clock speed, memory interface, etc.) of the NV30 become public, speculating on actual performance wouldn't be completely fruitless.

It is rather amusing watching the two camps completely flip-flop on their stances with respect to more "advanced" pixel shaders...
 
Gollum said:
Doomtrooper said:
Gollum, the GeForce 3 Ti was released in Oct 2001, then the GeForce 4 was released in Q1 2002. One year will end in Oct 2002.

I am well aware of how time works, Doomtrooper. You said "this year", which to me means 2002. Even assuming you didn't mean "this year" but "one year", I fail to see the point. Nvidia has a 6-month product cycle; they've been releasing 3 products within one year (product 1 -> 6 months -> product 2 -> 6 months -> product 3) for what, 4-5 years now? I fail to see what the great surprise is supposed to be...

That's nice, but as ELSA and OCZ's quote have already shown...

"We have chosen to produce ATi Radeon 8500 video cards as ATi's shelf life of the 8500 far exceeds what NVIDIA has to offer.", said Dan Solomon Jr., VP of Product Development at OCZ Technology Group.
"By releasing our OCZ 8500 Nitro now and our OCZ 8500 Nitro SE soon to follow, we are able to provide a superior product with a longer shelf life then products available from other manufacturers".

6-month product cycles are not cost-effective for board manufacturers.
 
?

The NV30 supports a maximum of 64 loops to do up to 65536 VS2 instructions. Are the 64 loops a DX9 limit? And how many loops does the R300 support?
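(Working out the quoted figures, and assuming the two numbers combine in the most straightforward way: 65536 executed instructions / 64 loop iterations = 1024 static instructions per loop body. Whether the paper actually splits its limits that way is an assumption; it isn't stated here.)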
 
I agree 100% with Joe Defuria.

I was trying to show that price/performance (or performance) is what will be really important. IMHO, the only way 1000+ instruction shaders will be interesting is with a large (100-chip) multichip solution.

But this has really become a typical Nvidia vs. ATI thread, and I am off :(

*I will get a pepperoni pizza and some draft Miller :)*
 
For my ten cents' worth, I must say that DT has a point about proprietary standards. But I also believe that in a market moving as fast as the GFX market, for true evolution to occur you have to have people who are willing to exceed the current standards; that's how things get driven forward. Otherwise you'd end up with evolution by committee, and that would take forever to get anywhere.

ATi exceeded the DX8 spec and in doing so introduced developers to a new level of shader development: higher precision and DX9-style shaders, though obviously not as powerful. It's just sad that ATi chose to do this so close to the DX8->DX9 transition that it's going to get overshadowed until DX8.1 reaches the mainstream (value) consumer segment.

The other thing is that nVidia are committed to bringing full-on cinematic real-time effects to the desktop. If they can finally pull off the Final Fantasy demo at 25fps with the same level of quality as we saw in the film, then I for one am going to be glad that such a high level has been reached. Somebody has to force the marketplace forward, and they'll never do that by sticking to a bunch of specs that get updated every eighteen months or so.
 
I think it was a mistake for ATi to announce that they might be producing a 0.13µm version of the R300 so soon after the initial launch. I mean, why bother buying one now when you know it's going to be superseded by a fresher revision in 3-4 months' time? Especially as DX9 isn't even available yet, so you won't even be able to show it off properly!!

They should have kept the 0.13µm option quiet, let the early purchasers blow $400 on ironing out the bugs in the hardware and drivers, get the corp some cash back, and then release the revision without warning.
 
True, but you can't really expect NVIDIA to lose to ATI in terms of performance, right?

Why not? Until now, we "couldn't really expect" ATI to get their part out sooner than nVidia, and look what happened.

I base my opinion on the paper, since the specs released there are mind-blowing!

Just look at the specs god dammit and compare them to R300!

1) I don't see any PRODUCT specs, just a tech paper about some architecture highlights.

2) Nothing about the architecture is mind-blowing. It's an evolution of the DX9 / R-300 pipeline, much like DX8.1 is an evolution of DX8.0. Clearly, the flexibility of the CineFX engine is greater than R-300's.

However, nothing in those specs will make any difference in terms of the end-user experience (as far as gamers go). I'm sure you'll hear many developer comments about how the NV30 pipeline is superior to R-300's (just as they said the R-200 pipeline was superior to NV20's). But what will matter in the end is how that additional flexibility translates into better performance with DX9 and OpenGL games over the life-span of the card. I don't see how it would.

This isn't to say that the NV30 won't be a better-performing card anyway... it very well might be... it might be double R-300 performance for all we know. THAT would be exciting to me. But we need to know more physical aspects of the chip and memory architecture to talk about that...
 
Mephisto said:
SteveG said:
E.g. if NV30 does 4X FSAA for free and is otherwise similar in performance to R300, then will it have lost?

Yes, because it came half a year late due to its higher and useless complexity, which required a process not available right now. R200 also lost to NV20, although it had more features. But it was later to market and not much faster.

I know that both ATI fans and Nvidia fans like to spin the facts to their advantage. But please explain: where are you guys pulling this "half a year late" BS from? Let's examine the facts, please:

- Radeon 9700 will be available mid-August at the earliest

- Nvidia continues to state to investors (under threat of SEC fines and/or investor lawsuits if they willingly present misleading information) that NV30 is their "fall part". So our best information is that we will see NV30 by September at the earliest, mid-December at the latest. Most well-informed speculation (Anand, others) is that it will be November.

- Therefore, right now our best info tells us that NV30 will trail R300 by 4 months at the most, and more likely 3 months. Not "half a year." In the end, it could be much less (if ATI experiences problems with production ramp-up) or more (if Nvidia experiences problems with production ramp-up). But according to the best information we have right now (unless you have some insider info, and Nvidia is risking lawsuits by outright lying to investors), 4 months late is the worst-case scenario for Nvidia.
 