NVIDIA CineFX Architecture (NV30)

tEd said:

The NV30 supports a maximum of 64 loops to execute up to 65,536 VS 2.0 instructions. Are the 64 loops a DX9 limit? And how many loops does the R300 support?
I don't know the answer to your question, but I find 65536 interesting. Why do they need to set the limit at 16 bits of data? What is the significance of this number as a hardware limitation?
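
If I had to guess (and this is purely my own assumption, not anything from the white paper), the round numbers suggest the caps simply mirror the widths of internal hardware counters:

```python
# Purely speculative: both limits happen to be exact powers of two,
# which is what fixed-width hardware counters would give you.
max_executed_instructions = 2 ** 16   # 65536 -> a 16-bit "instructions executed" counter
max_loops = 2 ** 6                    # 64    -> a 6-bit loop counter

print(max_executed_instructions, max_loops)   # 65536 64
```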
 
I think it was a mistake by ATi to announce that they might be producing a .13u version of the R300 so soon after the initial launch.

I don't believe ATI ever said any such thing. (The "internet public" has been spreading that rumor around.) All that ATI has said publicly is that there will be a 9500 product by the end of the year, which we can expect to be a more mainstream-priced part based on the 9700.

The "internet" is setting up ATI for taking some heat...Mark my words...when NV30 is released, you'll see a whole bunch of "so where is the faster 0.13 R-300 that ATI said would be here when the NV30 comes out." Problem is, ATI never said it. The community "assumes" that ATI will come up with a 0.13 R-300 at some point (which seems logical enough), and the "ATI fanboys" then take that assumption and extrapolate it further to be "in time to compete with NV30".
 
Joe DeFuria said:
1) I don't see any PRODUCT specs, just a tech paper about some architecture highlights.

I would like to know more about those FP16 and FP32 modes, though. When they talk of performance, if they are talking about more than just memory performance, it could imply a performance hit when running in DX9's native PS 2.0 128-bit (FP32) format.
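
For scale, here's the simple arithmetic behind the two modes (my own numbers; the paper doesn't spell out where the performance difference would come from): a four-component value is exactly twice as large at FP32 as at FP16, so register and memory traffic double at full precision.

```python
# Per-pixel footprint of a 4-component (RGBA) value in each mode.
fp16_bits = 4 * 16   # 64-bit mode
fp32_bits = 4 * 32   # 128-bit mode, i.e. DX9's native PS 2.0 format

print(fp16_bits, fp32_bits, fp32_bits // fp16_bits)   # 64 128 2
```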
 
The difference is everything...

Right now we know that ATI's part is real and will be shipping shortly (unless ATI really, really goofs). Preliminary tests show that the existing drivers work pretty well so far (a first for ATI).

nVidia, on the other hand, is reduced to white papers and a product so far from shipping that no specifications for it exist. I think it's a fair bet to say that the chip hasn't even been successfully taped out yet.

R300 = shipping product

nV30 = white papers

Pretty big difference it seems to me.

Personally, the only thing that might keep me away from a 9700 is R300 driver support. If ATI can overcome that, it seems to me that nVidia will have a hard time catching them. In this market timing is everything--it is absolutely critical. If you don't believe me, just ask 3dfx what a difference it would have made if the V5 5.5K had shipped in December instead of the following July. I think it literally would have made the difference in the survival of the company.
 
Having read a few rather astonishingly distorted commentaries on the R300, I am struck by how the exact same slant can be applied to the NV30. In such cases, I think the comments are exactly as inappropriate for knocking the NV30 based on its imagined paper specs as they are for knocking the R300 based on these same imagined NV30 specs.

I don't see a problem with rampant speculation, but it seems unnecessary to use rampant speculation to passionately counter rampant speculation, at least if both are based on the same (lack of) information, as seems to be the case. I do think it is fair to criticize rampant speculation for consistency and logic, but the criticisms seem to be tending not to stop there, and to continue on to replace one set of speculation with another with an equal (lack of) basis.

EDIT:
BTW, if I were to take a stance, I'd be defending the NV30 against those who criticize its enhanced functionality just as much as I defended the R300 against the Barrysworld editor's mindset, as they seem to be pretty much the same problem to me. Yes, it is a double standard to apply criticism to PS 1.4 and not to the NV30's presumed enhancements, but it seems to me that its being wrong for PS 1.4, which I clearly think it was, makes it clear it is wrong for the NV30, especially when we don't even have solid specs yet. This leaves room for arguing about the relative usefulness of the enhanced functionality, certainly, but blanket criticism of it seems preposterous.
 
Damn,
JR, I did not get a chance to read all of that fanboy stuff; now I am gonna have to watch the soaps :)

J/k


Oh well. I think it's great that nV has more "features", but to me, until they are used in games, they are not really features at all. When I bought the 8500 I knew full well that its DX8.1 and PS 1.4 support probably would not see the light of day before I replaced it. Now I am looking to buy a new card. I can hope that these new features get used within the lifetime of the card when I get it, but based on past experience that has yet to happen, except for Truform. All the other "features" never made it before I replaced the card with something faster, better and with more "features". Again, kudos to nV for making a more flexible product. And yes, I realize some people hold on to their cards much longer, so their chances of those "features" being used are much greater than mine.
 
Re: The difference is everything...

alexsok said:
ATI could NEVER overcome NVIDIA!

What they have now is an advantage over NVIDIA, since they announced all these new technologies and incorporated them into the chip earlier than NVIDIA, but since today's release of the paper, they don't hold that advantage anymore (at least till the first NV30 boards ship; btw, the R300 boards haven't shipped yet either!).

P.S
Why were my posts in this thread removed?

I would ask you to read your post again, from beginning to end. Perhaps you really can't see why the very first sentence and last sentence seem odd to me, but I would urge you to reflect on it a bit if you have the time...
 
Re: The difference is everything...

alexsok said:
ATI could NEVER overcome NVIDIA!

P.S
Why were my posts in this thread removed?

Because, and let me be blunt, fanboys_are_not_welcome here!

Either change your posting style or please stop posting here. There are plenty of fansites where you'd be/feel more comfortable.
 
Doomtrooper said:
Thats nice as already proven by ELSA and OCZ's quote...

"We have chosen to produce ATi Radeon 8500 video cards as ATi's shelf life of the 8500 far exceeds what NVIDIA has to offer.", said Dan Solomon Jr., VP of Product Development at OCZ Technology Group.
"By releasing our OCZ 8500 Nitro now and our OCZ 8500 Nitro SE soon to follow, we are able to provide a superior product with a longer shelf life then products available from other manufacturers".

6-month product cycles are not cost effective for board manufacturers.
I can't believe I am actually replying again, but there you go:
Good quote, you obviously know your quotes, but what the heck does it have to do with the topic of this thread AT ALL? I don't see how or why you suddenly brought this up; this thread is about NV30/R300 specs, yet for some twisted reason you felt the need to bring up how Nvidia's product cycles are too fast for manufacturers? It seems more than just a little bit off topic, but maybe that's just me...

Either way, I stand by my first post: I find it disturbing that people are suddenly hyping the NV30 to be much better than the R300 based on some white paper about pixel- and vertex-shader technology. Could we please postpone judgement until we know a little more about it? Pretty much the only thing we can say now is that the NV30 seems to be slightly more advanced from a flexibility point of view. We know nothing for certain yet about performance, IQ, FSAA/aniso modes, bandwidth or any other specs, so it's a bit early to assume anything. E.g., if it ends up having a 128-bit memory bus it could very well end up being slower than the R300, especially with FSAA (MSAA takes almost no fillrate hit, but still requires a lot of bandwidth). Maybe SIGGRAPH means more leaks concerning NV30 specs; then we might have more facts to base speculation upon...
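
To put a rough number on that MSAA bandwidth point (a back-of-the-envelope sketch of mine; the resolution, frame rate and depth format are all assumed, not taken from any spec):

```python
# Rough framebuffer traffic for 4x multisampling. Assumed values:
# 1024x768, 32-bit color, 32-bit Z/stencil, 60 fps, one write per sample.
width, height, samples, fps = 1024, 768, 4, 60
bytes_per_sample = 4 + 4   # color + Z/stencil

bytes_per_second = width * height * samples * bytes_per_sample * fps
print(f"{bytes_per_second / 1e9:.1f} GB/s")   # ~1.5 GB/s, and that's before
# overdraw, Z reads, blending or any texturing at all
```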
 
Qroach said:
What I'm very interested in is nVidia acquiring the software company Exluna. They apparently own the Blue Moon Rendering Tools.

Exluna also owns Entropy, the #2 RenderMan renderer. It is serious competition for PRMan; Pixar is engaged in several lawsuits because its profits are endangered.

Beyond DX9... can you say hardware-accelerated RenderMan? For movie effects? Within a few years, the CGI industry can say goodbye to normal renderfarms? RenderMan on Xbox2?

I can't wait to see the reactions from the industry. This is mind-blowing.
 
DemoCoder said:
Maybe Geometry Displacement Mapping merely means that the displacement value (sampled from the displacement map) is available in a register in the vertex shader so that the vertex itself can be displaced?

Isn't this the case with the Matrox DM? Or do you only get the already-displaced vertex there?
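
To make DemoCoder's distinction concrete, here's a toy sketch (plain Python standing in for a vertex shader; the function and values are my own illustration, not NV30's actual instruction set). The point is that if the raw displacement scalar lands in a shader register, the shader itself can move the vertex:

```python
def displace_vertex(position, normal, d):
    """What DemoCoder speculates the NV30 allows: the sampled displacement
    value d is visible to the shader, which offsets the vertex itself.
    On a design that pre-displaces, the shader would only ever see the
    already-moved position."""
    return tuple(p + d * n for p, n in zip(position, normal))

# Illustrative values only.
print(displace_vertex((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.25))
# -> (1.0, 0.25, 0.0)
```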
 
Joe DeFuria said:
I think it was a mistake by ATi to announce that they might be producing a .13u version of the R300 so soon after the initial launch.

I don't believe ATI ever said any such thing. (The "internet public" has been spreading that rumor around.) All that ATI has said publicly is that there will be a 9500 product by the end of the year, which we can expect to be a more mainstream-priced part based on the 9700.

The "internet" is setting up ATI for taking some heat...Mark my words...when NV30 is released, you'll see a whole bunch of "so where is the faster 0.13 R-300 that ATI said would be here when the NV30 comes out." Problem is, ATI never said it. The community "assumes" that ATI will come up with a 0.13 R-300 at some point (which seems logical enough), and the "ATI fanboys" then take that assumption and extrapolate it further to be "in time to compete with NV30".

The Inquirer already posts rumours about the 9500 being a 0.13 chip.
I think ATI already has full plans for a 0.13 transition and will be able to launch as soon as the 0.13 process is ready. The problem here is that if the 9500 has the same features as the 9700, there will be no more need for the 9700.

I'll make a guess that the R9700 will have a short life, and that the R10000 and R9500 will be available around New Year.
 
Re: The difference is everything...

John Reynolds said:
Because, and let me be blunt, fanboys_are_not_welcome here!

Either change your posting style or please stop posting here. There are plenty of fansites where you'd be/feel more comfortable.
[cue Star Trek-type music] "Fanboydom, the final frontier. These are the voyages of Beyond3D. Its mission: to explore strange new graphics architectures, and to seek out unbiased readers. To go where no website has gone before." [/end Star Trek-type music] :eek:


I am not so sure about this. The textures for all your past games are saved at 16 or 32 bits, so I would think that unless the company releases some 128-bit textures for the game, there will be at least some banding with the textures.

As for in-game calculations (lighting and particles), it depends on the word size ATI's registers use. If they use 32-bit words, then no, they probably wouldn't use more registers than they have to. But if they are 128-bit words, then all 32-bit data sent to the GPU is stored as 128 bits and then truncated back down to 32 bits for the game to use.

What I meant was that since the pipelines are now floating point, there is a much smaller chance of banding occurring compared to current integer-based pipelines.
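
A toy illustration of why an integer pipeline bands where a float pipeline doesn't (my own example with made-up values): the 8-bit pipe rounds after every operation, while the float pipe only rounds once at the end.

```python
def quantize_8bit(x):
    """Round a [0, 1] value to the nearest 8-bit step, as an integer
    pipeline effectively does at every intermediate stage."""
    return round(x * 255) / 255

color = 0.5
int_pipe = color
for _ in range(3):                    # darken by 10%, three times
    int_pipe = quantize_8bit(int_pipe * 0.9)

float_pipe = quantize_8bit(color * 0.9 ** 3)   # round once, at scan-out

print(int_pipe, float_pipe)   # the two drift apart; that drift is banding
```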
 
Let me summarize the differences between NV30 and R300 as I understand them:

R300 vs. NV30
- pixel shader instruction count
160 vs. 1024
- Displacement Mapping
DX9 vs. DX9 + another one
- Framebuffer formats
40bit Integer vs. 64/128 bit FP
- manufacturing process
0.15u vs. 0.13u
- availability
August 2002 vs. who knows

What I'm not sure is:

- Can the R300 vertex shader perform data-dependent branches like the NV30?


Did I miss something?
 
Re: The difference is everything...

demalion said:
alexsok said:
ATI could NEVER overcome NVIDIA!

What they have now is an advantage over NVIDIA, since they announced all these new technologies and incorporated them into the chip earlier than NVIDIA, but since today's release of the paper, they don't hold that advantage anymore (at least till the first NV30 boards ship; btw, the R300 boards haven't shipped yet either!).

P.S
Why were my posts in this thread removed?

I would ask you to read your post again, from beginning to end. Perhaps you really can't see why the very first sentence and last sentence seem odd to me, but I would urge you to reflect on it a bit if you have the time...

I exaggerated, I agree, sorry about that, but there were a LOT of fan posts here besides mine. Did you delete them as well?
 
Mephisto said:
Let me summarize the differences between NV30 and R300 as I understand them:

R300 vs. NV30
- pixel shader instruction count
160 vs. 1024
- Displacement Mapping
DX9 vs. DX9 + another one
- Framebuffer formats
40bit Integer vs. 64/128 bit FP
- manufacturing process
0.15u vs. 0.13u
- availability
August 2002 vs. who knows

What I'm not sure is:

- Can the R300 vertex shader perform data-dependent branches like the NV30?


Did I miss something?

My results :) :

1. 1024 (instruction count): I think it's a bit meaningless, BUT a step ahead, sure.
2. Another one (DM): I can't see what the point is here... :-?
3. 64/128-bit framebuffer: that would be great, but needless.
4. 130nm: yes, NV scores. :)
5. August: ATI scores. :)

IMHO (IF the whole story is true), in December NV will have the upper hand - but only by a small margin.
 
DaveB,

For one thing, FP32 is almost certain to have greater instruction latency for operations like sin, cos, exponentiation, logarithm, division, square root.

I would really like to see more details of a pixel shader unit... what are the instruction dispatch rules? Are all these instructions pipelined? Etc...

Regards,
Serge
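
Expanding on psurge's latency point with a sketch of my own (NVIDIA hasn't said how the NV30 actually evaluates these ops; polynomial approximation is just one plausible scheme): reaching FP32 accuracy simply takes more polynomial terms, and thus more multiply-adds, than reaching FP16 accuracy.

```python
import math

def sin_approx(x, n_terms):
    """Taylor approximation of sin(x) using n_terms odd-power terms."""
    return sum((-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(n_terms))

x = math.pi / 4
for n in range(1, 6):
    print(n, f"{abs(sin_approx(x, n) - math.sin(x)):.1e}")
# Roughly 3 terms reach FP16-level error (~5e-4), while FP32-level
# error (~6e-8) needs about 5 -- more terms, more latency per op.
```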
 
Re: The difference is everything...

alexsok said:
ATI could NEVER overcome NVIDIA!

What they have now is an advantage over NVIDIA, since they announced all these new technologies and incorporated them into the chip earlier than NVIDIA, but since today's release of the paper, they don't hold that advantage anymore (at least till the first NV30 boards ship; btw, the R300 boards haven't shipped yet either!).

P.S
Why were my posts in this thread removed?

Alexsok, take a look around the forum.
Out of the hundreds who visit here:
No one refers to a MadOnion page.
No one posts their system specs in their .sig.
Plus, there are no threads where people post a new driver and then give their new benchmark scores.

This is a different kind of place.

Take for instance your statement above
ATI could NEVER overcome NVIDIA!
You state an absolute - basically a faith/religious kind of statement. When such statements are applied to gfx corporations, people around here regard them as distasteful to repulsive/disgusting. And always pathetic. No matter the corporation involved: ATI/nVidia, AMD/Intel, PS2/GC/XBox, Ford/Chevy... there will always be people who like to argue these things. Hell, so do a lot of us as well. But not in all places. There are other forums suited to that. Here, that kind of bickering would tend to drive away all those who frankly have better things to spend their time on. We don't want that. We want this to be a place for discussion and education. A place where, basically, if you don't have a relevant question, or have something to contribute, YOU SHUT UP.

It would get awfully stuffy except for the fact that even though the people around here are typically mature and intelligent, they still have a sense of humour, are pretty easygoing, and occasionally get quite emotional. ;)

Look around, pay particular attention to the tone set by the people involved in this site, and if it isn't the place for you, move on to something better.

Entropy

[EDIT: Time to remove this. No way to delete your own posts?]
 
I think you guys are way off base when talking about the NV30's "useless" extra features. It is clear that NVidia did not intend these for games (DX9) but is positioning itself to take the renderfarm/workstation market by storm.

Evidence:

1) Constant talk of doing offline rendering in real time in marketing
2) CG Language
3) Acquisition of Exluna (best RenderMan renderer, even better than PRMan in some regards)
4) Siggraph papers


The listed R300 specs "go beyond DX9 with proprietary features". For example, the R300 can run pixel shaders up to 160 instructions long, but from the last DX9 SDK I looked at, the limit is actually 96.
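
For what it's worth, the two figures likely decompose like this (the 64+32 split is DX9's documented PS 2.0 base limit; the 64+64+32 split is the commonly cited R300 breakdown -- my summary, not something stated upthread):

```python
# DX9 PS 2.0 base limit vs. the advertised R300 per-pass limit.
dx9_ps20 = 64 + 32        # 64 arithmetic + 32 texture instructions
r300     = 64 + 64 + 32   # 64 vector + 64 scalar + 32 texture instructions

print(dx9_ps20, r300)     # 96 160
```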

If NVidia can deliver, say, a 10-30x speedup in rendering over CPU-based approaches, they will take a huge chunk of the roughly $2b renderfarm market. They will also own the workstation desktop at Pixar, Pacific Data Images et al.

My guess is, to keep NVidia's revenue growing, they need to squeeze more money out of new markets. The PC game market is saturated, and they won't get much additional revenue out of it.
 