NVIDIA CineFX Architecture (NV30)

multigl2 said:
you fail to understand that neither company actually produces the chips. The fab companies bear the cost of production, because they have contracts with ATi and NVIDIA to produce X amount of chips... so technically, since neither ATi nor NVIDIA produces its own chips, they don't get hit with fabrication size costs --- although if they wanted a contract for a chip with 200Mt's on .15, they might have to pay a premium ;)

Of course ATI and nVidia have to shoulder the fabrication size costs. They have to pay TSMC money, don't they?

After all, most of the cost, once full production has begun, is in the silicon wafers. This is most of the cost that TSMC passes on to its customers as well.
 
Agreed, and the contract cost for ATi's R300 may be higher than the NV30's... but say TSMC mistakenly fouls up a lot of R300s because it's .15 --- they still have to meet that X-amount requirement by contract, at the original cost, AFAIK.
 
Well, we'll see. If it's anything like PS 1.4, then these features will be used for additional performance (mostly).
IMHO I doubt it.

Note that it's well designed "for such a monster chip." The NV30 doesn't have many of the problems that the R300 had during design, since it's being built on a more "roomy" process, but nVidia has more engineering muscle, and a better track record. And the extra power is not an advantage. It's a necessity based on the monstrous transistor count combined with .15 microns and high clock speeds.
First, it is well designed because it gets 300MHz with 107 million transistors on a .15 micron process.
Second, how do you know what problems Nvidia has with the NV30?
Third, the extra power is a necessity AND an advantage from the performance viewpoint.

The real question is: What memory bus will be used with DDR2? It's very possible that neither card will be capable of 256-bit + DDR2, but will do 256-bit + DDR or 128-bit + DDR2, similar to some MX cards.
Maybe we will see some 256-bit DDR-II :)

Equal? I don't think so. The R300 has quite a few more than 96 million transistors. Also, while the .15 micron process will be cheaper in the short-run, the .13 micron process will be cheaper in the long-run, especially since ATI is going to have to design two chips just because they chose to do .15 micron first.
107/96 = 1.11, or 11%, and that is not much; I consider them more or less equal. Any doubts?

In other words, the .15 micron R300 may be cheaper to produce at the launch of the NV30, but the NV30 will be cheaper to produce a couple of months later (And market price will depend more on demand than production costs...which means the NV30 will likely be a bit more expensive due to nVidia's proven track record and higher acceptance).
It will not change in a couple of months :rolleyes:

Chalnoth, why don't you try to balance your posts? They are so Nvidia-oriented. Just in case you want to know, I have a GF3 Ti200.
 
IMHO I doubt it.

You don't think there will be any benefit to the new pixel/vertex shaders in the NV30? I'd say that's highly presumptuous.

First, it is well designed because it gets 300MHz with 107 million transistors on a .15 micron process.
Second, how do you know what problems Nvidia has with the NV30?
Third, the extra power is a necessity AND an advantage from the performance viewpoint.

But the .13 micron process should allow higher clock speeds without the necessity of external power. Meaning less cost and higher performance.

Maybe we will see some 256-bit DDR-II

It's a possibility, just not a very likely one. DDR2 will likely be used to cut costs in this first generation.

107/96 = 1.11, or 11%, and that is not much; I consider them more or less equal. Any doubts?

A few things. A fairly small percentage can have a significant effect on the number of chips per wafer, since wafers are round, and chips are square.

And 11% is pretty significant when you're talking about chips this size. Besides that, we don't know whether or not the NV30 is using more layers. If it is using more layers, then its die size could be even smaller.
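A quick back-of-the-envelope calculation illustrates the point, using the standard dies-per-wafer approximation (the wafer diameter and die areas below are purely hypothetical, not actual R300/NV30 figures):

```python
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Standard approximation for usable dies on a round wafer,
    with a correction term for square dies lost at the circular edge."""
    d = wafer_diameter_mm
    return int(math.pi * d**2 / (4 * die_area_mm2)
               - math.pi * d / math.sqrt(2 * die_area_mm2))

# Hypothetical 200mm wafer, two dies differing ~10% in area:
big = dies_per_wafer(200, 200)    # larger die
small = dies_per_wafer(200, 180)  # ~10% smaller die
print(big, small)  # 125 141
```

Because of the edge-loss term, a ~10% reduction in die area buys slightly more than 10% extra candidate dies per wafer (here about 12.8%), before yield differences are even considered.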

Chalnoth, why don't you try to balance your posts? They are so Nvidia-oriented. Just in case you want to know, I have a GF3 Ti200.

Wolverine: Magneto is right. A war is coming. Are you sure you're on the right side?
Storm: At least I've chosen a side.
 
You don't think there will be any benefit to the new pixel/vertex shaders in the NV30? I'd say that's highly presumptuous.
For games in the next 3 years, IMHO, no.

But the .13 micron process should allow higher clock speeds without the necessity of external power. Meaning less cost and higher performance.
OK, let me translate what I was trying to say.
Let's say the limit is 10W, the R300 is using 12W, and the NV30 uses the full 10W limit. Let's say the performance advantage is 30%, but 20% of it is gone because of the extra power. Now the smaller-process advantage is small.

We don't know how much extra power the R300 requires or how much the NV30 will require, but I hope you understand the scenario.
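The scenario above can be sketched in two lines of arithmetic (every number here is Pascal's illustrative assumption, not a measured value):

```python
# Hypothetical figures from the scenario above, not measurements.
process_advantage = 1.30  # NV30's assumed advantage from the .13 micron process
extra_power_gain = 1.20   # performance the R300 regains via its extra (external) power

# What survives of the process advantage once the R300's extra power is counted:
net_advantage = process_advantage / extra_power_gain
print(f"net: {(net_advantage - 1) * 100:.1f}%")  # net: 8.3%
```

Under these made-up numbers, a headline 30% lead shrinks to roughly 8%, which is the whole point of the scenario.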

It's a possibility, just not a very likely one. DDR2 will likely be used to cut costs in this first generation.
It is a possibility.

A few things. A fairly small percentage can have a significant effect on the number of chips per wafer, since wafers are round, and chips are square.

And 11% is pretty significant when you're talking about chips this size. Besides that, we don't know whether or not the NV30 is using more layers. If it is using more layers, then its die size could be even smaller.
IIRC it uses more layers, but the 20% reduction figure is TSMC's info. http://www.tsmc.com/english/technology/t0101.htm
Yields are more important now.

Wolverine: Magneto is right. A war is coming. Are you sure you're on the right side?
Storm: At least I've chosen a side.
Life is more than that.
 
pascal said:
For games in the next 3 years, IMHO, no.

Are you a game developer? If not, your opinion means nothing. I think there's a good possibility of some use within the next year, but I'm certainly waiting on some gamedev comments to know for sure. Right now, all we know for sure is that the NV30 holds the potential for improvements in games based on its superior programmability.

OK, let me translate what I was trying to say.
Let's say the limit is 10W, the R300 is using 12W, and the NV30 uses the full 10W limit. Let's say the performance advantage is 30%, but 20% of it is gone because of the extra power. Now the smaller-process advantage is small.

If the NV30 needs extra power to run at optimum performance, then it will get it.

IIRC it uses more layers, but the 20% reduction figure is TSMC's info. http://www.tsmc.com/english/technology/t0101.htm
Yields are more important now.

And, btw, don't forget that we're not talking about the same architecture. The NV30 was designed for the .13 micron process.

Life is more than that.

So? I like to argue. That's why I'm here. I fill up the rest of my life with other things I like to do :)
 
multigl2 said:
you fail to understand that neither company actually produces the chips. The fab companies bear the cost of production, because they have contracts with ATi and NVIDIA to produce X amount of chips... so technically, since neither ATi nor NVIDIA produces its own chips, they don't get hit with fabrication size costs --- although if they wanted a contract for a chip with 200Mt's on .15, they might have to pay a premium ;)

I'll have to disagree with that. There is (generally) no "contract for parts" that is signed. The IHV pays for the mask set (approximately $500-750K for .13u, slightly less for .15u), and the IHV pays for each wafer that is produced. The yield is almost entirely the IHV's risk. There is obviously engineering work done on both sides to solve yield problems, since both parties want the chips produced and want to continue the business venture, but unless the yield fallout can be directly attributed to the fab, the IHV is SOL. I have been involved in projects where the fab did guarantee yield because of factual misrepresentation in the design phase (i.e. to avoid a lawsuit). However, a lot of work went into "proving" whose fault it was.

The fab will merrily allow an IHV to produce a chip with 200M transistors and won't guarantee any sort of yield.
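The cost structure described above (IHV pays the mask-set NRE plus per-wafer fees, and eats the yield risk) can be sketched as follows; apart from the mask-set figure quoted above, every number is hypothetical:

```python
def cost_per_good_die(mask_set_cost: float, wafer_cost: float,
                      wafers: int, dies_per_wafer: int, die_yield: float) -> float:
    """Total IHV spend (mask-set NRE plus per-wafer payments)
    divided by the number of good dies actually produced."""
    total_cost = mask_set_cost + wafer_cost * wafers
    good_dies = dies_per_wafer * wafers * die_yield
    return total_cost / good_dies

# Hypothetical .13u run: $600K mask set, $3,000/wafer, 1,000 wafers,
# 130 candidate dies per wafer, 60% yield.
baseline = cost_per_good_die(600_000, 3_000, 1_000, 130, 0.60)
# If yield drops to 40% and nothing can be pinned on the fab,
# the IHV simply eats the difference:
bad_run = cost_per_good_die(600_000, 3_000, 1_000, 130, 0.40)
print(round(baseline, 2), round(bad_run, 2))  # 46.15 69.23
```

Note how a yield slip moves per-die cost by 50% while the fab's revenue is unchanged, which is exactly why the yield risk sitting with the IHV matters.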
 
Thought....

Joe DeFuria said:
As far as the manufacturing process, this means that it shouldn't be hard for nVidia to produce a part that is better than the R300 in every other way.

I disagree. It will be very hard. Not that it can't be done, but very hard, given the transistor budgets.

Based on public info, NV30 only has about 10 million more transistors than the R-300. (about 120 mil, vs. 110 mil).

In other words, these companies were working with very similar transistor budgets, so it WOULD be very surprising, IMO, if either part was "better in every way" than the other.

The difference would be more substantial if the 2D engine of the graphics chip were emulated in the pixel shader. Seeing as ATI is beginning to do just that with FULLSTREAM, the more flexible NV30 architecture may be able to completely supplant the 2D portion of the ASIC. This may buy Nvidia another 1-10 million transistors (I have no idea how many transistors a modern 2D core consumes), which could give them a higher-than-expected '3D transistor' differential. Surely the greatest barrier to doing this is developing new 2D drivers, but that can't be too bad; certainly easier than a 3D driver or developing a shader language and compiler.

If NV30 doesn't do this, I absolutely expect someone to do it soon. If you can squeeze Final Fantasy class rendering through the shader pipes, I ask why can't they do the same for my Win2k GUI :)

-Toasty
 
Are you a game developer? If not, your opinion means nothing. I think there's a good possibility of some use within the next year, but I'm certainly waiting on some gamedev comments to know for sure. Right now, all we know for sure is that the NV30 holds the potential for improvements in games based on its superior programmability.
Are you a game developer or a fan? Your opinion means nothing to me.

If the NV30 needs extra power to run at optimum performance, then it will get it.
How do you know? Another fan word?

And, btw, don't forget that we're not talking about the same achitecture. The NV30 was designed for the .13 micron process.
Maybe the R300 has a more compact design than NV30, maybe not.
 
Chalnoth said:
You don't think there will be any benefit to the new pixel/vertex shaders in the NV30? I'd say that's highly presumptuous.

On the flip side, it's equally presumptuous to assume that there will be any benefit...

Just revisit the situation with NV HOS implementation (spline-based)... sure it's great tech but then they had to disable it in their drivers, making it useless.

Nevertheless, I'm looking forward to both releases (R300 and NV30).
 
tamattack said:
On the flip side, it's equally presumptuous to assume that there will be any benefit...

Well, most features implemented in hardware have at least some benefit, usually within a few months of the release of the hardware.

Just revisit the situation with NV HOS implementation (spline-based)... sure it's great tech but then they had to disable it in their drivers, making it useless.

To be fair, it was only disabled in Direct3D.

And yes, I certainly agree that there is a possibility we won't see these features used in games anytime soon. But, given the past, few features follow this pattern.
 
Why argue without choosing sides? You'll just end up arguing with yourself.

As long as an individual doesn't argue with the sole purpose of choosing sides, it's fine by me.

There's nothing wrong with "arguing" with yourself (metaphorically); holding up the mirror trims egos and builds character.

Excuse the interruption; carry on...
 
I'd rather not argue with myself in public, is what I meant. Could get boring...mostly for others. But yeah, I think I'm done with this topic.
 
Chalnoth said:
multigl2 said:
why chose sides?

Why argue without choosing sides? You'll just end up arguing with yourself.
No, choosing sides blinds you to the truth.
To assume one side will win, without knowing everything, means you make assumptions.
This blinds you to certain possibilities.
This is your most ludicrous post yet, Chalnoth, and is why I don't trust any information in any of your posts.
You also say you like arguing. Well, you aren't arguing. You are pedantically supporting a particular IHV - there can be no argument without an open mind.
 
Althornin said:
Chalnoth said:
multigl2 said:
why chose sides?

Why argue without choosing sides? You'll just end up arguing with yourself.
No, choosing sides blinds you to the truth.
To assume one side will win, without knowing everything, means you make assumptions.
This blinds you to certain possibilities.
This is your most ludicrous post yet, Chalnoth, and is why I don't trust any information in any of your posts.
You also say you like arguing. Well, you aren't arguing. You are pedantically supporting a particular IHV - there can be no argument without an open mind.

agreed agreed agreed
 
Just a few questions to those "in the know":

Is it feasible for the NV30 to be an 8-pixel-pipe implementation? I know most rumored specs point to that, but with all the added complexity of its vertex & pixel shaders compared to the R300 (lots more registers than the R300 & additional ops), an 8x2 design is probably not possible with only ~9% more transistors (assuming 110M for the R300 and 120M for the NV30). Could it be that the NV30 is going to be a more "nVidia traditional" 4x2 architecture? I remember reading somewhere that the NV30's problem (missing a better word here; I do not want to imply that it really has one) in regards to speed was not so much the clock rate it will attain, but rather how much it can do per clock cycle (I don't remember where I read it). Could that be the reason why they implemented the fp16 color format, to be competitive with ATI ("fillrate-wise", then processing 8 pixels/clock)?

How does the nv18 fit in, with the nv31 (I assume it to be the value part) taking the mainstream and the nv28 being the dx8 (value) part? If it really is dx8-capable, why bother with the nv28 (just to fill OEMs' 8xAGP checkbox???).

Won't the nv30's higher "close to real-time" (again, for lack of a better expression) rendering capabilities be quite important for end-users in regards to Longhorn (being able to run larger shaders on Windows)?
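The 4x2-vs-8-pipe trade-off in the first question comes down to simple fillrate arithmetic; the configurations and clock speeds below are hypothetical examples, not confirmed specs for either chip:

```python
def fillrates(pipes: int, tmus_per_pipe: int, clock_mhz: int):
    """Peak pixel and texel fillrates, in Mpixels/s and Mtexels/s."""
    pixel = pipes * clock_mhz
    texel = pipes * tmus_per_pipe * clock_mhz
    return pixel, texel

# Hypothetical comparison: an 8x1 design at 325MHz vs a 4x2 at 500MHz.
print(fillrates(8, 1, 325))  # (2600, 2600)
print(fillrates(4, 2, 500))  # (2000, 4000)
```

Under these made-up numbers, the 4x2 part wins on multitextured (texel) fillrate but loses on single-textured pixels per clock, which is exactly where a trick like packing two fp16 results per pipe to claim "8 pixels per clock" would help close the gap.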
 
Hi alexsok,
alexsok said:
NV30 will have 16 pipelines - that's double the amount of pipelines on the R300!
Careful--the paper just mentions "16 texture units," not pipelines. It could just as well be speaking of virtual TMUs, not physical ones. Or an 8x2 design. Or 4x4. Or . . .

Here's waiting for the NV3x paper launch . . .

ta,
-Sascha.rb
 