NV30 Update

Doomtrooper said:
IF you consider speed only as the determining factor for superiority then you have an argument...but looking at history ATI makes more advanced cards...

Radeon 64 Meg Vivo
AIW
Supports all bump mapping modes
DVD features galore

Geforce 2 GTS has only one bump mapping mode, Dot3


Radeon 8500
PS 1.4
Truform (really a building block for displacement mapping)
Higher Internal Precision
Supports all bump mapping modes again
DVD features galore

Geforce 3 and 4 support PS 1.1-1.3 and both bump mapping modes

Radeon 9000

1st value priced Dx 8.1 card

M9000

1st low power Dx 8.1 mobile chip

R300/9700
The most advanced chip/card ever made..

If you look at the trend here, ATI has been releasing more advanced hardware than Nvidia, not to mention the mobile market where low power is the key (i.e. M9000).

This is not a very objective observation. Each one of those ATI cards that you are comparing to an Nvidia card came out afterwards. Then you conveniently overlook the flipside of the coin. Here's the other way of writing history:

Geforce1 v. Rage Fury Pro

GF1 has T&L (later FSAA via driver hack)

Geforce2 GTS v. Rage Fury Maxx

Geforce2 has T&L, dot3 bump mapping, FSAA

Geforce3 v. Radeon 64 meg Vivo

GF3 has pixel and vertex shaders

Nvidia: the first 32 bit card (TNT), the first T&L, the first pixel and vertex shaders, the first FSAA, the first AF.

Gee look, Nvidia has always produced more "advanced" cards than ATI. :rolleyes:

The only point in time where your observation is defensible is with the GF4 vs. the 8500, since the GF4 was really just a speed bump as compared to the GF3. There you have a bona fide situation where a card that was released later (the GF4) has fewer features than a card released earlier (the 8500). Compensating for that is a fairly sizeable performance delta however.

Let's not get TOO fanATIcal here...
 
Joe DeFuria said:
OK. I'll bite.

I'm curious, in what way is the GeForce3 "technically more advanced" than the 8500? (What selective features would you choose to show the GeForce3 as more advanced?)

Right back at you :)

- Multisample antialiasing
- Anisotropic filtering implementation without issues at certain polygon angles, and one that works with trilinear filtering
- Per-pixel, range-based mipmap level selection

Granted, R8500 is technically a more advanced chip than the GF3, but the latter has its strong sides as well.
 
Just some minor nitpicking corrections,
woolfe99 said:
Geforce1 v. Rage Fury Pro

GF1 has T&L (later FSAA via driver hack)

Geforce2 GTS v. Rage Fury Maxx

Geforce2 has T&L, dot3 bump mapping, FSAA
You probably already knew this, but for the record, Geforce256 had identical features to the GF2/GTS. Cube mapping, bump mapping and per-pixel shading were promoted heavily by nVidia during the GeForce2 launch to make it appear to have more features.

Nvidia: the first 32 bit card (TNT)
I'm certain others got to the market first. AFAIK at least the Matrox G200 and PowerVR PCX2 supported 32-bit rendering. Though one could claim TNT was the first to offer it at somewhat playable frame rates.
 
jpprod said:
You probably already knew this, but for the record, Geforce256 had identical features to the GF2/GTS. Cube mapping, bump mapping and per-pixel shading were promoted heavily by nVidia during the GeForce2 launch to make it appear to have more features.

Actually, it's a little bit more than that...more features were actually exposed in the new drivers released at the time of the GF2 launch. Specifically, register combiners. Of course, they worked just fine on the original GF hardware, too.
 
woolfe99 said:
The only point in time where your observation is defensible is with the GF4 vs. the 8500, since the GF4 was really just a speed bump as compared to the GF3. There you have a bona fide situation where a card that was released later (the GF4) has fewer features than a card released earlier (the 8500). Compensating for that is a fairly sizeable performance delta however.

Let's not get TOO fanATIcal here...

Nothing fanatical about it, and your analogy is wrong here big time...

Radeon 1 vs. Geforce 2, Geforce 2 GTS, Geforce2 Ultra, Geforce 3 :rolleyes:

Radeon 8500 vs. Geforce 3 Ti and Geforce 4 Ti


The only card that was more advanced than a Radeon 1 was the Geforce 3, while the 8500 is more advanced than its other two competitors, the Titanium series.

Let's get the facts straight.
 
Another hurdle I see Nvidia will have to face after the .13 growing pains is DDRII. That too is new technology, which also means a higher price paid initially. So Nvidia is faced with a new process with lower than expected yields initially, and the unproven marketability of a new type of RAM which isn't available, or should I say used, yet at a higher price. Now, if the NV30 has just taped out and it is indeed a radically new design from their previous designs, then drivers do become a concern, especially if real hardware isn't available to really test and debug the new code. For a while now I have predicted that the NV30 was a 2003 product which should be readily available in February of next year. Now I am thinking that date itself will slip into March. Just some thoughts, that's all.
 
noko said:
So Nvidia is faced with a new process with lower than expected yields initially, and the unproven marketability of a new type of RAM which isn't available, or should I say used, yet at a higher price.

"lower than expected yields initially" is not yet known as full production has not yet begun. I see the main problem nVidia may be facing right now is that due to the current low volume of TSMC's .13 micron process, they won't have any significant economies of scale to leverage yet.
 
woolfe99 said:
This is not a very objective observation. Each one of those ATI cards that you are comparing to an Nvidia card came out afterwards. Then you conveniently overlook the flipside of the coin. Here's the other way of writing history:

Geforce1 v. Rage Fury Pro

GF1 has T&L (later FSAA via driver hack)

Geforce2 GTS v. Rage Fury Maxx

Geforce2 has T&L, dot3 bump mapping, FSAA

Geforce3 v. Radeon 64 meg Vivo

GF3 has pixel and vertex shaders

Nvidia: the first 32 bit card (TNT), the first T&L, the first pixel and vertex shaders, the first FSAA, the first AF.

Gee look, Nvidia has always produced more "advanced" cards than ATI. :rolleyes:

The only point in time where your observation is defensible is with the GF4 vs. the 8500, since the GF4 was really just a speed bump as compared to the GF3. There you have a bona fide situation where a card that was released later (the GF4) has fewer features than a card released earlier (the 8500). Compensating for that is a fairly sizeable performance delta however.

Let's not get TOO fanATIcal here...

Ok so by your way of doing things we can compare R9700 vs GF4 Ti4600 and say it's more advanced (even after NV30 is released)? He's comparing the same generations of cards, which does somewhat make sense.

The R9700 release more or less invalidates your point (although I realize you were just using the listing as an example). The point is the Radeon was ATi's answer to the Geforce 2. The Rage Fury was made to combat the TNT. Yes, they were late, but that doesn't mean they were not meant to compete with them. And that all changed (fortunately) with the R9700... more competition is good. :)

But now that the R9700 is out and ATi's had its day in the sun, I really wish the NV30 was released. NV30 just taping out now is just really bad news for us consumers. :(
 
Let's get the facts straight.
The fact is that, until the R300, ATi has always released their competing-generation cards much later than nV. They turned the tables with the R300--it's a big triumph for ATi, especially if nV is cutting the NV30 down just to get it out the door.

So is this over-design/under-engineer thing a 3dfx curse, or just eerie coincidence? ;)

PS - China's #1 OEM PC (nV HK PDF) looks sweet. :)
 
especially if nV is cutting the NV30 down just to get it out the door.

Actually, I don't see them cutting anything significant from NV30, as the only thing that seemed to be cut was the Per Primitive Processor, and it's not exposed in DX9 anyway, so it's not very useful for now...
 
There was never any confirmation or factual information that the primitive processor existed at all. It was idle speculation because a paper on real-time raytracing from Stanford happened to have a co-author who is now an NVidia employee. This paper speculated that future or near term hardware would benefit greatly from a primitive processor for stream-mapped ray-tracing. This was extrapolated to mean that the NV30 must have this. However, this paper also speculates that a deferred rendering tiler would boost efficiency of the algorithm as well. Does this also imply that the NV30 is a tiler? Perhaps the primitive processor and/or tiler speculation is real, but it does not logically follow that this has anything to do with the NV30. It could mean it is in the *design stage* ready to be delivered in 12-18 months on the NV30.
 
DemoCoder said:
There was never any confirmation or factual information that the primitive processor existed at all. It was idle speculation because a paper on real-time raytracing from Stanford happened to have a co-author who is now an NVidia employee. This paper speculated that future or near term hardware would benefit greatly from a primitive processor for stream-mapped ray-tracing. This was extrapolated to mean that the NV30 must have this. However, this paper also speculates that a deferred rendering tiler would boost efficiency of the algorithm as well. Does this also imply that the NV30 is a tiler? Perhaps the primitive processor and/or tiler speculation is real, but it does not logically follow that this has anything to do with the NV30. It could mean it is in the *design stage* ready to be delivered in 12-18 months on the NV30.

Actually, the per primitive processor myth came from nVidia's "What comes after 4" paper and not the Stanford paper (I don't remember it being mentioned there at all); in it, one of their employees (not working there anymore) outlined the main advantages the next generation of GPUs would have over the previous one, and the per primitive processor was marked as one of these advantages.
 
But now that the R9700 is out and ATi's had its day in the sun, I really wish the NV30 was released. NV30 just taping out now is just really bad news for us consumers.

To put that in context - 9700 taped out in February, 7 months ago!
 
alexsok said:
Actually, the per primitive processor myth came from nVidia's "What comes after 4" paper and not the Stanford paper (I don't remember it being mentioned there at all); in it, one of their employees (not working there anymore) outlined the main advantages the next generation of GPUs would have over the previous one, and the per primitive processor was marked as one of these advantages.

It was definitely mentioned in the Stanford paper. But yes, Richard Huddy's presentation (what comes after 4) also mentions it; in addition, that presentation mentions tiling, embedded DRAM, subdivision surfaces, and a few other radical things. But I view this presentation as more of a "wish list" for the future rather than an enumeration of NV30 features.


NVidia probably considered the primitive processor for the NV30 among other features, and it simply didn't fit into their time or transistor budgets, so it was pushed to the future. I highly doubt, however, that they had fully designed, say, a 130M-transistor part with these extra features, and then went back and ripped them out.

I bet they were shelved quite early in the functional requirements analysis part of the development process.

At the company I work for now, the list of wanted and considered features far surpasses the list of features that actually make it into the final product. We start out with everyone's list of requirements, and then as we get a better handle on the architecture and our timelines for release, we scale back over several revisions to only the "MUST HAVE" features. This is usually done well before any code is written.


I think the idea that NVidia added or ripped out anything from the design late in the process is basically ridiculous. More like, when the original NV30 functional requirements were written, several engineering groups within NVidia added features they would like, but in the end, many of them went on the chopping block before a final spec was hammered out.
 
More like, when the original NV30 functional requirements were written, several engineering groups within NVidia added features they would like, but in the end, many of them went on the chopping block before a final spec was hammered out.

Well, the 'party line' on the number of transistors it has seems to have changed from 120 to 100 judging by the recent investor notes, and they would likely have gone some way down the design route to have those numbers.

I've also been told that NV30 has taped out several times already, but a number of the initial tapeouts came back non-responsive; in these cases NV didn't class them as a 'tapeout' (which is why the CEO said it had at one point but not the next).
 
DaveBaumann said:
But now that the R9700 is out and ATi's had its day in the sun, I really wish the NV30 was released. NV30 just taping out now is just really bad news for us consumers.

To put that in context - 9700 taped out in February, 7 months ago!

Yes, ATI and nVidia seem to tackle the design process a bit differently!

Just to repeat their tapeout-to-volume times:

RIVA TNT2 A02 98 days
GeForce 256 A03 104 days
GeForce2/GTS A03 102 days
GeForce/MX A01 110 days
GeForce3 A03 118 days
XGPU A03 120 days
GeForce4 A02 101 days

New generations like the GeForce 256, GeForce3 and XGPU all needed an A03 rev, and took from 104 up to 120 days. I would say 4 months, which suggests we could expect an A02-rev preview in November and an extremely limited number of shipping final-rev chips in December (good luck buying one!).

Some more food for thought: although NV30 is more complex than prior generations, nVidia seems to suggest that it is not an order of magnitude more complex...

Take a look on how they compare the different designs themselves:

Relative Algorithmic (i.e. Design) Complexity (NV1 = 1x):

NV2 1.5x
RIVA 128 4x

RIVA TNT 7x
RIVA TNT2 10x

GeForce256 20x
GeForce2 22x

GeForce3 30x
XGPU 35x
GeForce4 40x

NextGen 50x

Not such a huge step from GF4 to the NV30 apparently...

But as Chris Malachowsky has stated:

We have gotten good enough at the majority of things that the problems that we have left once we hit silicon are really tough!!
 
DaveBaumann said:
More like, when the original NV30 functional requirements were written, several engineering groups within NVidia added features they would like, but in the end, many of them went on the chopping block before a final spec was hammered out.

I've also been told that NV30 has taped out several times already, but a number of the initial tapeouts came back non-responsive; in these cases NV didn't class them as a 'tapeout' (which is why the CEO said it had at one point but not the next).

So basically when the R300 taped out 7 months ago, it may have come back non-responsive, but ATi still considers that their tape out date. Meanwhile Nvidia has "taped out" the chip, but they just don't consider non-responsive returns to be "tape outs". Basically it's 100 days, more or less, from the day they finally get something back that works.

That explains why ATi took 6 months from tape out to production, while Nvidia claims only 3 months...it's all just a matter of using different terms. If you consider that the first failed tape out for Nv30 happened about 2 months ago (right?), then that has parity with the "6 months" it took ATi to ramp from tape out.

Anyway, ignoring all that and assuming their 100-day figure is really legit and not just BS: if NV30 taped out this week, they could have it ready by mid-December. The bad thing is, if it takes them even 110 days to get the thing into production then they'll miss Christmas (and their "fall release"). If it takes them 120 days like the GF3 and XGPU (which wouldn't surprise me, since it's a whole new architecture) then that puts them into January.

I'm assuming those numbers are time from tape-out to product on shelves? 'Cause if not, then them making Christmas is looking pretty sketchy. In fact in that case I think there's a strong chance they won't make it.
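
To put rough dates on that, here's a minimal sketch of the arithmetic (the early-September tape-out date is an assumption for illustration only, not a confirmed date):

from datetime import date, timedelta

# Assumed tape-out date ("this week" as of the post) - illustrative only, not confirmed.
tapeout = date(2002, 9, 6)

# Tapeout-to-volume figures quoted earlier in the thread (in days).
scenarios = [("GF4-like", 100), ("GF/MX-like", 110), ("GF3/XGPU-like", 120)]

for label, days in scenarios:
    # e.g. 100 days lands mid-December; 120 days lands in early January.
    print(f"{label}: volume around {tapeout + timedelta(days=days)}")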
 
That explains why ATi took 6 months from tape out to production, while Nvidia claims only 3 months...it's all just a matter of using different terms. If you consider that the first failed tape out for Nv30 happened about 2 months ago (right?), then that has parity with the "6 months" it took ATi to ramp from tape out.

I don't think it took ATI 6 months from tapeout to production either - it was 6 months from tapeout to retail. AFAIK there were no silicon or board changes from those that were previewed to those that are in the channel now. IMO it's likely that their volume production actually started at around the time they previewed it. We know a lot of the volume has been going to the OEMs before it went to retail as well. So it's probably more like 4-5 months going from tapeout to volume, IMO (if you were to use NV's definitions of 'volume').
 
Hehe, ok. All these different terms make it hard to compare. But the crux of the matter is it seems to take them both about the same amount of time, which is what probably should be expected anyway. I was just trying to clarify that NV30 isn't 6 months off, having taped out now.
 
Hehe, ok. All these different terms make it hard to compare.

Yes, really they are just marketing numbers. However, NV don’t make boards, so all they are really interested in is volume chip production, because that's the point at which they can start selling to the vendors; retail availability comes sometime after that.

However, my point in noting when ATI taped out in relation to NV30’s currently rumoured tapeout was to illustrate where ATI are in terms of development. When R300 was used at the E3 DoomIII demo, Carmack made the point that ATI was a cycle ahead of NVIDIA because of NV designing the Xbox parts – at that point it didn’t really seem real, because NV30 seemed to be ‘just round the corner’ from R300; but now, with hindsight, it shows JC’s words to be accurate.

If ATI are moving to a 6-9 month cycle as they say, then they are probably getting geared up for a tapeout of their next part (R350?) at any time now as well, so there is a real possibility that R300 isn’t NV30’s real competition, but what’s after it, which could be only a few months behind.
 