General Company Performance ATi/Nvidia

Dreamweaver

Newcomer
I'm not talking here from a financial point of view, more a development/foresight/initiative point of view. I was wondering what you guys thought of this?

I think that since September 2002, Nvidia has been playing catch-up with ATi. The 9700 Pro caught them out a bit, dare I say in a 3dfx resting-on-their-laurels kind of way! Let's face it, that NV30 launch with that stupid hoover strapped onto the side of the card was a desperate release for any company, never mind the market leader.

I'm often amazed at the amount of knowledge within this site, so please don't pan me too much here, but... I've heard you guys talking about Nvidia moving towards a more parallel architecture, following ATi's lead into what I think I've heard described as a brute-force approach. Do you think that while Nvidia has been following down this path, so desperate to catch up with ATi, ATi has been able to concentrate on different aspects of performance, like compression or maybe some other way of accelerating 3D in general? Do you feel that ATi may have something MORE up its sleeve in the near future? And please, I'm not talking about R420; just in general, do you think ATi may have been able to concentrate more on other aspects of 3D architecture than Nvidia? Or do you think I am selling Nvidia short here?

If you feel I haven't got a clue what I am on about, you're quite right :) It's just an aspect of development for these two giants that I am curious about, and I've been giving it some thought.

J.
 
Given that this is a fairly general topic, I'll be general too :)

ATi have been improving for quite a while now, with the 8500 a key part for them in terms of equalling the kind of things NVIDIA were doing. The drivers were a real letdown at first, sure, but the hardware was certainly pretty solid. However, as you point out, the 9700 was really the big one. It was clearly a superbly designed part, but that's not to say NVIDIA were really "resting on their laurels" when they released the underachieving 5800. They made some wrong decisions 2-3 years previous to its release, and it caught up with them. Plenty of other threads around here have discussed what happened, so I won't delve into it here.

Going forward, I certainly don't see ATi getting further and further ahead (or NV for that matter). Indeed, I expect things to be (from a distance) fairly parallel between the two, with possibly a third major player entering the fray at some point in the next few years. ATi will certainly do very well for the next few years as they ride on the success of their R3xx parts, and NVIDIA are definitely going to have to get used to being slightly behind for now. This generation is almost certainly ATi's again, for example.

The important battles will come with the x5xy and x6xy parts from both firms. ATi have got the XBox2 and Nintendo contracts coming up, and so they'll obviously have to do some juggling and management there (which they've already done to a large extent). NVIDIA will have their first post-R300 design in the NV50, which may or may not be the "big" architectural jump you mention in your post. They've certainly put a lot of cash into that architecture, but I suspect their hand may be forced slightly by what ATi do with R500 (the release schedule being the most important). They don't have all that much maneuvering room, but there's some there.

Both firms will have to overcome the move to 90nm low-k at some point in the reasonably near future, which will probably be the first opportunity we have to see whether NVIDIA will continue to be aggressive about their process targeting, or whether they've taken a leaf out of ATi's book and been more cautious. Recent decisions certainly seem to err towards the latter, but time will tell. If anything, it looks like ATi are being more aggressive right now, which speaks volumes about the confidence they have at the moment.

Crikey, long post. Anyway, exciting times ahead for both firms :)
 
PaulS said:
Both firms will have to overcome the move to 90nm low-k at some point in the reasonably near future....

This, above all IMO, will be very interesting to watch. I mean, it seems pretty much everyone is having a hard time trying to get decent results on 90nm processes. Beyond the architectures (NV50 and R500?) themselves, it'll be interesting to see who fares better (and what fabs they use) when 90nm chips are due.
 
I think several things happened with NV30 at NVidia. Primarily, they got too cocky because of their leadership position and attempted to make something ahead of its time.

As for post-R300, the NV40 is post-R300 in technology because of its SM 3.0 support - it seems to me that a lot of people don't give that much credit. But I think in the long run it will prove differently.

As for NV50 / R500, it's anybody's guess.
 
Joe DeFuria said:
I mean, it seems pretty much everyone is having a hard time trying to get decent results on 90nm processes.

I wasn't aware Intel were having any problems with the process - any links? Although I guess maybe you weren't including them, since they use their own fabs. On a similar note, it will be interesting to see what results AMD get, since they've just started production on their 90nm parts for Q3.
 
Joe DeFuria said:
The lateness and power consumption of Prescott?

I think your comment is too broad, as there are companies which have been having success with 90nm for months - IBM, AMD and Sony/Toshiba that I know of. Again, there is a difference between a process technology and its vendor- and IC-specific implementation, which should be noted in this questioning.

I do agree, and would be more concerned about companies like TSMC and UMC's ability to ramp though, which is where a lot of the rhetoric is emerging from. At least in comparison to IBM/AMD.
 
Vince said:
I think your comment is too broad as there are companies which have been having success with 90nm for months...

With large chips?

I was under the impression that (generally speaking) 90nm was proving to be more of a challenge than past migrations. I really don't have much concrete to offer as "evidence", other than just the impression I've gotten from various blurbs and following the industry.
 
Posted by: Chris Tom on Monday, April 26, 2004 - 11:24 AM
EE Design has a look at 90nm designs hitting problems with bad signals. Now AMD and Intel are not mentioned, but IBM is in the article.
Raminderpal Singh, senior engineering manager at IBM Corp., agreed that signal integrity has become a significant and surprising problem at 90 nm. Part of his job is helping 90-nm customer designs get through IBM's fabs.

"As people push the density, and push the frequency, and voltage goes down, you just have a lot more happening and a lot less to live with," he said. "A whole series of effects becomes very real."

Singh said that the primary problem he sees is not so much crosstalk, but power distribution noise. "I'm not saying [crosstalk] is not happening; I'm just not seeing it as dominant as power distribution issues," he said. "Maybe it's because people can design away from crosstalk. What I hear about is the issues they can't fix."


Well, it doesn't say straight out that Intel or AMD is having problems with 90nm, but IBM surely is.
 
Rican, give me a break. You just posted a comment that goes against what I stated. IBM itself isn't having a problem, it's been producing Power4 derivatives, 970FX's, on the 90nm node for a few months AFAIK.

You posted a comment about IBM's fab business, so of course there are going to be more problems when some fabless vendor comes in and wants to create a 90nm part using the same techniques and logic constructs as they've used since 180nm or 130nm.

Again, process technology versus implementation. Sony has been in mass production of a 90nm part since late 2003; Toshiba's TC300 family of SoCs has been as well (both are eDRAM parts to boot). IBM is producing 970FXs, which Apple is selling in their G5 Xserves, and AMD is ramping their Hammer family. It's being done, but you can't expect to just get something for nothing anymore.
 
Oh, there are serious roadblocks at 90nm.

The biggest is the dielectric issue. You can't really expect to get fast 90nm parts without using a low-k dielectric, and everyone is having low-k problems (see this week's EE Times for example). So you have a choice: either build at 90nm with low-k and get speed and density, or build at 90nm with FSG and just get density. Intel, IBM and TSMC all have problems with this.
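
For anyone wondering why the dielectric matters so much, here's a rough back-of-the-envelope picture (my own sketch, numbers approximate):

t_{wire} \propto R_{wire} \cdot C_{wire}, \qquad C_{wire} \propto k

Interconnect delay is dominated by the RC product of the wires, and wire capacitance scales with the dielectric constant k of the insulator between them. FSG sits around k ~ 3.6 while low-k films are roughly 2.7-3.0, so low-k cuts wire capacitance and with it both delay and switching power. Stick with FSG at 90nm and you get the density of the shrink but not much of the speed.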

There is some debate about Sony's 90nm claim, and test chips are not the same as production for use in graphics chips. A lot of early chips in any process are SRAMs and FPGA/PLD-type products, all of which are easier to build than a graphics chip. Even a processor is easier to build than a graphics chip.

Quite frankly, I would not be surprised if we don't see mainstream use of 90nm in graphics until late 2005 or early 2006. Sure, nV or ATI might do a high-end part for early 2005, but it would not be a part they had to produce millions of per month. The risk would be too high and the economics too sucky.
 
Vince said:
IBM itself isn't having a problem, it's been producing Power4 derivatives, 970FX's, on the 90nm node for a few months AFAIK.
....

IBM's producing 970FX's which Apple is selling in their G5 Xserve's.

Yet they don't hit any higher clockspeeds. Only a power consumption advantage.

Not that Apple isn't looking forward to a 3GHz G5.
 
I have actually spoken to process engineers about this, and their comments have been the same since the introduction of the 130 nm node: the jump to 90 nm will be a significantly more expensive one in both time and money.

Intel doesn't release yield numbers, but with 4 fabs running at 90 nm and 300 mm wafers, Intel is having a hard time providing the market with Prescott based processors. And in two of those fabs, all they do is run Prescott lines (to the best of my knowledge).

As we move forward, more information is coming out about the problems that we are seeing. The first major problem was leakage, and that continues to be a problem. Intel has tried to help this by producing chips made on strained silicon (some areas of the silicon are compressed, while other parts of the silicon are "expanded", depending on what kind of properties the transistors require). This is certainly not an easy method, but it does work. Using low-k would also help, but so does SOI. Now we are seeing hints at signal integrity and power distribution issues. Logically, there is probably much more going on here than they are listing, too.
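
To put the leakage problem in rough terms (a simplified sketch on my part, not anything from Intel):

P \approx \alpha C V_{dd}^{2} f + V_{dd} I_{leak}, \qquad I_{leak} \propto e^{-V_{th}/(n V_{T})}

The first term is switching power and the second is leakage. To keep transistors fast at lower supply voltages you lower the threshold voltage, and subthreshold leakage grows roughly exponentially as you do, which is why static power has blown up at 90nm. Strained silicon attacks it from the other side: better carrier mobility gives more drive current without having to push the threshold down any further.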

IBM may be making Power 4 derivatives, but from what Apple is saying, they are not making nearly enough of them. Plus, the Power4 is significantly smaller than the Intel Prescott (not to mention the AMD Athlon 64).

Also, saying that making a GPU is more complex than a CPU is a stretch. Sure, a GPU now has many more transistors than a CPU, but it is certainly not as complex! If that were the case, then why aren't ATI and NVIDIA putting out CPUs that outpace what AMD and Intel put out? GPUs have the ability to be massively parallel (now that we are seeing up to 16 functional units on GPUs), while CPUs are more of a single-pipe design. It is much easier to program a 3D app using that parallel ability than to program multi-threaded applications for a CPU.

Also, both AMD and Intel are using custom cells, which in themselves take up a LOT of design time compared to using standard cells and just doing place and route (of course I make this sound simplified, as none of these processes are "easy"). How fast is the fastest GPU around so far? 500 MHz? If you think that designing a high-speed memory controller that runs at 2.0 GHz and above and communicates with the main memory at 200 MHz is easy, ask AMD how long it took them to develop that portion of their chip. From my understanding, that was the main reason the Athlon 64 had such a hard time getting up to speed.

Definitely a case of apples and oranges here. But you cannot underestimate the complexity of CPU design. For one thing, GPUs don't feature a branch prediction unit, and those take up some transistor space. Of course, with SM 3.0 being out, perhaps we will start to see them...?
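
To put the parallelism point in concrete terms, here's a toy C sketch (purely illustrative on my part, not anyone's actual pipeline). Per-pixel work has no dependence between pixels, so you can throw 4, 8 or 16 units at it; a typical CPU-style calculation carries a result from one step to the next and can't be spread out the same way:

Code:
#include <stddef.h>

/* GPU-style work: every pixel is independent, so extra pipelines can
 * each chew on their own slice with no coordination at all. */
void shade_pixels(const float *in, float *out, size_t n)
{
    for (size_t i = 0; i < n; i++)          /* trivially parallel loop */
        out[i] = in[i] * 0.75f + 0.25f;     /* stand-in per-pixel math */
}

/* CPU-style work: each step needs the previous result, so more
 * functional units don't help; you're stuck going through one pipe. */
float accumulate(const float *in, size_t n)
{
    float acc = 0.0f;
    for (size_t i = 0; i < n; i++)          /* loop-carried dependence */
        acc = acc * 0.5f + in[i];
    return acc;
}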

Back on subject though, I think both ATI and NVIDIA are sitting pretty close to each other in terms of overall performance, with NVIDIA getting the nod here for having a slightly stronger business sense (it takes money to put out more chips). The wins by ATI for the Xbox and Nintendo are huge, but again that takes engineering ability away from their core business. Add to that their foray into the motherboard chipset market and their handheld platforms, and they have had to hire quite a few more engineers. NVIDIA went through this crunch shortly after the introduction of the NV2x, and we saw the results with the NV30 (NVIDIA expanded into consoles, motherboards and handhelds during this time). I don't think ATI is dumb, and they have planned around this (especially having seen how NVIDIA handled it), so the impact should be less severe than what NVIDIA suffered, but it will still impact them as core personnel who worked on previous projects are shuffled to new areas where they may not have as much experience. I think 2005 will show us the results of this, and perhaps we will see whether this dispersion of ability to other projects affects the R500.

Time will tell.
 
Looking back at the Nvidia launches of the past, you can see their new cards rarely tear the top-end cards of their previous generation apart. It is usually the second generation of the GPU that does the trick.

What was lacking since the days of the TNT2 was a real competitor. ATI's solutions were buggy, slow, and really nothing to look at. 3dfx died a lonely death. And companies like Kyro just couldn't quite get it done.

With the 9700, ATI hit a home run and provided a real competitor to Nvidia. So when the 5800 showed up it didn't look that good. Were they junk cards? I don't think so in the least. But they didn't outperform their competition by the standard 30-50% like they used to. On top of that, Nvidia missed the boat on AA and probably did themselves in by having a more costly but higher quality AF implementation.

PS2.0 performance ended up being the big thing, but after finding out Far Cry is only using PS2.0 shaders on the lighting effects, I think it is safe to say neither the NV3.xx nor the R3.xx will be real PS2.0 performers. Hell, maybe even the NV4.xx and/or R420 won't be either. But it appears the game is more CPU-limited, so I'll wait for final judgement :)

What I fear from a competition standpoint is ATI's apparently slower development cycles. If the R420 comes out and is indeed just a souped-up R3.xx, then they have basically designed one core and modified it over the last 18-24 months. From an economic standpoint that is great, but from a feature-set standpoint this will set you back eventually. Over the same time Nvidia will have basically gone through three cores in the NV2.x, NV3.x, and NV4.x.

I guess we wait until May 5th and see what ATI has to offer. My hope is they surprise us with an SM3.0 model that includes full PS3.0 support. But my guess is it will be a souped-up R3.xx with a slightly higher clock and more pipelines. :(
 
Maintank said:
What I fear from a competition standpoint is ATI's apparently slower development cycles. If the R420 comes out and is indeed just a souped-up R3.xx, then they have basically designed one core and modified it over the last 18-24 months. From an economic standpoint that is great, but from a feature-set standpoint this will set you back eventually. Over the same time Nvidia will have basically gone through three cores in the NV2.x, NV3.x, and NV4.x.

To say that nVidia went through three cores vs. ATI's one is simply not correct. NV2x was on its last legs when R300 launched, and NV3x was late. Using your logic, I could say ATI went through three cores (R100, R200, R300/R420) in the same time nVidia went through three (NV20, NV30, NV40), since R200 was launched well after NV20 was.

Anyway, yes, from a feature standpoint, it will set you back eventually. Key being eventually. I'm not concerned that R420 isn't (presumably) a new core...ATI did it that way because they could, since R300 was a very solid foundation. Much like nVidia did the GeForce4 thing, instead of going to a PS 1.4 part, for example.

In short, both ATI and nVidia basically did "what they had to do, and nothing more", which is what we can typically expect. nVidia really had to come up with a new core this gen. Extending the R3xx core vs. NV3x core fight further would be really, really tough for nVidia to do. NV3x's flaws are rather well known.

On the other hand, ATI doesn't "need" a new core at this time...provided that they can show a performance and/or quality advantage over the competition, which of course remains to be seen. That being said, eventually the lack of feature support (like PS 3.0) will indeed have to be addressed. The question is "when?" And the answer to that depends on exactly what kind of difference PS 3.0 makes, or is perceived to make.
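
For what it's worth, the headline practical difference is per-pixel dynamic branching: a PS 3.0 style shader can skip work for pixels a light doesn't reach, where a PS 2.0 style shader generally has to evaluate everything and mask the result. A rough C sketch of the idea (mine, purely illustrative - not real shader code):

Code:
struct vec3 { float x, y, z; };

/* Squared distance between two points. */
static float dist2(struct vec3 a, struct vec3 b)
{
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return dx * dx + dy * dy + dz * dz;
}

/* Stand-in for the expensive part of a lighting equation. */
static float expensive_lighting(struct vec3 p, struct vec3 light)
{
    return 1.0f / (1.0f + dist2(p, light));
}

/* PS 2.0 style: no real flow control, so every pixel pays for the full
 * calculation and the out-of-range case is masked to zero afterwards. */
float shade_ps20(struct vec3 p, struct vec3 light, float radius)
{
    float lit = expensive_lighting(p, light);
    float in_range = (dist2(p, light) <= radius * radius) ? 1.0f : 0.0f;
    return lit * in_range;
}

/* PS 3.0 style: dynamic branching lets a pixel bail out early, so
 * pixels the light can't reach skip the expensive math entirely. */
float shade_ps30(struct vec3 p, struct vec3 light, float radius)
{
    if (dist2(p, light) > radius * radius)
        return 0.0f;                 /* early out, no lighting cost */
    return expensive_lighting(p, light);
}

Whether that turns into a real win depends on how coherent the branches are across neighbouring pixels, which is exactly the kind of thing we'll only find out once games start using it.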
 
Vince said:
Rican, give me a break. You just posted a comment that goes against what I stated. IBM itself isn't having a problem, it's been producing Power4 derivatives, 970FX's, on the 90nm node for a few months AFAIK.

IBM is having problems with the 970FX; both yields and bin splits are bad.
 
To say that nVidia went through three cores vs. ATI's one is simply not correct. NV2x was on its last legs when R300 launched, and NV3x was late. Using your logic, I could say ATI went through three cores (R100, R200, R300/R420) in the same time nVidia went through three (NV20, NV30, NV40), since R200 was launched well after NV20 was.


OK, AFAIK the NV2.x series was the GF4, which was launched in Feb/March of 2002. R200 AFAIK was the 8500, which was launched in the Fall of 2001.
R3.xx came out in August of 2002, and at that point the GF4 had been in the channel for maybe four months, which IMO would hardly put it on its last legs.

I am failing to see how you can put the R100 into this argument - that was launched when, like 2000? At that point Nvidia was just getting the GF2 out the door.

If the R420 does indeed turn out to be a souped-up R3.xx core, that means they will essentially have gone from August of 2002 until the R500, which will probably launch in the Spring of 2005, using the same core. In the same timeframe Nvidia will have gone from the GF4 to the NV50, which are all distinct cores with distinct feature sets. That is the point I am making on development cycle times.

And that is where I fear ATI will fall behind.


Anyway, yes, from a feature standpoint, it will set you back eventually. Key being eventually. I'm not concerned that R420 isn't (presumably) a new core...ATI did it that way because they could, since R300 was a very solid foundation. Much like nVidia did the GeForce4 thing, instead of going to a PS 1.4 part, for example.

From a feature-set POV I am not so sure ATI wasn't already behind the curve when the 5800 came out. But the simple fact that nobody is taking advantage of PS2.0 or PS2.0+ on the NV3.xx cards, and that Nvidia produced a developer card, has helped them tremendously. But this generation, with the apparent lack of PS3.0 in the R420, the feature gap is growing quite large and it may come back and hurt them.


In short, both ATI and nVidia basically did "what they had to do, and nothing more", which is what we can typically expect. nVidia really had to come up with a new core this gen. Extending the R3xx core vs. NV3x core fight further would be really, really tough for nVidia to do. NV3x's flaws are rather well known.


But the simple fact I am pointing out is that Nvidia is coming out with new cores with extended feature sets on a 12-month basis, where it looks like it could be upwards of 36 months for ATI. That is huge and it could hurt them.

On the other hand, ATI doesn't "need" a new core at this time...provided that they can show a performance and/or quality advantage over the competition, which of course remains to be seen. That being said, eventually the lack of feature support (like PS 3.0) will indeed have to be addressed. The question is "when?" And the answer to that depends on exactly what kind of difference PS 3.0 makes, or is perceived to make.


I think it depends on what developers do with it. My personal opinion on the total lack of PS2.0 titles in the channel is that the developers saw PS3.0's possibilities from a developer's POV and were basically waiting. In the 18 months since we have had PS2.0-compliant cards out, we have what, 2-3 titles that make use of any form of PS2.0? Yet by the end of the year we could see upwards of 10-12 PS3.0 titles?

Obviously, if the developers don't take hold of PS3.0, my opinion will be wrong and PS3.0 may go down in the books as the PS1.4 of its time. If I am right, this could be a trying time for ATI until the R500 comes out. And I fear that time may erode the competition in the market.
 