Q&A with AMD’s Rick Bergman on the graphics sweet spot

Jawed

Legend
http://venturebeat.com/2008/11/26/q...n-fighting-nvidia-with-a-sweet-spot-strategy/

VB: How long ago did ATI start thinking about its “sweet spot” strategy?

RB: It was three or four years ago. We recognized we could not continue on with huge die sizes. It was before the merger with Advanced Micro Devices. We were still working on a chip called the R600. We were thinking of what we could do for the R700. We decided we couldn’t do another chip that was so big because of power consumption. You can just plot how die sizes have grown generation after generation.
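Bergman's "plot how die sizes have grown" can be illustrated with rough, commonly cited die areas for ATI's high-end chips. A minimal sketch (these figures are approximate public estimates, not from the interview):

```python
# Approximate, commonly cited die areas (mm^2) for ATI's high-end GPUs,
# illustrating the generation-over-generation growth Bergman describes.
# These are rough public estimates, not official numbers.
die_sizes = {
    "R300 (150nm, 2002)": 218,
    "R420 (130nm, 2004)": 281,
    "R520 (90nm, 2005)": 288,
    "R600 (80nm, 2007)": 420,
}

areas = list(die_sizes.values())
# Each high-end generation's die is larger than the last...
assert all(a < b for a, b in zip(areas, areas[1:]))

# ...until RV770 (55nm, 2008) reversed the trend at roughly 256 mm^2.
for chip, mm2 in die_sizes.items():
    print(f"{chip}: ~{mm2} mm^2")
```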

VB: ATI has always aimed to be more aggressive at adopting new manufacturing technologies, which can give a design more performance and other benefits. Nvidia has been more cautious. It argues that you don’t want to try a new design and a new process at the same time.

RB: For the last six process advances at Taiwan Semiconductor Manufacturing Co., we have been first. I don’t understand Nvidia’s thinking there. But when a new process comes along, we jump on it. There are risks to that. We have our moments. When you switch to a new process, a lot of things can go wrong. When you don’t switch and stay with an existing process, there are a lot of tools that help you out. Once you have proven a process node, you are sure your subsequent chips will work fine. We have made it through issues with new processes and gotten through them just fine.
Hmm.

Jawed
 
a) They haven't been the first. In the last six, nV has had smaller chips come out on a few process nodes before ATi. It's possible they started designing first, but first to market is quite different.

b) That reads like marketing BS.
 
a) They haven't been the first. In the last six, nV has had smaller chips come out on a few process nodes before ATi. It's possible they started designing first, but first to market is quite different.

b) That reads like marketing BS.

Uhm? I'm quite positive that at least 80, 65 and 55nm all went to ATI first?
 
Weren't they first to 90nm by a few months with Xenos, too? (360 came out in '05, G71 in '06.) I'm under the impression that ATI has led the process race (performance/features are a different question) for a few cycles, basically since NV30.

I don't know how revisionist his answers were, but I'm curious if ATI "has always aimed to be more aggressive at adopting new manufacturing technologies." Was this the case back with Rage128? R100? In the context of "always," it's hard to stomach that "NV has been more cautious," given that they were pushing process when 3dfx was still around. It seemed like Rick side-stepped the potentially revisionist aspect of GPU history implicit in Dean's question by only acknowledging the last six new half-/nodes.
 
Well I think NVs cautious stance wrt new process nodes goes back to the huge issues they had with NV30 on 130nm. ATI was able to release R300 with a good six month lead because they stuck with 150nm. I think NV30 was more bad luck than anything though, ATI has been very aggressive with new process nodes since and have not run into problems comparable to those NV faced with NV30.
 
It seemed like Rick side-stepped the potentially revisionist aspect of GPU history implicit in Dean's question by only acknowledging the last six new half-/nodes.
Bear in mind that Rick started at ATI in 2001.
 
Freak'n Panda, now it's my turn to misremember, but weren't R520's and R600's delays comparable to, if not as extreme as, NV30's?

Dave, duly borne! But while that may raise my opinion of the propriety of the answer, it don't help its craftiness none. ;)
 
Has ATi's process node "advantage" helped them any? R520 and R600 didn't do that great in comparison to the guys in green.
 
R520 was a bit smaller than G70 and more future-proof. R600 wasn't the first 80nm ATI chip; that was RV570, indeed a very successful chip. RV630 and RV610 were the first on 65nm. They were slower than G84 and G86, but probably cheaper to make.
 
The last six advances being as follows?

55nm - RV670 (successful)
65nm - RV610/630 (successful)
80nm - RV570 (successful) / R600 (not successful)
90nm - R520 (successful, delay not due to process) / Xenos (successful)
110nm - RV410 (not successful)
130nm - NV30 (not successful)
(half-nodes included, my assertions in parentheses)


So overall I'd say it seems to pay off to adopt new processes quickly, and the track record seems to be improving lately.


Has ATi's process node "advantage" helped them any? R520 and R600 didn't do that great in comparison to the guys in green.

They would not have done better in older process technology, only more expensive to make and sell. :)
 
So ATi targeting the latest process technology had nothing to do with the delays? It just seems that the company pushing the process technology has also been the one hit with delays.
 
Depends. R520 was not delayed due to the manufacturing process (which Xenos had already been using). However, I think RV570 was delayed due to TSMC being unable to provide the 80nm process as originally planned. As for R600... we know for a fact that ATI had the chips done and manufactured in January or February 2007. Some say they put the release on hold for TSMC to improve yields and leakage characteristics, so that R600 could beat G80. Others say that ATI tried to do a respin, but in the end decided to launch the first revision (the one they had ready in Jan/Feb).
 
Let's just assume for a second that ATI hadn't chosen the above strategy and had aimed for a single high-end chip to beat GT200. Even if it had ended up roughly 100mm^2 larger than RV770 at 55nm, I suspect it would still have had a significant advantage in manufacturing cost, headroom for a better price/performance ratio than GT200, and last but not least lower power consumption than a 4870X2.

I'd think anyone with common sense can see that AMD has an absolute winner on its hands with RV770; personally I'm just not yet convinced by any multi-chip/GPU configuration.
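The manufacturing-cost side of this argument can be sketched with back-of-the-envelope numbers (my own figures, not from the thread): even a hypothetical single chip 100 mm^2 larger than RV770 would still yield far more candidate dies per 300mm wafer than a GT200-class die, using the standard gross-die approximation.

```python
import math

def gross_dies(die_area_mm2: float, wafer_diameter_mm: float = 300.0) -> int:
    """Standard rough estimate of candidate dies per wafer:
    wafer area / die area, minus an edge-loss correction term."""
    r = wafer_diameter_mm / 2.0
    return int(math.pi * r * r / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2.0 * die_area_mm2))

# Approximate die areas (rough public figures): RV770 ~256 mm^2,
# a hypothetical RV770+100 mm^2 single high-end chip, GT200 ~576 mm^2.
for name, area in [("RV770 ~256", 256),
                   ("hypothetical ~356", 356),
                   ("GT200 ~576", 576)]:
    print(f"{name} mm^2: ~{gross_dies(area)} candidate dies per 300mm wafer")
```

Yield compounds this further, since defect-limited yield falls off with die area, so the per-good-die gap would be even larger than the gross-die gap.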
 
Dave, duly borne! But while that may raise my opinion of the propriety of the answer, it don't help its craftiness none. ;)

I don't think it has much to do with craftiness, but more about speaking from experience.

The last six advances being as follows?

130nm - NV30 (not successful)

I think RV350 (130nm) actually made it to market before NV30, although NVIDIA had demos and samples before.
 
Let's just assume for a second that ATI hadn't chosen the above strategy and had aimed for a single high-end chip to beat GT200. Even if it had ended up roughly 100mm^2 larger than RV770 at 55nm, I suspect it would still have had a significant advantage in manufacturing cost, headroom for a better price/performance ratio than GT200, and last but not least lower power consumption than a 4870X2.

I'd think anyone with common sense can see that AMD has an absolute winner on its hands with RV770; personally I'm just not yet convinced by any multi-chip/GPU configuration.

ATi already had 55nm parts before RV770 though, and large parts at that, with the HD3870 and HD3850. Which leads me to exclude RV770 from their usual process-pushing pattern. GT200 vs RV770 in that regard just seems to be more about Nvidia being rather slow than ATi being particularly fast.
 
I don't think it has much to do with craftiness, but more about speaking from experience.
I guess a wink can't cover up my poor grammar. By 'help' I meant 'add,' as in your clarification mooted any deft sidestepping I could have inferred had Rick been with ATI longer.
 
ATi already had 55nm parts before RV770 though, and large parts at that, with the HD3870 and HD3850. Which leads me to exclude RV770 from their usual process-pushing pattern. GT200 vs RV770 in that regard just seems to be more about Nvidia being rather slow than ATi being particularly fast.

I don't disagree; my point was though that there wasn't anything that could have saved the day for NV. Even if in theory GT200 had come initially at 55nm and RV770 were (in theory again) a high-end single-chip solution also at 55nm, it still seems to me that AMD would have sustained a good die-size, price/performance etc. advantage.

Always in reply to the following sentence of the interview:
We decided we couldn’t do another chip that was so big because of power consumption.

In other words they could have done a bigger single chip with RV770 or "R700" as he calls it (unless he means something else with the latter) without driving power consumption through the ceiling.
 
Let's just assume for a second that ATI hadn't chosen the above strategy and had aimed for a single high-end chip to beat GT200. Even if it had ended up roughly 100mm^2 larger than RV770 at 55nm, I suspect it would still have had a significant advantage in manufacturing cost, headroom for a better price/performance ratio than GT200, and last but not least lower power consumption than a 4870X2.
You're absolutely right, and to be honest I think ATI's reasoning for their decision is a bit suspect.

My theory is that they weren't expecting to so drastically improve their perf/mm2 at the beginning of the R7xx design phase. G80 was out, and it was clear to them that R600 was unlikely to be close to beating it so they weren't going to have nearly as good margins with R600 as they wanted. They just played it safe.

RV770 was a home run, and it would have been even more so if they had shot for the high end. However, if a similarly configured GPU had been made with RV670 tech it would probably be 100 mm^2 larger just to match current RV770 performance, and projected GT200 performance (I think ATI expected more, just like we did) would have required an enormous die. Thus it was deemed too risky to compete with NVidia at the high end at the beginning of RV770's design phase. Who knows when they figured out how to increase density so much.

Now ATI should go for the high end, since they surprised NVidia and everyone else. I hope they do, because as you've illustrated, such "reasoning" for targeting the sweet spot doesn't work anymore.

ATI's SPs and TMUs have incredible perf/mm2 and consume a lower percentage of die space than they do on NVidia's designs, and I see no reason that they won't scale well to 40nm. It's clear to me that ATI's perf level will rely on ROP count and setup speed. I hope they made high-end design choices there, like 2 ROP quads per 64-bit memory channel and multiple triangles per clock setup.
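The perf/mm^2 claim can be put in rough numbers. A back-of-the-envelope sketch using approximate public specs for HD 4870 (RV770) and GTX 280 (GT200), with peak single-precision throughput over die area (my own figures, not from the post):

```python
# Back-of-the-envelope perf/mm^2 comparison (approximate public specs).
# HD 4870 (RV770): 800 SPs x 2 flops (MAD) x 0.750 GHz, ~256 mm^2 die.
# GTX 280 (GT200): 240 SPs x 3 flops (MAD+MUL) x 1.296 GHz, ~576 mm^2 die.
rv770_gflops = 800 * 2 * 0.750        # 1200 GFLOPS peak
gt200_gflops = 240 * 3 * 1.296        # ~933 GFLOPS peak

rv770_density = rv770_gflops / 256    # GFLOPS per mm^2
gt200_density = gt200_gflops / 576    # GFLOPS per mm^2

print(f"RV770: {rv770_density:.1f} GFLOPS/mm^2")
print(f"GT200: {gt200_density:.1f} GFLOPS/mm^2")
```

On these numbers RV770's ALU density comes out nearly 3x GT200's, which is the gap the post is pointing at (peak FLOPS, of course, not delivered game performance).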
 
ATI's SPs and TMUs have incredible perf/mm2 and consume a lower percentage of die space than they do on NVidia's designs, and I see no reason that they won't scale well to 40nm. It's clear to me that ATI's perf level will rely on ROP count and setup speed. I hope they made high-end design choices there, like 2 ROP quads per 64-bit memory channel and multiple triangles per clock setup.

2 quads per memory channel is a good idea with GDDR5, but not GDDR3. This means they'd need to have a top-to-bottom GDDR5 lineup to maintain performance scaling among models, or suffer great disparity with lower-end models that would still use GDDR3.

Multiple triangle/clock setup rate would be nice, but I don't think anyone's cracked that egg yet...
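The GDDR5-vs-GDDR3 point above comes down to bandwidth per channel. A quick sketch with data rates typical of the era (example figures of my own, not tied to any specific board):

```python
# Rough bandwidth per 64-bit memory channel for memory of that era.
# Data rates per pin are typical examples: GDDR3 at 1.0 GHz runs
# 2.0 Gbps/pin (DDR), GDDR5 at 0.9 GHz runs 3.6 Gbps/pin (QDR-style).
def channel_bandwidth_gbs(data_rate_gbps_per_pin: float,
                          bus_width_bits: int = 64) -> float:
    return data_rate_gbps_per_pin * bus_width_bits / 8.0

gddr3 = channel_bandwidth_gbs(2.0)   # -> 16.0 GB/s per 64-bit channel
gddr5 = channel_bandwidth_gbs(3.6)   # -> 28.8 GB/s per 64-bit channel

print(f"GDDR3 64-bit channel: {gddr3:.1f} GB/s")
print(f"GDDR5 64-bit channel: {gddr5:.1f} GB/s")
# Nearly double the bandwidth per channel is what would make feeding
# two ROP quads per 64-bit channel plausible with GDDR5 but not GDDR3.
```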
 