Official: ATI in XBox Next

Fox5 said:
So how about if Microsoft gets Nvidia to fab ATi's chips?

Anyhow, Nvidia may have the superior technology, but I believe it is also currently larger, hotter, louder, and maybe even drains more power, and Microsoft wants the Xbox 2 not to be high-end-PC-like in appearance.

:? Nvidia doesn't have any fabs last I heard.

Anyhow, I'll let Vince carry on my debate since debating Joe is such a royal pain in the ass.
 
Joe and Vince. Demalion and Russ. With bickering like this, it seems that same-sex marriages are already widespread. ;)
 
nonamer said:
Fox5 said:
So how about if Microsoft gets Nvidia to fab ATi's chips?

Anyhow, Nvidia may have the superior technology, but I believe it is also currently larger, hotter, louder, and maybe even drains more power, and Microsoft wants the Xbox 2 not to be high-end-PC-like in appearance.

:? Nvidia doesn't have any fabs last I heard.

Anyhow, I'll let Vince carry on my debate since debating Joe is such a royal pain in the ass.

Whoa, could someone clue me in on what the argument is about, then? I thought it was about ATi's inability to deliver on a cutting-edge fab process and how a third party wouldn't be good enough, so how would Nvidia do any better?
 
Fox5 said:
nonamer said:
Fox5 said:
So how about if Microsoft gets Nvidia to fab ATi's chips?

Anyhow, Nvidia may have the superior technology, but I believe it is also currently larger, hotter, louder, and maybe even drains more power, and Microsoft wants the Xbox 2 not to be high-end-PC-like in appearance.

:? Nvidia doesn't have any fabs last I heard.

Anyhow, I'll let Vince carry on my debate since debating Joe is such a royal pain in the ass.

Whoa, could someone clue me in on what the argument is about, then? I thought it was about ATi's inability to deliver on a cutting-edge fab process and how a third party wouldn't be good enough, so how would Nvidia do any better?

You talked as if ATI and Nvidia make their own chips internally, which isn't the case: they both outsource manufacturing to TSMC/UMC, and soon IBM in Nvidia's case.

However, Nvidia is more aggressive with fab processes than ATi, and I believe the debate is over the benefits and risks of being more aggressive.

Frankly, this whole debate is totally off-track, so perhaps you should look elsewhere for information.
 
Joe DeFuria said:
TNT: Supposed to ship on a more advanced process at 125 MHz. Actually shipped on a less advanced process at 90 MHz. (I think it was 0.35u and 0.25u, but don't remember exactly.)

GeForce1: Successful introduction of a new core...but on old and proven 0.25u lithography. Advanced lithography (0.18u) not utilized until the GeForce2 GTS respin.

GeForce3: By all accounts: late. 0.15u wasn't "ready" as early as they thought it would be. The 0.18u GeForce2 Ultra shipped instead. Related: the X-Box chip's target was 300 MHz...shipped at what, 225 MHz?

NV30: You know the story.

TNT - 350nm
TNT2 - 250nm
NV10 - 220nm
NV15 - 180nm
NV20 - 150nm
NV25 - 150nm
NV30 - 130nm
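A rough aside for anyone following the lithography numbers in that list: to first order, transistor density scales with the inverse square of the feature size, so each shrink buys a sizeable logic-budget increase even before any circuit-level improvements. A minimal sketch (idealized scaling only; real processes deviate from this):

```python
# Ideal (first-order) density scaling between process nodes:
# transistor density scales roughly with 1 / (feature size)^2,
# so a shrink gives about (old/new)^2 more logic per unit area.
nodes_nm = [350, 250, 220, 180, 150, 130]  # TNT through NV30, per the list above

for old, new in zip(nodes_nm, nodes_nm[1:]):
    gain = (old / new) ** 2
    print(f"{old}nm -> {new}nm: ~{(gain - 1) * 100:.0f}% more logic per area (ideal)")
```

By this idealized math the 150nm-to-130nm step is only a ~33% density gain, which gives some scale to the later argument about whether a new architecture needs a new process.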

GeForce3: By all accounts: late. 0.15u wasn't "ready" as early as they thought it would be. The 0.18u GeForce2 Ultra shipped instead. Related: the X-Box chip's target was 300 MHz...shipped at what, 225 MHz?

If you can call the NV20 late after its inclusion in the XBox, then the R400 is late as well. Both chips were mutated into console derivatives and both had some delays (although the R400's is much larger in magnitude).

But, why think rationally about it when you can just spin it to your liking?

What you see as nVidia being "aggressive", I see as a failure to reasonably predict the state of the advanced process. Where you see nVidia being "aggressive" with lithography, I see an over-reliance on advanced lithography, the biggest failure of course being NV30.

I see aggressiveness above. nVidia surpassed all competitors with respect to lithography (perhaps the TNT was an exception) - I'd hardly call this a fluke.

Again, what you see as nVidia's constant "over-reliance" - others see as "consistency". I mean, how do you over-rely on something as fundamental to chipmaking as lithography? Does Intel over-rely too? I can just see the AMD fanb0ys nodding in agreement as everyone else laughs.


As per NV30, I've already stated that nVidia fumbled and ATI picked up the ball and ran with it. But where was ATI when nVidia didn't stumble? Playing second fiddle, that's where.... What's going to happen when nVidia gets back on track (as they already are), starts pumping out advanced SOI/low-K chips from IBM, and ATI is doing the same old things?

You're spinning a deficiency into a plus - and nobody is going to buy it.

In the same vein, ATI has shown that it does not need the same advanced lithography to compete with nVidia.

Oh what utter bullshit. Go heed your calling and go into politics already.

So, because the R300 does well when nVidia stumbles - you can declare them able to compete with nVidia? What happened before nVidia stumbled - ATI got its ass kicked. Is ATI a one-hit wonder? Can't rule it out...

If the R300-NV30 contest should prove anything to ANYONE: no, lithography is NOT everything. It's staring you right in the face, for crying out loud.

It also shows that lithography is everything - enter low-K dielectrics. Whatever happened with NV30 extended well past the manufacturing difficulties - but we still see outward problems in the low-K situation, as well as nVidia's bad yields, which could stem from designing with low-K in mind and ultimately not utilizing it.

Moral of the story: it's not just what process you use, it's how you use it.

Moral of this story: Joe can spin any situation into his biased vision.
 
As per NV30, I've already stated that nVidia fumbled and ATI picked up the ball and ran with it. But where was ATI when nVidia didn't stumble? Playing second fiddle, that's where.... What's going to happen when nVidia gets back on track (as they already are), starts pumping out advanced SOI/low-K chips from IBM, and ATI is doing the same old things?

Vince, ATI didn't "pick up the ball and run with it" - they were executing their plan. What you are seeing from ATI at the moment is the first fruits of two years of growth (company purchases) and restructuring. It's fairly fortunate for them that NVIDIA chose to ignore TSMC's warnings on the state of the processes they were trying to use; however, even if NV30 had been here on time, I don't think the outcome would have been vastly different (although it would have been hidden a little longer). NV3x's pixel shader architecture is quite frankly horrible for DX purposes, whereas R300's is functional, flexible and developer friendly - if you know anything about the architectural differences then you can't deny this. Time to market and the processes used don't change this.

For that matter, it would be unwise to assume that ATI aren't staying on the bleeding edge - they state they don't like making major architectural changes and using new processes at the same time. So, when they start looking into new processes they will do it on current architectures... I wouldn't go shooting one's mouth off about what processes are being used just now...
 
Listen, either stop with the childish bickering and present facts and argue facts, instead of shoving dirt on each other, or I will lock this thread. It's that simple.
 
DaveBaumann said:
For that matter, it would be unwise to assume that ATI aren't staying on the bleeding edge - they state they don't like making major architectural changes and using new processes at the same time. So, when they start looking into new processes they will do it on current architectures...

Dave, you should need the new processes to facilitate your new architectures. Anything else is just a linear increase based on existing designs. With the ability to scale forward by a substantial sum kept until a refresh part, the refresh is still basically tied down to the initial architecture. Why would you not want your engineers to target a new process when thinking up a new architecture?

This is exactly what I meant when talking earlier about ATI's policy.
 
Dave, you should need the new processes to facilitate your new architectures. Anything else is just a linear increase based on existing designs.

What?

By that criterion, you are saying the R300 is not a new architecture over the R200, just a linear increase - clearly a ridiculous comment, Vince.

Architecturally, R300 is more or less completely different to R200. They delivered that while nearly doubling the complexity over R200 and increasing clock speed - clearly this proves your statement is a complete fallacy. They did it by getting a grasp of the process technology on their previous part and then pushing it to the limit (or close to it, as they are still going further with R350) on their architectural changeover.

It's also clear that no other graphics manufacturer has pushed the 150nm process to the limit yet - good use of lithography also means understanding how far you can go with current processes, something that NVIDIA obviously didn't grasp on the changeover from NV25 to NV30.
 
Vince said:
DaveBaumann said:
For that matter, it would be unwise to assume that ATI aren't staying on the bleeding edge - they state they don't like making major architectural changes and using new processes at the same time. So, when they start looking into new processes they will do it on current architectures...

Dave, you should need the new processes to facilitate your new architectures. Anything else is just a linear increase based on existing designs. With the ability to scale forward by a substantial sum kept until a refresh part, the refresh is still basically tied down to the initial architecture. Why would you not want your engineers to target a new process when thinking up a new architecture?

This is exactly what I meant when talking earlier about ATI's policy.

Basically because you don't know how it would perform. And unlike in other industries - say, designing cars - if a product is late to market it is most likely doomed. With a car, if you base it around, say, a new hybrid engine and the design of the engine is not working right, you can stall by selling a current car. It wouldn't be a big deal.


So if ATI had gone with the .13 micron process for the R300, we most likely would not have seen the 9700 Pro in August but closer to January. Thus, by listening to TSMC, ATI got a lead on Nvidia, who put too much faith in the newer process being ready.


IMHO, MS picked ATI for some simple reasons.

First, ATI has a very solid part out on the market and most likely will have solid parts for some time to come.

Then, ATI has experience designing console chips.

ATI also makes very low-power, very cool-running chips compared to Nvidia.

ATI also has better power-saving features in their laptop chips than Nvidia.

ATI also has better image quality, which will be important with HDTVs.

ATI also makes MPEG encoders and decoders. They also have experience with DV video and other things in their All-in-Wonder series.

ATI was willing to work out a deal that didn't require Microsoft to buy chips at a fixed price.

Nvidia is having a bad time right now with all the press about their drivers.

ATI is able to make P4 chipsets using the 800MHz FSB, which is most telling about their chip choice.
 
ATI did this with the original Radeon refresh as well. The 7500 was really zippy for its time.

Look at Intel, and especially the P4 - nice and current. Their architecture (philosophy) targets processes that hadn't even emerged at the time, but their microarchitecture targets the current process.

Who's to say ATI isn't doing much the same?
 
DaveBaumann said:
ATI is able to make P4 chipsets using the 800MHz FSB, which is most telling about their chip choice.

That would have no bearing on MS's choice.

If they are using a P4, wouldn't they want the 800MHz bus instead of, say, the 533MHz bus? It's my understanding that Nvidia doesn't have the rights to use that bus speed. So that would add more reason for MS to go with ATI. I never said it was the only reason.
 
Cyborg said:
Once had a guy who just finished flight school try to tell me that sails on sailboats don't operate according to the same laws of physics as wings on airplanes. Like objects moving through air horizontally are somehow different from objects moving vertically?!? Once he had staked out his position, nothing could convince him of his wrongheadedness.

Not to go too much off topic, but he's right. "Laws of physics" might be the wrong term to use, but I understand what he means: planes rely on pressure created by a vortex along the wing's surface for the atmosphere to "suck" the plane up into the air. It's a totally different approach from that of a sailboat, which just relies on its sail to catch the wind.

Think about what you just said. A sail relies on pressure created by a vortex along the sail's surface... by the way ;)
 
DaveBaumann said:
What?

By that criteria you are saying R300 is neither a new architecture over R200 and just a linear increase - clearly a rediculous comment Vince.

Dave, come on now. We all know that a typical process advance yields anywhere from a 30-80% increase in logic budget. That's absolutely huge in absolute and relative terms.

To design a new architecture, generally based around a DX/OGL revision, would seem to be the time you'd want that extra logic. And then take your architecture and linearly evolve it on what will become a more stable process for a revision.

This is what my comment entailed, and your responses totally missed it. I have no idea what you're talking about. I'm going to guess you're not saying that using the more advanced process is a bad thing (which would be totally bogus), but that you're fixating on how spreading the risk (e.g. new architecture on an old process, revision on a new one) is better than pushing your new architecture on a new process. Which has little historical precedent - NV10/15 and NV20/25 all followed this successfully, as can be seen by the large increases in transistor budgets... it's worked well in the past as far as lithography is involved.

Architecturally, R300 is more or less completely different to R200. They delivered that while nearly doubling the complexity over R200 and increasing clock speed - clearly this proves your statement is a complete fallacy.

In what way? What you're stating is totally irrelevant. 150nm is more expensive than 130nm (per die, not mask), it's less advanced in thermal attributes, less dense in logic.

Without invoking fanb0y qualities of ATI vs. nVidia, how can you spin utilizing an older process as a good thing? We did this with 0.25e and it was the wrong decision.

It's also clear that no other graphics manufacturer has pushed the 150nm process to the limit yet - good use of lithography also means understanding how far you can go with current processes, something that NVIDIA obviously didn't grasp on the changeover from NV25 to NV30.

Ok, this is a fanb0y comment. 130nm allowed for low-K dielectrics, it's cheaper per die, it allows for higher transistor counts, lower thermal dissipation... the list goes on.

But, yet, I'll get 30 replies about how this one time it didn't work out - so it's the de facto standard. So, the one time there are lithography problems and design faults... it becomes the standard. But we'll forget about the other six successful times it worked prior.
 
Dave, come on now. We all know that a typical process advance yields anywhere from a 30-80% increase in logic budget. That's absolutely huge in absolute and relative terms.

Vince - what was the transistor difference between R200 and R300? By my calcs, about 78% (and it's not that different from NV25 to R300) - that's "absolutely huge in absolute terms". Can you acknowledge that? They achieved it on the same process, where you think moving to a new process is required.
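As a sanity check on that ~78% figure, the commonly cited approximate transistor counts are roughly 60 million for R200 and 107 million for R300 (exact numbers vary by source; these are used here only to check the arithmetic):

```python
# Sanity-check of Dave's ~78% figure using commonly cited approximate counts.
r200_transistors = 60e6   # approximate public figure for R200
r300_transistors = 107e6  # approximate public figure for R300

increase = (r300_transistors - r200_transistors) / r200_transistors
print(f"R200 -> R300 transistor increase: ~{increase * 100:.0f}%")  # ~78%
```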

To design a new architecture, generally based around a DX/OGL revision, would seem to be the time you'd want that extra logic.

You might want it; the question is whether you need it. History dictates that in this case it was not needed, and indeed the performance shows that not only was it not needed, but that it could still be achieved with the performance advantage on the side of the 150nm part.

This is what my comment entailed, and your responses totally missed it. I have no idea what you're talking about.

Vince, you stated: "Dave, you should need the new processes to facilitate your new architectures" -- the change from R200 to R300 is clear evidence that this is not the case! It's a new architecture, meeting (exceeding) the needs of a major new API, doubling the performance of its predecessor, and all on the same process as its predecessor; ergo, you don't need a new process to facilitate a new architecture. Can you see that?

I'm going to guess you're not saying that using the more advanced process is a bad thing

Using a more advanced process when it's not ready is a bad thing - both ATI and NVIDIA were warned by TSMC that what NVIDIA wanted to achieve with NV30 would not be ready for some time; from this ATI subsequently chose the 150nm process, while NVIDIA chose to forge on despite those warnings. For R300 and NV30, which was the correct choice?

In what way? What you're stating is totally irrelevant.

Again Vince, you stated "you should need the new processes to facilitate your new architectures" - that is quite obviously not the case.

150nm is more expensive than 130nm (per die, not mask), it's less advanced in thermal attributes, less dense in logic.

Die size and thermal attributes are also very dependent on what you do with your chip. NV30 and R300 have roughly the same die size, and the thermal advantage also probably lies with R300, even at similar clock speeds (which also hands the performance to R300).

Without invoking fanb0y qualities of ATI-nVidia, how can you spin utilizing an older process as a good thing?

Using smaller processes at the right time for the right requirements is the good thing to do. ATI did not need to utilise a smaller process to meet the demands of DX9 whilst still delivering large gains in performance over all previous parts - was it not a good thing that they used 150nm? We'd likely not have seen a DX9 part before 2003 had they not done this.

It's also clear that no other graphics manufacturer has pushed the 150nm process to the limit yet - good use of lithography also means understanding how far you can go with current processes, something that NVIDIA obviously didn't grasp on the changeover from NV25 to NV30.
Ok, this is a fanb0y comment. 130nm allowed for low-K dielectrics, it's cheaper per die, it allows for higher transistor counts, lower thermal dissipation... the list goes on.

:rolleyes: No, Vince. Did any other graphics manufacturer get anywhere near the utilisation of the 150nm process that ATI did?

As for what 130nm brings - yeah, that's the case, but were any of those things brought to consumers in 2002? No. I can sit here and list the benefits of a 90nm-based process, but am I actually going to be able to use those benefits to make my gaming better now? No.

But, yet, I'll get 30 replies about how this one time it didn't work out - so it's the de facto standard. So, the one time there are lithography problems and design faults... it becomes the standard. But we'll forget about the other six successful times it worked prior.

The entire thing is a crapshoot - sometimes you get lucky, others you don't. However, as I said, fully understanding the headroom of current processes is as necessary as understanding the advantages (and drawbacks) of bleeding-edge processes. Both ATI and NVIDIA have warned of increasing cycle times (both in metal revisions and in new processes entirely), so understanding when one process has a suitable amount of headroom left and when a newer one is needed is going to become even more critical as time goes by - making the mistake of shooting for a process that isn't entirely ready may cause even more delays as we move on.
 
Dave, this is very simple. You're taking a general argument, adding layers to it that are not necessary, and adding special-case scenarios to it - and they are just that... special cases.

All things being equal, on what process will architecture N be more economical to produce?

All things being equal, on what process can architecture N support more logic?

All things being equal, on what process does architecture N have more opportunity in the way of performance and its associated attributes?
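The first-order economics behind those questions can be sketched with hypothetical numbers (the die size, wafer size, and wafer cost below are made up for illustration; the model also ignores yield, reticle limits, and the higher wafer/mask costs of a new process, which is exactly where the NV30-style risk hides):

```python
# First-order die-cost model with hypothetical numbers; ignores yield loss,
# edge effects, and the higher wafer/mask costs of a newer process.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    """Crude upper bound: total wafer area divided by die area."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area // die_area_mm2)

die_area_150nm = 215.0                              # hypothetical die size on 150nm
die_area_130nm = die_area_150nm * (130 / 150) ** 2  # ideal shrink of the same design

wafer_cost = 4000.0  # hypothetical cost per 200mm wafer, held equal for both here
for name, area in [("150nm", die_area_150nm), ("130nm", die_area_130nm)]:
    n = dies_per_wafer(200, area)
    print(f"{name}: {n} dies/wafer, ~${wafer_cost / n:.0f} per die")
```

Under these toy assumptions the shrink yields roughly a third more dies per wafer and a correspondingly lower cost per die, which is the "all things being equal" case; the thread's whole argument is about whether all things actually are equal when the process is new.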

DaveBaumann said:
The entire thing is a crapshoot - sometimes you get lucky, others you don't.

This is the best comment you've made thus far. It is a crapshoot, and all the special-case scenarios surrounding NV3x and R3x0 are utterly irrelevant from the standpoint of this discussion or any that is forward-looking. Special cases, like the ones Joe and WHQL love to propagate, are nothing more than fanb0y excuses and arguing points when you attempt to look at the underlying dynamics of a future situation. You must come to the realization that special cases are utterly irrelevant to any case but their own - and as such aren't a good metric for projected observation.

But what is a good indicator are general trends based on precedent and the fundamental and governing laws of design. You forgo all of these in your pursuit of a half-truth based around a single-case scenario - the R300. What happened before? What happened after? What you're doing is just as bad as Joe or Whql - you're focusing on a single case and not the more fundamental picture. It's like the Dow: there will be special cases that allow extrapolation of negative growth going forward, but if you're attempting to look forward you must do what Laffer, Kadlec and Acampora do and look at long-term growth averages and percentage-based growth.

You talk of the R200 -> R300 jump as if it supports your case. How much of the supposed increase is because of logic? How much is due to RAM advances (also lithography-based)? As for the 150nm process, it's nVidia's fault that ATI is anywhere near them. Nobody is debating this; it was their fault that anyone on an inferior process came close to them at a time when computational resources have such influence on overall performance - but it hints at problems further up the development cycle, not the fault of lithography. If you can't see this then it's your own fault, but as I already said and nobody has answered - what's ATI going to do when nVidia launches a product based on a derivative 11S or 12S process out of IBM?

Do you even read what you're writing, bud? What company in a performance-based market has ever succeeded long-term by not taking risks and pushing the envelope? If you want to win in this game you need to live on the edge - I would have thought you'd have figured this out after 3dfx and their design methodology. Whether it's nVidia or STI or someone else... ATI will fall if they stick with their current lithography policy. I don't see pleasant things for a performance-based company that thinks like you do.
 
Vince - what was the transistor difference between R200 and R300? By my calcs, about 78% (and it's not that different from NV25 to R300) - that's "absolutely huge in absolute terms". Can you acknowledge that? They achieved it on the same process, where you think moving to a new process is required.

Well, in regards to the PC arena that's right... but in the console space (well, this is a thread about consoles...) that is not necessarily the case...
 
Well, in regards to the PC arena that's right... but in the console space (well, this is a thread about consoles...) that is not necessarily the case...

I think you're failing to factor in time, and the fact that we're talking about staying within a process. This has probably never happened with a console, since they're released about five years apart.
 