Official: ATI in XBox Next

Vince said:
On ATI fanaticism in Individuals (not Corporations).

Oh Geezus....Let's talk about ATI fanaticism, as if nVidia fanaticism is wholly absent not only from the posts of others, but from your own. :rolleyes:

I was skimming through the thread when I got back, and I can't help but see the blatant fanaticism based solely on IHV grounds.

Right...such as those posts insisting that this technology partnership is some MS ruse to get nVidia to cave in to a better deal?

What really got my attention is the blatant denial.

Indeed. Try this one on for size: MS is partnering with ATI for the next XBox. And they did so because they believe that the combination of ATI's future technology (which includes performance, cost, power consumption, features, etc.) and ATI's business model is overall superior to nVidia's.

End of story. No denial required.

I ask Joe what happened to R400 and he comes back with, "Who needs the R400 at this juncture, Vince". Super, answer my damn question this time... ok?

OK, you want me to answer your damn question? I'll do it again. Who needs the R400 at this time?

But I'll spell it out for you.

As far as we know, R400 is a DX10-based architecture. What happened to it is that ATI realized that, considering they already have the superior DX9 core, what they really need is mostly a faster core; they don't need a new next-generation core at this time to remain competitive in the PC space. DX10 won't be useful until 2004 or 2005. Technology- and feature-wise, R3xx is superior to NV3x. ATI is the "de facto standard" upon which DX9 games are built...and, if desired, that nVidia cards are "optimized for."

In short: ATI does not need, at this time, to improve the characteristics of their core, just its performance.

It's the same situation nVidia was in during the DX8 era. nVidia didn't NEED the NV30 to compete with the Radeon 8500. They just needed a "faster" GeForce3 core. Hence, the GeForce4 Ti.

It would be different if ATI had the "inferior" DX9 core. That is, if nVidia were the de facto DX9 standard, then ATI would need to get the "next gen core" on the market ASAP to try and get the technology leadership back. I'm sure this is what nVidia is trying to do by pushing the NV40 out as soon as possible. In all likelihood, though, it's too late for nVidia in the DX9 era....if ATI can come up with a Loki that is as fast as or faster than the NV40.....even if it's not as feature-rich.

So where is the R400? It was scrapped, because it wasn't a wise business move to pursue it at this time. It is smarter (less risky) to go after Loki.

I can understand why people like ATI; who likes nVidia?

I can't understand why people don't like ATI.

What I can't understand is how ATI has become such a master of the IP and technology.

Who says they are? They are certainly the current technology leaders in my book...that doesn't mean they will be tomorrow or have a lock on it.

(See DaveH's post about placing too much blame on the faltering of nVidia, rather than crediting the success and technical achievement of the R300.)
 
Vince said:
On ATI, nVidia, Microsoft, and XBox.

OK, here is why I propose that nVidia is the superior IHV to deliver a completed project.

nVidia, as someone on B3D stated, is a semiconductor company, as opposed to ATI and their current IP agreement. The difference is that nVidia would control the back-end design in a singular development pipeline, utilizing IBM, which is an advanced group - an avant-garde of the silicon world ranking up there with Intel. This tighter process, along with nVidia's history of advanced designs utilizing bleeding-edge lithography, is a good proposition IMHO.

However, this deal is for IP, and as such it has, at this time, a level of ambiguity to it that I'll now discuss. IMHO, I see two basic and reasonable paths along which this agreement and technology could transpire.

  • ATI sells Microsoft [MS] a completed front-end design and MS outsources (perhaps to ATI in a future agreement) the synthesis, et al.
  • ATI breaks the R500 down into functional constructs [e.g. comparable to a discrete "Shader Unit"] and sells these to Microsoft, who utilizes a physical-reuse type scheme based on fab, etc.

So, the first option still houses the lithography-based problems I previously discussed concerning formulating a part for an XXnm process, although it would yield a more competitive and efficient design relative to the baseline of a full in-house design, as nVidia did with the NV2A.

The second option is what I'm currently thinking is possible. It's more scalable and flexible in terms of lithography changes, as well as matching recent ATI actions. The downside, I'd guess, is that by utilizing modular blocks and just physically reusing them, you'd have lower die-area efficiency. Perhaps making back-end design harder and resulting in unused die area and such things. Perhaps Mfa or someone of his caliber could comment?

Thus, it's still my belief that nVidia would have been the better solution in terms of an end product - bias withheld. Although, as has often been the case, intercorporate politics is a powerful force.

Nice post. You sound very reasonable when you're not talking about anything Sony. ;) I must add to your thinking and say that I find the first option really unlikely. 2005 and 2006 just happen to be the years when the industry starts to move from 90nm to 65nm. In the first option you gave, assuming that ATI will wait for a process to mature before using it, as they have done with .13u, ATI would be at .09u, which would put it at a large disadvantage against a company that has moved to .065u as soon as possible. The disadvantage is large enough that I don't think a complete front-end design from ATI will even be considered by MS.
 
Hmm, it seems you missed that whole year and the R300 and NV30 launches.
Being on the smaller process doesn't mean your core will be the best performer....
 
nonamer said:
2005 and 2006 just happen to be the years when the industry starts to move from 90nm to 65nm. In the first option you gave, assuming that ATI will wait for a process to mature before using it, as they have done with .13u, ATI would be at .09u, which would put it at a large disadvantage against a company that has moved to .065u as soon as possible. The disadvantage is large enough that I don't think a complete front-end design from ATI will even be considered by MS.

There are two major issues with that line of thought.

1) It's possible, if not likely, that the licensing deal doesn't have ATI choosing the fab or the process on which the GPU is going to be manufactured. In other words, choosing the graphics IP company has little to do with choosing the process on which the product is built.

2) As PatrickL is saying: even if it is "ATI on 0.09u vs. nVidia on 0.065u"...recent history tells us that the former route yields a better product.
 
nonamer said:
Vince said:
On ATI, nVidia, Microsoft, and XBox. *snip - quoted in full above*

I think it's too early to say who is going to fab the chip.

Think about this, though. The Micron-led GDDR-3 group had who as a major backer from the graphics industry? ATI. Who, just a few years back, seemed to have serious ambitions for chipsets and GPUs with eDRAM? Micron. Who has an Intel bus license, among other things? ATI. Who has (or had, at least) an AMD chipset? Micron.

What I'm getting at is that, between these two, they could design and fabricate just about any chipset for either an AMD or Intel CPU. This would give Microsoft some flexibility in choosing whom to go with CPU-wise. Perhaps the two corporate cultures of ATI and Micron getting along with each other influenced Microsoft a little in the decision?
 
PatrickL said:
Hmm, it seems you missed that whole year and the R300 and NV30 launches.
Being on the smaller process doesn't mean your core will be the best performer....

I've seen this mentioned quite a few times when the issue of moving to a better process for performance reasons arises. However, there are some major differences:

  • The jump from .09u to .065u is a lot bigger than the jump from .15u to .13u.
  • The NV30 was a weaker implementation of .13u than the R300 was of .15u (physically, NV30 was smaller than the R300, IIRC).
  • The architectures of NV30 and R300 were different, but that's not the case here, since it is mostly the same architecture on different processes.

The only real problem with going early to a more advanced process is that it could take some time for it to get up to speed, as seen with the current batch of GPUs. Doing it may push XB2's launch back to 2006 if .065u is as difficult to implement as .13u seems to be, even if .065u is available in 2005 from many fabs.
 
As far as we know, R400 is a DX10-based architecture.

And what is a DX10-based architecture? And when will it arrive? Kinda odd, considering how far away Longhorn is, and MS's focus on DX9.L for it. I don't see them really doing much with regard to the stuff they might want to put in DX10 (whatever that will be), as the biggest focus is the massive redesign of drivers required for Longhorn (and something that GPU vendors have been learning by developing drivers for OS X).
 
The NV30 was a weaker implementation of .13u than the R300 was of .15u (physically, NV30 was smaller than the R300, IIRC).

They had roughly the same die size.

[edit] ATI doesn't seem to have had the same difficulties with their 130nm implementation either - RV350 has consistently been cited as having great yields; NV31 (roughly the same complexity and die size as RV350) is one of the chips said to have the continuing yield issues that plague NVIDIA at present.
 
Even if the picture is identical, the text on it isn't. So why would Kutaragi go and relabel the picture with wrong numbers? To mislead the competition? Remember, this speech was given in September 2002. Even the article says Kutaragi said 1 cell = 1 GFLOP.

Kutaragi was explaining the cellular architecture. Deep Blue is one implementation of it.

I have to ask others on this forum this question: is it possible to make a chip that has 4 general processor cores and 32 "Altivec"-like units on it, using a 65nm process, without problems like too large a die size, power, and heat?

Sure. Size and heat could be a problem, but then they just have to improve their process and cooling solution.

I'm not saying it absolutely can't be done, just skeptical.

Yes, it's fair to be skeptical.

But with this cellular design, they can improve yield, similar to how ATI can turn a bad 9800 into a 9600. With this processor layout powering everything in Sony's line, Sony can do that.

If this were a uniprocessor design, then it could be a problem.
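To put rough numbers on that yield argument, here is a minimal sketch assuming a simple Poisson defect model and made-up die/block figures - none of these numbers come from the thread. Disabling one bad block to sell a cut-down part is the ATI 9800-to-9600 trick mentioned above:

```python
import math

def poisson_yield(area_cm2: float, defect_density: float) -> float:
    """Probability that a die of the given area has zero defects,
    under a simple Poisson defect model: Y = exp(-D * A)."""
    return math.exp(-defect_density * area_cm2)

# Hypothetical figures, purely for illustration.
die_area = 2.0   # cm^2 total die
n_blocks = 8     # identical redundant blocks (e.g. vector units)
block_area = die_area / n_blocks
d0 = 0.5         # defects per cm^2 on an immature process

# Yield of a fully working die (all blocks defect-free).
perfect = poisson_yield(die_area, d0)

# Dies with exactly one defective block can still be sold as a
# cut-down part with that block disabled.
p_good = poisson_yield(block_area, d0)
one_bad = n_blocks * (1.0 - p_good) * p_good ** (n_blocks - 1)

print(f"fully working dies:        {perfect:.1%}")   # ~36.8%
print(f"salvageable (1 bad block): {one_bad:.1%}")   # ~39.2%
print(f"total sellable:            {perfect + one_bad:.1%}")
```

Under those assumptions, salvage roughly doubles the number of sellable dies, which is why a modular, redundant layout helps so much on an immature process.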
 
nonamer said:
The jump from .09u to .065u is a lot bigger than the jump from .15u to .13u.

That swings both ways. In other words, a "lot bigger of a jump" not only brings a bigger opportunity for improvements, but larger risks as well.

The NV30 was a weaker implementation of .13u than the R300 was of .15u (physically, NV30 was smaller than the R300, IIRC).

See Dave's post. But regardless....you are now saying that although nVidia went with a "weaker" NV30 design...they would not do the same with an early 0.065u part? (Make it a "weak" 0.065u part, vs. ATI making a "strong" 0.09u part?)

The architectures of NV30 and R300 were different, but that's not the case here, since it is mostly the same architecture on different processes.

No idea what you're getting at here.

The only real problem with going early to a more advanced process is that it could take some time for it to get up to speed, as seen with the current batch of GPUs.

That's not just some little problem. That's a BIG problem. Console launches require huge amounts of inventory at launch. You take big risks by relying too heavily on technology out of your control (readiness of advanced fabs, or advanced memory tech). Big risks can have big rewards, of course....but also big failures. It all depends on Microsoft's risk tolerance.
 
DaveBaumann said:
The NV30 was a weaker implementation of .13u than the R300 was of .15u (physically, NV30 was smaller than the R300, IIRC).

They had roughly the same die size.

[edit] ATI doesn't seem to have had the same difficulties with their 130nm implementation either - RV350 has consistently been cited as having great yields; NV31 (roughly the same complexity and die size as RV350) is one of the chips said to have the continuing yield issues that plague NVIDIA at present.

Are you sure? link

If the R300 had 107-110 million transistors (and approx. 219mm^2) and the NV30 had 125M, then:

.13^2 / .15^2 * 125/107 * 219 ~ 192mm^2. Not quite the same, but I suppose that's close enough by some measure. Anyhow, we should see an R3X0 going against an NV40 or R420, which are going to be much better implementations of .13u, to really see the difference between .15u and .13u.
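As a quick sketch of that arithmetic (using the transistor counts and R300 die area quoted above, and the simplifying assumption that area scales linearly with transistor count and with the square of the feature size):

```python
# Back-of-envelope NV30 die-size estimate from R300's figures,
# assuming an ideal shrink: area ~ (feature size)^2 * transistor count.
r300_area = 219.0          # mm^2, as quoted above
r300_transistors = 107e6
nv30_transistors = 125e6

shrink = (0.13 / 0.15) ** 2
nv30_area_est = r300_area * shrink * (nv30_transistors / r300_transistors)

print(f"estimated NV30 die area: {nv30_area_est:.0f} mm^2")  # ~192 mm^2
```

As Dave points out below, this assumes both chips use their die space with the same layout density, which is exactly the assumption being challenged.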

Joe DeFuria said:
nonamer wrote:
The jump from .09u to .065u is a lot bigger than the jump from .15u to .13u.

That swings both ways. In other words, a "lot bigger of a jump" not only brings a bigger opportunity for improvements, but larger risks as well.

Irrelevant, and read below.

Quote:
The NV30 was a weaker implementation of .13u than the R300 was of .15u (physically, NV30 was smaller than the R300, IIRC).


See Dave's post. But regardless....you are now saying that although nVidia went with a "weaker" NV30 design...they would not do the same with an early 0.065u part? (Make it a "weak" 0.065u part, vs. ATI making a "strong" 0.09u part?)

Read below.

Quote:
The architectures of NV30 and R300 were different, but that's not the case here, since it is mostly the same architecture on different processes.


No idea what you're getting at here.

If you read Vince's post, you'll see I was talking about letting ATI design the whole thing and putting it on 90nm vs. MS using the IP with some redesigning and then putting it on 65nm. In that situation, you won't have an NV30 vs. R300 problem where the architectures were different.

Quote:
The only real problem with going early to a more advanced process is that it could take some time for it to get up to speed, as seen with the current batch of GPUs.


That's not just some little problem. That's a BIG problem. Console launches require huge amounts of inventory at launch. You take big risks by relying too heavily on technology out of your control (readiness of advanced fabs, or advanced memory tech). Big risks can have big rewards, of course....but also big failures. It all depends on Microsoft's risk tolerance.

Duh. That's why I said going with 65nm may force a 2006 launch. Stop selectively reading next time.

If you suggest that MS will release XB2 with a GPU on 90nm because it's safer, then I have to disagree. From all the PS3 topics on this board, it looks like the PS3 will be a strong implementation of 65nm (meaning it is big even on 65nm). Anything at 90nm, done well or poorly, would make poor competition performance-wise (unless Cell is not all it's cracked up to be, ;) to Sony fans). Besides, since the PS3 is on 65nm, an XBox2 on the same process would be out around the same time, assuming MS spends the same amount of money doing it.
 
.13^2 / .15^2 * 125/107 * 219 ~ 192mm^2. Not quite the same, but I suppose that's close enough by some measure.

You are mistakenly assuming that they used the available die space in the same fashion. Space used != number of transistors at a given size.
 
nonamer said:
...Irrelevant, and read below.

....Read below.

If you read Vince's post, you'll see I was talking about letting ATI design the whole thing and putting it on 90nm vs. MS using the IP with some redesigning and then putting it on 65nm. In that situation, you won't have an NV30 vs. R300 problem where the architectures were different.

Then what exactly are you trying to say here?

Are you arguing for or against a true "fabless semi-con" model vs. an IP licensing model?

Why would one model have the option of 0.065u, and the other model not? And how, if MS is licensing the IP and going to fab at 0.065u, does this make for a worse chip than whatever you are dreaming nVidia would have come up with?

Duh. That's why I said going with 65nm may force a 2006 launch. Stop selectively reading next time.

I'm not selectively reading. I'm trying to make sense of what you are posting.

If you suggest that MS will release XB2 with a GPU on 90nm because it's safer, then I have to disagree.

I'm suggesting that there certainly ARE valid reasons for MS to do so. Will they? I don't know. I'm not guessing one way or the other.

The point is, whether MS goes with 0.065u or 0.09u doesn't make a hill of beans' difference with respect to whom they chose as a technology partner. Especially if we're talking about IP licensing.

From all the PS3 topics on this board, it looks like the PS3 will be a strong implementation of 65nm (meaning it is big even on 65nm).

Look...your entire argument is based on some perceived "power" of the PS3. These architectures are completely different. You just can't say that PS3 on 0.065u is inherently better than a more "conventional" GPU at 0.065u or 0.09u.

Anything at 90nm, done well or poorly, would make poor competition performance-wise (unless Cell is not all it's cracked up to be, ;) to Sony fans).

Right.
 
A little knowledge is a dangerous thing. ;) I once had a guy who had just finished flight school try to tell me that sails on sailboats don't operate according to the same laws of physics as wings on airplanes. As if objects moving through air horizontally are somehow different from objects moving vertically?!? Once he had staked out his position, nothing could convince him of his wrongheadedness. Just food for thought. :D
 
Joe, I've got to get my ass to the Sox game so I'll respond to your semantics later. But, while I'm gone I want to hear from you for a change.

What do you think this deal entails? You keep mentioning IP, but it's a nondescript term and basically a way for you to hide without putting down a specific proposition.

What is IP? Is ATI going to hand off a netlist to MS for their IC with maybe 60M logic gates? Or a GDSII? What?

And beyond that, what do you think is the net effect of breaking up the design pipeline as you propose? Go into some detail and use some of your EE knowledge; I'm looking forward to your post.

PS. I'm not going to overlook your response to me; it's coming when I have time.
 
Joe DeFuria said:
nonamer said:
...Irrelevant, and read below.

....Read below.

If you read Vince's post, you'll see I was talking about letting ATI design the whole thing and putting it on 90nm vs. MS using the IP with some redesigning and then putting it on 65nm. In that situation, you won't have an NV30 vs. R300 problem where the architectures were different.

Then what exactly are you trying to say here?

Are you arguing for or against a true "fabless semi-con" model vs. an IP licensing model?

Why would one model have the option of 0.065u, and the other model not? And how, if MS is licensing the IP and going to fab at 0.065u, does this make for a worse chip than whatever you are dreaming nVidia would have come up with?

First, I think you should read Vince's post, since that was what I was referring to in this whole line of talk. This whole thing was made on the assumption that ATI will continue their practice of holding back from using the latest process, and thus will stick to 90nm when 65nm is first available. If MS doesn't follow in the footsteps of ATI and goes ahead with 65nm, then they will have to do some redesigning of their own, which was Vince's other option.

I don't know why you're bringing nVidia into this debate. That other debate is over, at least for now.

Duh. That's why I said going with 65nm may force a 2006 launch. Stop selectively reading next time.

I'm not selectively reading. I'm trying to make sense of what you are posting.

Maybe you're just reading too fast. I don't see how I'm not making sense.

If you suggest that MS will release XB2 with a GPU on 90nm because it's safer, then I have to disagree.

I'm suggesting that there certainly ARE valid reasons for MS to do so. Will they? I don't know. I'm not guessing one way or the other.

The point is, whether MS goes with 0.065u or 0.09u doesn't make a hill of beans' difference with respect to whom they chose as a technology partner. Especially if we're talking about IP licensing.

Like I said, that other debate is over for now. What I'm saying is that MS will want something at 65nm, and that's why I mentioned that only Vince's second option is viable.

From all the PS3 topics on this board, it looks like the PS3 will be a strong implementation of 65nm (meaning it is big even on 65nm).

Look...your entire argument is based on some perceived "power" of the PS3. These architectures are completely different. You just can't say that PS3 on 0.065u is inherently better than a more "conventional" GPU at 0.065u or 0.09u.

Of course. Although I don't share the Sony fans' opinion that the PS3 is unbeatable power-wise, it should still be powerful. While I can't say that the PS3 is inherently more powerful than a GPU, you can't say the opposite either. Chances are, if chip A is twice as big as chip B, then chip A should be more powerful. Thus, IMO, if XB2 is to have a real and serious chance of matching or exceeding the competition, then .065u is desirable, to say the least.

Anything at 90nm, done well or poorly, would make poor competition performance-wise (unless Cell is not all it's cracked up to be, ;) to Sony fans).

Right.

I believe you are agreeing with what I said in parentheses, because otherwise you have conceded the debate. :D
 
Actually, Vince's second option doesn't make any sense. "Physical reuse" is just that, physical - so for something to be "reused", it has to be reused on the process it was initially implemented on, i.e. you can only go from one 90nm process to another 90nm process; you can't go from a 90nm to a 65nm process via "physical reuse".

I'd suggest that it's more likely that, if ATI do not do the back end, they will hand over the core logic to MS in its entirety and MS will seek someone to do it. If MS has a particular fab in mind, and ATI has previously dealt with that fab, then it's probably the case that ATI would do it; if there is another fab in mind, then it may well be someone else.
 
Vince said:
Joe, I've got to get my ass to the Sox game so I'll respond to your semantics later. But, while I'm gone I want to hear from you for a change.

Bleh. Don't respond to semantics and perceived bias for a change. Respond to points made.

What do you think this deal entails?

See Dave's last post.

Basically, I think MS has already chosen a Fab and Process target. It may or may not be with someone that ATI already has direct experience with.

PS. I'm not going to overlook your response to me; it's coming when I have time.

Am I supposed to take that to mean I should be afraid, or that I was hoping you would "overlook it"? :rolleyes:

Whatever your response, take a hint: don't respond as if it's you against the "He-Man nVidia Hater's Club."
 
nonamer said:
First, I think you should read Vince's post, since that was what I was referring to in this whole line of talk. This whole thing was made on the assumption that ATI will continue their practice of holding back from using the latest process, and thus will stick to 90nm when 65nm is first available.

Right...and that's a rather large assumption to make if MS is licensing technology, and not buying graphics chips.

If MS doesn't follow in the footsteps of ATI and goes ahead with 65nm, then they will have to do some redesigning of their own, which was Vince's other option.

No, MS will not have to do "redesigning". MS's contracted engineering team will have to take ATI's core logic design and build a chip using it.

See Sega / NEC / PowerVR relationship.

I don't know why you're bringing nVidia into this debate. That other debate is over, at least for now.

??

So then what is "this" debate about? You agree, then, that nVidia isn't the better partner for MS?

Like I said, that other debate is over for now. What I'm saying is that MS will want something at 65nm, and that's why I mentioned that only Vince's second option is viable.

Again, I have to ask why. Why is 65nm so important to you? They certainly may want 65nm, but why have you decided the rewards outweigh the risks...and why does Microsoft agree with you?

Of course. Although I don't share the Sony fans' opinion that the PS3 is unbeatable power-wise, it should still be powerful. While I can't say that the PS3 is inherently more powerful than a GPU, you can't say the opposite either.

Correct. Which is why I never did. But you do appear to be saying PS3 is in fact inherently more powerful...since you are so adamant that MS "wants" 0.065u.

Chances are, if chip A is twice as big as chip B, then chip A should be more powerful.

That's only true if the architectures are remotely similar. Cell and a future ATI GPU, as far as we know, are not.

Thus, IMO, if XB2 is to have a real and serious chance of matching or exceeding the competition, then .065u is desirable, to say the least.

Sorry, I just don't agree with that logic. I mean, "more transistors, all else being equal" is of course more desirable. But all else isn't equal. There are higher risks in going with more advanced processes.

I believe you are agreeing with what I said in parentheses, because otherwise you have conceded the debate. :D

No, I agree that PS3 (cell) may not be all it's cracked up to be by Sony fans. ;)
 
I am a bit confused here.
Vince said:
nVidia, as someone on B3D stated, is a semiconductor company, as opposed to ATI and their current IP agreement.
How is ATI any less or more of a semiconductor company than nVidia?
 