Official: ATI in XBox Next

arhra said:
If they could produce a single chip which runs at 1 TFLOPS, and which is going to be in the PS3, why do they need a whole cabinet (something like a 40U server rack, I'd assume) to reach 16 TFLOPS? Is the PS3 going to be the size of a 2U server? That would make the Xbox seem positively svelte...

I am not sure where that image comes from (I remember seeing it somewhere), but I think it is probably outdated. Anyway, I never said PS3 will use a "Broadband Engine" (although I think it is likely); I was just trying to clarify the terminology a bit.
 
zurich said:
[image: tgsf15.jpg - the Cell slide from TGS 2002]

So assuming each chip costs $150, I could enslave the human race for only $1,728,000,000?! Sweet, all I need to do is take control of one major 1st world country, and my domination of the world is ensured.....
 
I go away for a weekend and the argument goes to shit (and over shit we already covered).

First, to address the Cell slide from TGS 2002.

bbot said:
I used to believe until today that the Cell chip would be 1 TFLOPS. Of course, there was this Japanese article about a speech given by Ken Kutaragi at TGS 2002 that indicated a single Cell chip will be only 32 GFLOPS.

Obviously whoever prepared the presentation was lazy, as it's lifted from IBM's initial Blue Gene project. It [the project, and subsequently the chart] is pre-STI, dating from late 1999.

[image: IBM Blue Gene petaflop diagram - http://www.ibm.com/es/press/fotos/ciencia/i/petaflop.jpg]

Do a Google Image search for "Blue Gene" IBM and you'll find the diagram around 50 times, all from the initial Blue Gene program disclosures around 1999.

All contemporary Cell talk (i.e. GDC 2002) used the new number. You guys are arguing over a non-issue; in fact, it's a non-issue we already discussed here.
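
For those keeping score, the figures on that slide fall straight out of IBM's 1999 Blue Gene announcement. Here's a quick sanity check of the hierarchy (recalled from the 1999 press material, so treat the exact counts as approximate):

```python
# Arithmetic from IBM's December 1999 Blue Gene announcement
# (pre-STI -- this is the machine the diagram describes, not Cell).
gflops_per_cell = 1     # each simple processor ("cell"): ~1 GFLOPS
cells_per_chip  = 32    # -> the 32 GFLOPS/chip figure bbot quoted
chips_per_board = 64
boards_per_rack = 8
racks           = 64

chip  = gflops_per_cell * cells_per_chip    # 32 GFLOPS per chip
board = chip * chips_per_board              # 2048 GFLOPS ~= 2 TFLOPS per board
rack  = board * boards_per_rack             # 16384 GFLOPS ~= 16 TFLOPS per rack
total = rack * racks                        # 1048576 GFLOPS ~= 1 PFLOPS overall
print(chip, board, rack, total)             # 32 2048 16384 1048576
```

That's where both the 32 GFLOPS "chip" and the 16 TFLOPS "cabinet" come from: they're Blue Gene's building blocks, not Cell's, which also answers arhra's question about why a whole rack would be needed.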
 
Next, on the NV3x - R300:

WHQL, pay attention. I'm not going to repeat this.
PiNkY said:
Vince, the differentiation you make between the "advancedness" of NV30's design and its somewhat sub-par performance really is a bit beyond my grasp.

I'm making an architectural distinction between how the R300 and NV30 were designed. For example, to quote the Baumann:

Dave on Geoff Ballew said:
Question: The developer documentation was quoting vertex throughput rates of 1.5 times GeForce4's throughput on a clock for clock basis, which would equate to 3 Vertex Shaders.

Answer: The 1.5 number is old, because we actually have 3 times the vertex performance of Ti4600, at 500MHz.

As we moved to more programmability, instead of implementing a smaller Vertex Shader and replicating it, we tackled the problem from a different direction: we have a pool of calculating units and an intelligent scheduling mechanism at the front of it. So instead of an entire Vertex Shader, what we have made is a sea of vertex math engines that are all glued together with this interface.

What I stated in theory (and still contend is correct) is that in the coming cycles, due to the underlying similarities between these types of computation and the fact that the underlying architecture is similar in each (+/-) case, we'll eventually see the concept of fixed discrete units and pipelines dissolve. We'll have computational resources that are flexible and can be arbitrarily assigned on a per-task basis for tasks that aren't highly iterative. This would be a natural outgrowth of the current trend, and the Shaders will see this synergy first, IMHO.
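
To make the pooled-resources idea concrete, here's a toy sketch (purely illustrative - it models no actual hardware, and the unit counts and cycle costs are made up) contrasting a pool of generic math units behind a scheduler with a fixed vertex/pixel split:

```python
# Toy illustration only -- not a model of NV30 or any real GPU.
class MathUnit:
    """A generic FP engine; in a pooled design it doesn't care whether
    a work item is vertex or pixel math."""
    def __init__(self):
        self.busy_until = 0   # cycle at which this unit becomes free

def schedule(pool, work_items):
    """Greedy front-end scheduler: hand each work item to whichever
    unit frees up first, regardless of the item's type."""
    for kind, cycles in work_items:
        unit = min(pool, key=lambda u: u.busy_until)
        unit.busy_until += cycles
    return max(u.busy_until for u in pool)   # makespan of the frame

# A pixel-heavy frame: a fixed 3-vertex/5-pixel split would leave the
# vertex pipes mostly idle, while the unified pool keeps all 8 loaded.
frame = [("pixel", 4)] * 20 + [("vertex", 2)] * 3
print(schedule([MathUnit() for _ in range(8)], frame))
```

The point is only the scheduling concept: once the units are generic, the split between vertex and pixel work becomes a runtime decision instead of a die-layout decision.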

Now, what I stated in praxis several times:

Vince said:
The future is an advancement following NV3x's direction, or more like Cell - where you have almost full computational flexibility through the pipeline, except where the task is excessively iterative and dedicated logic is the way to go.

Vince said:
OK, this discussion is going nowhere fast. Please go read the discussions concerning the NV3x in the 3D Architecture forum and then come back; take note of the TCL front-end and what it's composed of. Then look at where 3D processing is going [e.g. Unified Shaders] and think about the R300. Now tell me which is closer...

I think this is clear: a sound line of reasoning can be drawn between the NV3x's TCL front-end and the future of Unified Shaders, which will be composed of this very type of construct - especially as opposed to the R300's, which still caters to the discrete-unit ideology. I don't even see room for argument here; not that this will stop WHQL from posting some ATI-fanatic response about 24-bit color, a "pixel shader," and how ATI is feeding the starving in India.
 
On ATI, nVidia, Microsoft, and XBox.

OK, here is why I propose that nVidia is the superior IHV to deliver a completed project.

nVidia, as someone on B3D stated, is a semiconductor company, as opposed to ATI and their current IP agreement. The difference is that nVidia would control the back-end design in a singular development pipeline, utilizing IBM, which is an advanced group - an avant-garde of the silicon world ranking up there with Intel. This tighter process, along with nVidia's history of advanced designs utilizing bleeding-edge lithography, is a good proposition IMHO.

However, this deal is for IP, and as such it carries a level of ambiguity at this time, which I'll now discuss. IMHO, there are two basic and reasonable paths along which this agreement and technology could play out.

  • ATI sells Microsoft [MS] a completed front-end design, and MS outsources the synthesis et al. (perhaps to ATI in a future agreement).
  • ATI breaks the R500 down into functional constructs [e.g. comparable to a discrete "Shader Unit"] and sells these to Microsoft, who utilizes a physical-reuse type scheme based on fab, etc.

So, the first option still carries the lithography-based problems I previously discussed concerning formulating a part for an XXnm process, although it would yield a more competitive and efficient design than the baseline of a full in-house design, as nVidia did with the NV2A.

The second option is the one I currently think plausible. It's more scalable and flexible in terms of lithography changes, and it matches ATI's recent actions. The downside, I'd guess, is that by utilizing modular blocks and just physically reusing them, you'd get lower die-area efficiency - perhaps making back-end design harder and leaving unused die area and such. Perhaps Mfa or someone of his caliber could comment?

Thus, it's still my belief that nVidia would have been the better solution in terms of an end product - bias withheld. Although, as has often been the case, corporate politics is a powerful force.
 
On ATI fanaticism in Individuals (not Corporations).

I was skimming through the thread when I got back, and I can't help but see the blatant fanaticism here based solely on IHV grounds. Not that I'm one to comment, given what I've said in the past - but I wish someone would have called me on it.

What really got my attention is the blatant denial. I ask Joe what happened to the R400 and he comes back with, "Who needs the R400 at this juncture, Vince?" Super; answer my damn question this time, OK? Maybe you should ask that very question yourself, Joe. Regardless of the reason - for better or worse - it's not there, and for a product ATI talked up so much in the past, maybe you should ask why.

Or WHQL and these talking points like "first to 100M transistors!" or "did what nobody else could, based on what nVidia said!" OK, and welcome to the console forum. I don't care what nVidia said; your PC IHV bias and talking points don't cut it here. They're obviously not the first to 100M transistors, nor did they do anything remotely interesting insofar as lithography goes. But thanks, and enjoy the semantics-based backpedaling about what you "didn't say" - sort of how the Enron and WorldCom suits didn't lie, they just didn't tell you.

I can understand why people like ATI - but who likes nVidia? Let's be honest: they're the devil incarnate, masters of the corporate world like few others. They do what they need to, regardless of the cost, and they generally get it done.

What I can't understand is how ATI has become such a master of IP and technology in people's minds. I can't help but feel there is no true technological basis for this - it's more of a psychological bandwagon that people were just dying to find. I mean, let's examine their technological prowess and ascent to the greatness you speak of.

They've had moderate success with the R300 core. It's a solid product, no doubt. But it should be remembered that during the same period, nVidia stumbled in several ways. We know of the outward signs, such as a much later launch window, and we've heard rumors (more like fact around here) that the NV30 had issues and was scaled back quite a lot - PPP, anyone?

So ATI's absolute superiority is called into question. Sure, they capitalized on nVidia's known stumble, but historically, when nVidia hasn't stumbled, they've surpassed ATI's efforts. ATI's design methodology is also slower-paced; I've talked at length about these issues previously.

A similar situation that I keep falling back on, just based on the sheer similarities we know of at this point, is the Intel-AMD one. In early 2000, AMD's Athlon took the performance crown from the Pentium III, and enthusiasts (much like us here) were quick to jump on the AMD Athlon bandwagon. AMD clearly had what seemed like the better processor based on absolute numbers, and people touted the idea that this was the new AMD, etc. People jumped on the SledgeHammer bandwagon, claiming it would be the best thing since sliced bread and the glow-in-the-dark latex condom (sort of how the R400 was talked up by ATI around the R300's launch).

But what people (enthusiasts and analysts) failed to take note of were comments made by Darrell Boggs at MICRO-33. Those comments, about the scaling back of the original architecture, were significant because they address the manufacturability and economic viability of the design - not the engineering talent or potential of Intel. The questions then become: What if they hadn't cut back? What if this particular set of circumstances hadn't happened? What is the true level and ability of the company? And going forward, what can they do?

The first two questions we can't answer with any certainty; the third has historical precedent; and the fourth is an extrapolation of the third, which has low certainty. So think about this, and then ask and decide for yourself...

Is ATI a One Hit Wonder?
 
Fox5 said:
So assuming each chip costs $150, I could enslave the human race for only $1,728,000,000?! Sweet, all I need to do is take control of one major 1st world country, and my domination of the world is ensured.....

No, but you could put together a pretty decent football team :D
 
[image: tgsf15.jpg - the Cell slide from TGS 2002]

[image: IBM Blue Gene petaflop diagram - http://www.ibm.com/es/press/fotos/ciencia/i/petaflop.jpg]

Cell isn't Blue Gene. Kutaragi states 1 TFLOPS, Okamoto at GDC 2002 stated PS3 would have a power of 1 TFLOPS, and the patent found this year describes a design that calls for 1 TFLOPS.

But wait - we find one article that goes against everything said so far, so let's believe that!

Like I said, Cell isn't Blue Gene; the diagram given for Cell is that of Blue Gene. Two different things. This was a mix-up.
 
bbot said:
Believe what you want. That Nikkei article indicates it was published in 2001.

You're going to give Panajev a stroke when he gets home. Seriously, I told you where the graphic came from and the timeframe (late 1999, from what I can tell).

For those who like Links: http://images.google.com/images?svnum=10&hl=en&lr=&ie=UTF-8&oe=UTF-8&q=IBM+Blue+Gene&spell=1

I mean, the very fact that this was lifted from Project Blue Gene gives it zero plausibility in light of a patent filed by SCEI, and more specifically by Masakazu Suzuoki. Grasp. Straws. Must.
 
First, I think we should be more picky when choosing which analogy fits a certain situation.

Second, what if AMD had the same amount of resources as Intel? ;)

Oh btw that slide from SONY is missing something....The Universe!!! :p ;)
 
Vince said:
On ATI fanaticism.
...

Dude, you're thinking too much. Companies like Microsoft, Nvidia, and ATI don't make business decisions for fanboyish reasons. There aren't a lot of players in the market. MS and Nvidia have some issues between them. ATI was obviously well positioned to offer MS an alternative. At the moment MS made their decision, they felt ATI was the better choice. It's not like we're talking about orders of magnitude difference in either company's ability to deliver a good product--both ATI and Nvidia have a pretty good track record in the GPU biz.
 
TheMightyPuck said:
Dude, you're thinking too much. Companies like Microsoft, Nvidia, and ATI don't make business decisions for fanboyish reasons.

I was talking about some people in this forum -- believe me, I totally agree with you. ;)
 
Vince-

I sympathize with your thoughts about some of the ATI cheerleading in this forum. It's certainly true that, since launching the R300 a year ago, ATI has produced only a very competent but somewhat unexciting mainstream version and a high-end refresh that is hardly more than a respin. And I agree that their long-term roadmap looks ill-defined and underwhelming, and is perhaps even in disarray.

But I think you're dead wrong to attribute the success of the R300, and R3x0 in general, to nothing more than Nvidia's NV3x missteps. When the 9700 Pro hit a year ago, it dominated the market--in all the relevant measures, anyway, meaning AA and AF quality and performance in current games, and featureset (at reasonable performance) for future games--like no graphics card has since probably the Voodoo2. And unlike the situation back with the V2--which dominated an immature field in which the best competition was arguably still its predecessor, the Voodoo1--the 9700 Pro was smacking around a GF4 Ti which had been introduced a mere 6 months previously (!) and which had seemed a killer part at the time.

Moreover, despite the obvious flaws in the NV3x design, evaluating it objectively I don't think one can say it represents a lower level of execution on Nvidia's part than they've managed in the past. The most high-profile embarrassments--NV30's FXFlow cooler and the brazen cheating in popular benchmarks--almost certainly would not have occurred if R3x0 weren't around to establish such a commanding lead over what a normally aspirated, fair-playing NV3x would have been capable of. The 7-month delay from its target ship date, and 4-month delay from its official paper launch, is not great execution, but it's not really anomalous in the GPU industry, even for Nvidia. If releasing an underpowered NV3x in the fall had been a competitive option (i.e. if R300 didn't exist), it probably would have been logistically possible, just as Nvidia released the TNT1 they had promised would have the specs of the TNT2, and the GF1 that was probably intended to be more like a GF2.

And compared to the previous Nvidia product--the killer GF4--NV30 is a downright winner. It beats the GF4 easily in every benchmark, absolutely creams it in AA/AF performance, and adds a very impressive featureset. Remember when the GF1 launched losing many benches to the TNT2 Ultra? Or when the GF3 launched losing many benches to the GF2 Ultra?? None of that for the GFfx 5800. Again, the only problem is that the 9700 Pro was even faster, cheaper, had been available for 7 months, didn't sound like an F-18 during takeoff, had much better looking AA and better AA performance, had a similar advanced featureset (and arguably a more useful one), and totally creamed it when it came to actually making use of those new shaders. Oh, and didn't lie and cheat like a Nigerian spammer seeking to form a confidential business relationship.

Is ATI a One Hit Wonder?

Perhaps. (I doubt it, but that's another story.) But don't begrudge them the brilliance of their one hit.
 
Even if the picture is identical, the text on it isn't. So why would Kutaragi go and relabel the picture with the wrong numbers? To mislead the competition? Remember, this speech was given in September 2002. Even the article says Kutaragi said 1 cell = 1 GFLOPS.

I have to ask others on this forum: is it possible to make a chip that has 4 general processor cores and 32 "AltiVec"-like units on it, using a 65nm process, without problems like too large a die size, power, and heat?
I'm not saying it absolutely can't be done; I'm just skeptical.
 
Vince,

I agree with the P4/K7 comparison to a degree. One thing to note here is that the NV30 isn't as clean as it could be; it uses a lot of fixed-function hardware for legacy operations, so it's not as forward-thinking as the P4 was in its time. The NV30 SEEMS to have some bloating problems; the P4 suffered from anorexia.

As for moving forward, I agree that generalised computing resources with intelligent scheduling are a good thing (tm).

I personally don't think ATI is a one-hit wonder. They have a broad product lineup which goes beyond the average GPU, along with the experience that goes with it; they have a cross-licensing agreement with Intel which has gained them more than a bus, AFAIK; and they seem to be good at making the best of a given process without having to go bleeding edge.
 
Very massive indeed... The only "nigh next-generation fab process" chip I'm aware of is the one in the GScube (it was something like 450+ mm^2, or some huge number like that). If it's something like that, you can kiss a PS3 in 2005 goodbye, since it would lose way too much money.

The PS2 was sold at $380 in Japan at launch, at a loss of about $200 per unit IIRC, even though it was mass-produced and most of its other components were relatively cheap.

My guess would be that the PS3 could be sold at a similar price point with similar or bigger losses (since they've been at number one for two generations, they'll likely be more confident).

Is it possible to make a chip that has 4 general processor cores and 32 "Altivec"-like units on it, using a 65nm process, without problems like too large a die size, power, and heat?

It will likely be large, if the original PS2 is anything to go by.
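
For a very rough feel for the numbers, here's a back-of-envelope sketch. The per-unit areas are pure placeholders I made up for illustration (nobody outside STI knows the real figures); the point is just that the total lands in "large but buildable" territory rather than the absurd:

```python
# Back-of-envelope die-size estimate for the hypothetical chip bbot
# describes: 4 general cores + 32 AltiVec-like vector units at 65nm.
# Every area figure here is an invented placeholder for illustration.
core_mm2   = 10.0   # ASSUMED area of one general-purpose core at 65nm
vector_mm2 = 2.5    # ASSUMED area of one vector unit at 65nm
uncore_mm2 = 60.0   # ASSUMED buses, memory interfaces, eDRAM, pads, etc.

die_mm2 = 4 * core_mm2 + 32 * vector_mm2 + uncore_mm2
print(f"{die_mm2:.0f} mm^2")   # 180 mm^2 under these made-up numbers

# For scale: the original 250nm Emotion Engine was roughly 240 mm^2,
# so a die in this range would be big, but not unprecedented for Sony.
```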


PS
65nm begins ramping up early next year; by H2 2004 we'll have mass production ;)
"The fab will be built in Oita, Japan in conjunction with joint venture partner Sony, with mass production slated to get underway in the second half of next year, utilizing 65nm technology."
 
Vince said:
I think this is clear: a sound line of reasoning can be drawn between the NV3x's TCL front-end and the future of Unified Shaders, which will be composed of this very type of construct - especially as opposed to the R300's, which still caters to the discrete-unit ideology.

I think there's equally clear reasoning that says that if what you're saying is correct, then nVidia did not time the market properly, or they sacrificed performance for what is in reality a "half-step" to the "new paradigm"... which in the end makes it more of a jack-of-all-trades, master-of-none type of part.

Might this "half-step" benefit them in making the "full step"? Perhaps. That assumes nVidia made a half-step in the right direction, and not the wrong one.

I don't even see room for argument here; not that this will stop WHQL from posting some ATI-fanatic response about 24-bit color, a "pixel shader," and how ATI is feeding the starving in India.

In all seriousness, the only one that I see making melodramatic statements and blowing things out of proportion, Vince, is you.
 
Vince said:
OK, here is why I propose that nVidia is the superior IHV to deliver a completed project....

Thing is, Vince, nVidia was no different with respect to your arguments back in the Xbox 1 era (nVidia having a truer fabless-semiconductor model, vs. a licensing one).

And apparently, Microsoft found that those characteristics didn't pan out all that well, and didn't want to make the same mistake twice.

Thus, it's still my belief that nVidia would have been the better solution in terms of an end product - bias withheld. Although, as has often been the case, corporate politics is a powerful force.

Stages of nVidia fan coping:

1) Complete Denial: "Nah - this is just a bluff for Microsoft to get nVidia, the one they really want, to cave in to a better deal."

2) Technical Denial: "Surely corporate politicking... nVidia clearly has both better technology and a better business model. I see no good technical reason for MS to choose ATI; thus, it's down to corporate politics."

3) No more progress beyond 2 is ever made. ;)
 