TSMC to make chips for next Xbox

There has been a rumour that the Xbox2 will use three dual-core CPUs. This seemingly unlikely rumour (an odd number of CPUs, cost, and board complexity) has been accepted as "true".
And maybe it is, since no one who might be in the know has brushed it aside. No gainsayers, just silence. Indeed, John Carmack is on record speaking of next-generation consoles being multi-CPU, and just about everyone figures he was talking about the Xbox2. While the architecture of the PS3 is somewhat public due to the patent, the Xbox2 is known only by the deals struck between companies, unless you are under NDA. But the current buzz definitely indicates some kind of multiple-processor setup.

I think Xbox 2 will have 2, 4, or 8 cores in some sort of MCM configuration.

The 6 cores, from 3 dual-core CPUs, is indeed an odd number.
 
Entropy said:
bbot said:
Qroach:

Will Xbox2 really have multiple CPUs?
While I'm not Qroach, I'll chip in anyway.
There has been a rumour that the Xbox2 will use three dual-core CPUs. This seemingly unlikely rumour (an odd number of CPUs, cost, and board complexity) has been accepted as "true".
And maybe it is, since no one who might be in the know has brushed it aside. No gainsayers, just silence. Indeed, John Carmack is on record speaking of next-generation consoles being multi-CPU, and just about everyone figures he was talking about the Xbox2. While the architecture of the PS3 is somewhat public due to the patent, the Xbox2 is known only by the deals struck between companies, unless you are under NDA. But the current buzz definitely indicates some kind of multiple-processor setup.

IMHO it is ironic that architectural innovation in the consumer space comes in the severely cost-constrained console segment rather than in PCs. Goes to show just how stifling the Wintel hegemony truly is.

A while back, The Inquirer reported on Microsoft designing their own CPU. Rival Sun designed a Java-styled CPU with MAJC, and since then Microsoft has got the .NET thing rolling. I've always asked myself why Microsoft wouldn't want a CPU tailored for the .NET environment. I doubt Microsoft is going to design a CPU from the ground up, but I could see them going to IBM and asking them to take the POWER CPU design and customize it into something like MAJC, but optimized to run in a .NET world.
 
Never heard of the Sun-designed chip before. Does it run bytecode natively, or does it just have optimizations for Java?
 
gurgi said:
Never heard of the Sun-designed chip before. Does it run bytecode natively, or does it just have optimizations for Java?

It's designed for a multi-threaded environment.

http://www.arstechnica.com/cpu/4q99/majc/majc-1.html

IBM has plenty of VLIW experience, so if they wanted to implement something along those lines they could easily pull it off. I'm not sure if VLIW falls under IBM's POWER umbrella.

http://arstechnica.com/wankerdesk/3q02/playstation3.html

Hannibal even brings up that Cell and MAJC have a lot of similarities.
 
Brimstone said:
A while back, The Inquirer reported on Microsoft designing their own CPU. Rival Sun designed a Java-styled CPU with MAJC, and since then Microsoft has got the .NET thing rolling. I've always asked myself why Microsoft wouldn't want a CPU tailored for the .NET environment. I doubt Microsoft is going to design a CPU from the ground up, but I could see them going to IBM and asking them to take the POWER CPU design and customize it into something like MAJC, but optimized to run in a .NET world.

And of course quite proprietary, but probably licensable.
Still, I'm not sure I can fully understand why Microsoft would turn to IBM for their Xbox CPU(s).
In the Microsoft-Intel relationship, Microsoft is definitely in the stronger position. Windows boxen today are primarily tools of clerical work. Their grip on the market is mainly due to compatibility with various office and administrative software, making them difficult to replace in public and private administrations worldwide. It is hard to see Microsoft make any move that would threaten that position.

However, the architecture of these Wintel boxes isn't necessarily optimal in other market segments (or even for clerical use these days!). That much is also obvious. But up to this point, Microsoft has always offered software platforms that work nicely on Intel hardware. This was the case both with Windows CE for PDAs and with their new Smartphone platform.

So why IBM for the Xbox2? I find it hard to believe that IBM could really offer that much more in terms of the processing power/cost/power draw triad.
So why?
Was it only a political move to keep Intel in line, telling them in no uncertain terms who calls the shots in the market, without taking any risks in the traditional PC/light server spaces?
Or is it a play for higher stakes, to achieve a situation where Microsoft ultimately can control and rake in revenues from more tiers in the computing world, and from hardware and software both?

The threat of the latter is the justification for the first hypothesis, of course. But it isn't totally inconceivable. Microsoft has a virtual monopoly as it is - they can't increase revenues drastically without also drastically raising prices. But the pressure against the Microsoft tax is mounting, not least from the administrations that constitute their stronghold. Thus, to increase revenues, they have to look for new sources. Smartphones are an obvious market to pursue, but what other leads are they following?

If I'm still alive by then, I'll read Ballmer's memoirs with great interest.
 
Entropy said:
Brimstone said:
A while back, The Inquirer reported on Microsoft designing their own CPU. Rival Sun designed a Java-styled CPU with MAJC, and since then Microsoft has got the .NET thing rolling. I've always asked myself why Microsoft wouldn't want a CPU tailored for the .NET environment. I doubt Microsoft is going to design a CPU from the ground up, but I could see them going to IBM and asking them to take the POWER CPU design and customize it into something like MAJC, but optimized to run in a .NET world.

And of course quite proprietary, but probably licensable.
Still, I'm not sure I can fully understand why Microsoft would turn to IBM for their Xbox CPU(s).

Off the top of my head: IBM is trying to get its semiconductor fabrication technology more widespread. AMD, Sony, Toshiba, Samsung, Chartered, and Infineon all use it. Microsoft has a lot of choices for how its IBM-designed base chips get fabricated. Intel doesn't license out its semiconductor technology, as far as I know.

Microsoft will have interested parties like Samsung that would want to license the Xbox 2 hardware and create a set-top box. Or if another consumer electronics company wants in on the action, they'll have a few fabs to choose from to get hardware made.
 
I think Xbox 2 will have 2, 4, or 8 cores in some sort of MCM configuration.

The 6 cores, from 3 dual-core CPUs, is indeed an odd number.

There's no reason why the number of cores need be a power of two, or even even. From a software-architecture standpoint, there's no inherent benefit to any particular configuration.

Just because a lot of things in computing come in powers of two doesn't mean it's required.
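To make the point concrete, here's a minimal Python sketch (the language and the `render_tile` workload are purely illustrative assumptions of mine, not anything from the consoles being discussed) that spreads work across six worker threads - one per core of a hypothetical three-dual-core setup. Nothing in the work-distribution logic cares that six isn't a power of two; any positive integer works equally well.

```python
from concurrent.futures import ThreadPoolExecutor

# Six workers -- one per core of a hypothetical 3 x dual-core setup.
# The count being a non-power-of-two is irrelevant to the scheduler.
NUM_CORES = 6

def render_tile(tile_id):
    # Stand-in for per-core work (e.g. one tile of a frame).
    return tile_id * tile_id

with ThreadPoolExecutor(max_workers=NUM_CORES) as pool:
    # Twelve work items spread over six workers; map preserves order.
    results = list(pool.map(render_tile, range(12)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81, 100, 121]
```

The pool simply hands the next pending item to whichever worker is free, which is why core counts like 6 (or 3, or 12) pose no special problem for software.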
 
Joe DeFuria said:
Vince said:
We're quickly nearing the point at which moving to 65nm will only buy you added monetary savings over 90nm, as Joe stated.

Well, yeah, but if that's the case then the answer to this:

JVD - the question then is, how does that compare to the potential N5 and PS3, which were designed with the "large on 65nm, shift to 45nm" mentality?

...is that it should compare very well.

Hey Joe, sorry I didn't respond sooner. The Homosexual discussion has been occupying my time here, you know how it goes.

Um, regarding this: if you wouldn't take my quotes out of context, you'd see that the statements are reconcilable and actually self-supporting. Let me clarify: we're nearing the point in absolute time (e.g. 2H2004) by which you need to have your designs finalized; we all know this. So even if you design large for 90nm and then shift to 65nm, it will still not give parity with, say, STI or IBM/Nintendo if they design for 65nm large, then down to 45nm. There's much circumstantial evidence of this wrt STI. And this is what JVD was implying when he stated:


JVD said:
Will a 65nm chip make that much of a difference?

They could always just make an expensive VPU for those few months before the 65nm drop.
 
Some articles are saying that the manufacturing deal is strictly limited to the GPU, and not the CPU. I tend to agree, although I'd like to be absolutely sure about it. Can someone in the know enlighten us?
 
Um, regarding this: if you wouldn't take my quotes out of context, you'd see that the statements are reconcilable and actually self-supporting. Let me clarify: we're nearing the point in absolute time (e.g. 2H2004) by which you need to have your designs finalized; we all know this. So even if you design large for 90nm and then shift to 65nm, it will still not give parity with, say, STI or IBM/Nintendo if they design for 65nm large, then down to 45nm. There's much circumstantial evidence of this wrt STI. And this is what JVD was implying when he stated:

Well, a big hot 90nm chip vs. a big hot 65nm chip, both of which were meant for smaller processes. I think they will compare fine.

The shift from 90nm to 65nm is supposed to be small, mostly size-, power-, and cooling-related, which doesn't mean you will get a much faster chip at 65nm. It will just be smaller and run cooler.
 
jvd said:
Well, a big hot 90nm chip vs. a big hot 65nm chip, both of which were meant for smaller processes. I think they will compare fine.

The shift from 90nm to 65nm is supposed to be small, mostly size-, power-, and cooling-related, which doesn't mean you will get a much faster chip at 65nm. It will just be smaller and run cooler.

What? JVD, I do believe you are very mistaken. Compare fine? You need to stop listening to Deadmeat, my friend; we're talking high-performance, set-piece ICs built for graphics acceleration. Forget clockspeed for now; the performance will come from concurrency - end of story.

TI expects the shift from 90nm -> 65nm to double their transistor density. Double... that's huge when you're talking about this kind of architecture, especially when you're talking about a device with upwards of 50-100M gates. Theoretically, that's like adding two NV40s' worth of room to the design - and I'm talking the 200M+ transistor version.

And, correct me if I'm wrong, but the step down to 45nm is just as large. These are full process steps, akin to going from 180nm (NV15 - 25M transistors) -> 130nm (NV40 - 175M).
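As a sanity check on those figures, here's a back-of-the-envelope Python sketch (idealized scaling only - my own simplification; real density gains depend on design rules and yield): if linear feature sizes shrink by the node ratio, transistor density grows roughly with the square of that ratio.

```python
# Idealized density gain from a process-node shrink: linear features
# scale by (old/new), so area per transistor shrinks by the square
# of that ratio. Real-world gains vary with design rules and yield.
def density_gain(old_nm, new_nm):
    return (old_nm / new_nm) ** 2

for old, new in [(180, 130), (90, 65), (65, 45)]:
    print(f"{old}nm -> {new}nm: ~{density_gain(old, new):.2f}x the density")
```

The 90nm -> 65nm step works out to about 1.92x, which lines up with TI's "double" figure, and 65nm -> 45nm comes out slightly larger still (about 2.09x), supporting the claim that the next step is just as big.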
 
TI expects the shift from 90nm -> 65nm to double their transistor density. Double... that's huge when you're talking about this kind of architecture, especially when you're talking about a device with upwards of 50-100M gates. Theoretically, that's like adding two NV40s' worth of room to the design - and I'm talking the 200M+ transistor version.

They expect?

They don't know?

Intel expected 90nm Prescott to clock much higher than the 130nm Northwood. But not only did it run hotter, it also clocked lower, and they needed to do revisions before it was even released to reach 3.2GHz, which is what the Northwood clocked at.

Nvidia went from 150nm to 130nm, and they released one part at their targeted clock speed, while ATI, using a less advanced process, created a faster part.

Yes, micron process is important and can help, but it's not the be-all and end-all of a design.

In fact, it can hinder as much as it can help.
 
Yes, micron process is important and can help, but it's not the be-all and end-all of a design.

In fact, it can hinder as much as it can help.

Just take a look at the FX 5800; it was definitely hindered by the process shift.
 
jvd said:
They expect?

They don't know?

Intel expected 90nm Prescott to clock much higher than the 130nm Northwood. But not only did it run hotter, it also clocked lower, and they needed to do revisions before it was even released to reach 3.2GHz, which is what the Northwood clocked at.

Nvidia went from 150nm to 130nm, and they released one part at their targeted clock speed, while ATI, using a less advanced process, created a faster part.

Yes, micron process is important and can help, but it's not the be-all and end-all of a design.

In fact, it can hinder as much as it can help.

Did you not believe a single thing I wrote? Clockspeed is basically irrelevant; forget about it for now and worry about it later, since there are so many variables controlling it. Besides, the case I'm making (and have made all the way back to before the discussion with Dave and Joe) is that the future will be in computational resources en masse, which allow for shaders or, depending on the implementation, just about anything. This is a parallel operation and will be bounded by the amount of logic you can squeeze into a given area. Logic density is everything, in that it defines the absolute upper bound on performance - that's it.

Do you realize that, just to illuminate a what-if: if I design a 'big' 90nm IC with, say, 500M transistors, looking forward to cost normalization at 65nm, my competitor who's designing at a base 65nm can design a billion-transistor IC looking towards 45nm. And what's most shocking to me is that this is for the GPU fabrication -- the area I expected MS to gain the most relative ground versus the PS3 in.

Yet you continue to talk about clockspeed and the same tired arguments about nVidia and ATI. Clockspeed is going to be influenced not only by implementation details and circuit design, but by process technology (like low-k or SOI) and so many other variables that you can't make the argument you are.

And yes, "they expect", as they're not in commercial production at 65nm, nor is anyone. Do the numbers show it? Yes. Does the sample 65nm logic confirm it? Most definitely. Are they in production on a certified process? Not yet. They are only being prudent... hint.
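The 500M-versus-a-billion what-if above can be put in rough numbers. A hedged sketch - the density-doubling figure is the TI estimate quoted earlier in the thread, not a measured value, and the transistor counts are the hypothetical ones from the post:

```python
# If density roughly doubles per full node step (TI's estimate for
# 90nm -> 65nm), the die area that holds 500M transistors at 90nm
# holds close to a billion at 65nm. Idealized scaling; real yields
# and design rules will differ.
DENSITY_GAIN = (90 / 65) ** 2           # ~1.92x per node step

budget_90nm = 500e6                     # hypothetical 'big' 90nm design
budget_65nm = budget_90nm * DENSITY_GAIN
print(f"Same die area at 65nm: ~{budget_65nm / 1e6:.0f}M transistors")
```

So under that assumption, designing one node ahead buys nearly a 2x transistor budget in the same silicon area, which is the whole substance of the parity argument.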
 
Vince said:
Hey Joe, sorry I didn't respond sooner. The Homosexual discussion has been occupying my time here, you know how it goes.

No probs... ;)

Um, regarding this, if you wouldn't take my quotes out of context,

I'm not taking anything out of context. I'm asking for a clarification.

Let me clarify...

That's better. ;)

...we're nearing the point in absolute time (e.g. 2H2004) by which you need to have your designs finalized; we all know this. So even if you design large for 90nm and then shift to 65nm, it will still not give parity with, say, STI or IBM/Nintendo if they design for 65nm large, then down to 45nm.

I agree. It won't be "parity". The question is how much of a difference?

However, what I thought you were also implying was that going down to 90nm and below doesn't seem to offer as much "benefit" with every die shrink as it has in the past (for example, current leakage being a problem hindering designs, etc.).

In other words, designing for 65nm may not offer all that much more absolute benefit than designing for 90nm. IMO the jury's out on that one, but it seems to me that the "next" advanced process isn't bringing with it all the immediate and readily realized benefits that improved processes have in the past.
 
It seems R500 for PCs might be coming a bit sooner than recently thought (which was fall/late 2005); it now seems R500 might arrive as early as spring 2005, with VS/PS 3.0 and not 4.0.

http://www.beyond3d.com/forum/viewtopic.php?t=11318


If it is true that R500 will be shader 3.0 instead of 4.0, Xbox 2 is likely to have some variant of VS/PS 3.0 instead of VS/PS 4.0.

That would fall in line with ATI's comments that Xbox 2 and N5 would both have DX9-level shaders (3.0 or 3.0+).
 
Do you realize that, just to illuminate a what-if: if I design a 'big' 90nm IC with, say, 500M transistors, looking forward to cost normalization at 65nm, my competitor who's designing at a base 65nm can design a billion-transistor IC looking towards 45nm. And what's most shocking to me is that this is for the GPU fabrication -- the area I expected MS to gain the most relative ground versus the PS3 in.
Well, if you want to get into it: all we know is that the Xbox 2 GPU will be done at TSMC, and they should have 90nm or perhaps an early 65nm available.

We know nothing about the CPU for the Xbox or what process it will be on, which makes this whole argument moot.

Yet you continue to talk about clockspeed and the same tired arguments about nVidia and ATI. Clockspeed is going to be influenced not only by implementation details and circuit design, but by process technology (like low-k or SOI) and so many other variables that you can't make the argument you are.

Same tired arguments?

Nvidia used a smaller micron process.

They had horrible yields and needed a higher clock speed to put a similarly performing part on the market compared to ATI.

Not only that, but the Nvidia part had a bigger transistor budget than ATI's.

So, in conclusion: a smaller micron process for Nvidia, which let them use more transistors and in theory should have given a cooler-running chip able to clock higher, accomplished none of those things.

Which means that at 65nm the Cell chip may not clock high enough to reach their goals, may run extremely hot, and may not offer the performance increase they are expecting over other chips.

And yes, "they expect", as they're not in commercial production at 65nm, nor is anyone. Do the numbers show it? Yes. Does the sample 65nm logic confirm it? Most definitely. Are they in production on a certified process? Not yet. They are only being prudent... hint.

Yes, they expect it to perform like that. But do they expect it to perform like that right off the bat?

Well, I doubt it.
 