Why is x86 being dropped by Microsoft for the new Xbox?

I don't think Nintendo, Microsoft, or Sony have any emotional attachment to x86, so they chose what they felt would be the best platform for a game console. PPC has tons and tons of registers, and it has AltiVec, which gives excellent floating-point performance. Not to mention owning their own IP and being able to choose fabs.
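
As a rough illustration of why AltiVec matters for floating-point work, here's a minimal sketch, assuming a PPC toolchain with AltiVec enabled (e.g. gcc -maltivec); the function names are mine, not from any SDK:

Code:
#include <altivec.h>

/* Scalar version: one multiply-add per iteration. */
void madd_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i++)
        out[i] = a[i] * b[i] + out[i];
}

/* AltiVec version: vec_madd does four multiply-adds per iteration
   using the 32 dedicated 128-bit vector registers. Assumes n is a
   multiple of 4 and all pointers are 16-byte aligned. */
void madd_altivec(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        vector float va = vec_ld(0, &a[i]);
        vector float vb = vec_ld(0, &b[i]);
        vector float vo = vec_ld(0, &out[i]);
        vec_st(vec_madd(va, vb, vo), 0, &out[i]);
    }
}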
 
I am not surprised that MS has given up on the x86 architecture for the XBox 360.
They would never have been able to own the IP and reduce their cost over the console's lifetime.
See what happened with the first XBox.
However, I was surprised to see that they went with this custom IBM chip.
I thought they would have chosen an even more customized chip, one able to run the .NET CLR in hardware and with other hardware optimizations for Windows Vista. (I also thought they would use a DirectX 10 graphics card.)
Then they would run a version of Windows Vista on the XBox 2, taking advantage of its gaming-friendly nature and its DRM features.
With a Vista-based XBox 2, their plan of cross-platform game (and entertainment?) development would have been much easier.
Imagine games developed for Windows Vista PCs running at the highest settings on XBox 2, and XBox 2 games running on Vista PCs.
However, I guess such a solution would have cost them a lot of money.
 
duncan36 said:
I realize that x86 is some 20+ years old now, and it's remarkable how add-ons to this system have made it perform quite well in things like 3D graphics and sound. However, am I right in assuming that inefficiencies in x86 make it less attractive for a new gaming console?

Basically, I am interested in knowing the ins and outs of this decision and why x86 is not as good for a gaming console as a custom system. Thanks to everyone for your input.

Simply put, unlike the XBox 1, where they only had about a year to design and put together the machine, Microsoft had enough time to design a proper console CPU and GPU this time around.

If they had had the luxury of time with the first XBox, I don't think a 733 MHz Mobile Celeron and a slightly modified GF3 Ti500 (which later basically turned into the GF4 Ti4200) would have ended up inside.
 
Shogmaster said:
Simply put, unlike the XBox 1, where they only had about a year to design and put together the machine, Microsoft had enough time to design a proper console CPU and GPU this time around.

If they had had the luxury of time with the first XBox, I don't think a 733 MHz Mobile Celeron and a slightly modified GF3 Ti500 (which later basically turned into the GF4 Ti4200) would have ended up inside.
IIRC, "off-the-shelf parts are cheaper" was the slogan at the time, back when the PC was the uber dot-com platform, and they themselves just believed it.
 
Shogmaster said:
Simply put, unlike the XBox 1, where they only had about a year to design and put together the machine, Microsoft had enough time to design a proper console CPU and GPU this time around.

If they had had the luxury of time with the first XBox, I don't think a 733 MHz Mobile Celeron and a slightly modified GF3 Ti500 (which later basically turned into the GF4 Ti4200) would have ended up inside.

Ok, I agree, but when dealing with fanbois one must have hard information. The Xbox had a faster processor and probably as good a GPU as, if not better than, the other consoles, so what exactly about its PC roots caused it to be less than ideal for a console?
 
duncan36 said:
Ok, I agree, but when dealing with fanbois one must have hard information. The Xbox had a faster processor and probably as good a GPU as, if not better than, the other consoles, so what exactly about its PC roots caused it to be less than ideal for a console?

Well, though faster, the XBox's CPU was actually 'weaker' than the Emotion Engine, FYI. What really set the XBox apart was the relative ease of programming for it and a 'real' GPU; the Graphics Synthesizer, by contrast, didn't have any hardware T&L capability, for example.
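
For anyone unfamiliar with the term, 'T&L' is per-vertex transform and lighting. Here's a minimal sketch of the transform half, the kind of work PS2 code had to run on the Emotion Engine's vector units while the XBox's GPU did it in fixed-function hardware; the types here are hypothetical helpers, not SDK types:

Code:
/* Illustrative only: the "T" in T&L, a model-view-projection
   transform applied to every vertex, every frame. */
typedef struct { float x, y, z, w; } vec4;
typedef struct { float m[4][4]; } mat4;

static vec4 transform(const mat4 *mvp, vec4 v)
{
    vec4 r;
    r.x = mvp->m[0][0]*v.x + mvp->m[0][1]*v.y + mvp->m[0][2]*v.z + mvp->m[0][3]*v.w;
    r.y = mvp->m[1][0]*v.x + mvp->m[1][1]*v.y + mvp->m[1][2]*v.z + mvp->m[1][3]*v.w;
    r.z = mvp->m[2][0]*v.x + mvp->m[2][1]*v.y + mvp->m[2][2]*v.z + mvp->m[2][3]*v.w;
    r.w = mvp->m[3][0]*v.x + mvp->m[3][1]*v.y + mvp->m[3][2]*v.z + mvp->m[3][3]*v.w;
    return r;
}

A GPU with hardware T&L runs the equivalent of this loop essentially for free as far as the CPU is concerned, which is a big part of why the NVidia chip made such a difference.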
 
Shogmaster said:
If they had luxury of time with the first XBox, I don't think a Mobile Celeron 733Mhz and a slightly modified GF3Ti500 (which later basically turned into GF4Ti4200) would have ended up inside.

I am not so sure that Microsoft would have used a very different GPU even if they had had much more time to design the XBox.
 
Last edited by a moderator:
xbdestroya said:
Well, though faster, the XBox's CPU was actually 'weaker' than the Emotion Engine, FYI. What really set the XBox apart was the relative ease of programming for it and a 'real' GPU; the Graphics Synthesizer, by contrast, didn't have any hardware T&L capability, for example.


OK, now I have to hear your definition of weaker...
And please don't start pointing out the raw FLOPS advantage, because it's only relevant if you can actually use it in practice.

In EVERY piece of significant game code I have ever seen, the "mobile celeron" in the Xbox completely destroys the EE in performance. It's not even close.
 
ERP said:
OK, now I have to hear your definition of weaker...
And please don't start pointing out the raw FLOPS advantage, because it's only relevant if you can actually use it in practice.

In EVERY piece of significant game code I have ever seen, the "mobile celeron" in the Xbox completely destroys the EE in performance. It's not even close.

ERP, calm down there, I think you're being a little overly aggressive. ;) Indeed, I was referring to the FLOPS. But if what you're saying is that the XBox CPU demolishes it in performance and it's 'not even close,' well, I'm not going to fight you on it, because I myself don't know.

All I was attempting to convey was that of the two (CPU vs. GPU), the GPU in the XBox made the larger difference. If the XBox had had a Graphics Synthesizer rather than the NVidia chip, would the XBox still have had the better-looking games?
 
It might not have had better-looking games, because it probably wouldn't have been able to push as much geometry. But that's just one thing you're overlooking here. The P3 in the Xbox might be much weaker in the FLOPS department, but when it comes to regular old processing it is a much better CPU to work with. It's a lot easier to get the most out of hardware that is easy to work with from the beginning than it is for a machine that doesn't have the best hardware for the job.

The bottom line here is that Microsoft chose the parts that it thought were best at the time. They did a rush job, but they didn't skimp on the hardware. They just decided to use off-the-shelf parts and not do a completely custom job for the Xbox. That doesn't make the machine all that bad; it just means MS hurt themselves when it comes to reducing manufacturing costs.

They have resolved these mistakes in the Xbox 360, and that has something to do with not going with x86. They have a lot more freedom with the architecture and will be able to reduce the price much faster and more easily than with the original Xbox.
 
Sonic said:
It might not have had better-looking games, because it probably wouldn't have been able to push as much geometry. But that's just one thing you're overlooking here. The P3 in the Xbox might be much weaker in the FLOPS department, but when it comes to regular old processing it is a much better CPU to work with. It's a lot easier to get the most out of hardware that is easy to work with from the beginning than it is for a machine that doesn't have the best hardware for the job.

The bottom line here is that Microsoft chose the parts that it thought were best at the time. They did a rush job, but they didn't skimp on the hardware. They just decided to use off-the-shelf parts and not do a completely custom job for the Xbox. That doesn't make the machine all that bad; it just means MS hurt themselves when it comes to reducing manufacturing costs.

They have resolved these mistakes in the Xbox 360, and that has something to do with not going with x86. They have a lot more freedom with the architecture and will be able to reduce the price much faster and more easily than with the original Xbox.


Hey now, don't get me the wrong way here: if you've read this entire thread, then you know that the angle I've been pushing with regard to the move away from x86 has been primarily a cost-related one. I've never said the XBox was 'bad hardware'; on the contrary, I readily admit it's the stronger system.

My getting into this whole CPU/GPU/PS2/XBox aspect was simply in reply to this comment:

Ok, I agree, but when dealing with fanbois one must have hard information. The Xbox had a faster processor and probably as good a GPU as, if not better than, the other consoles, so what exactly about its PC roots caused it to be less than ideal for a console?

I was just trying to shift the majority of the 'credit' for the Xbox's strength onto the GPU rather than the CPU; rightfully so, as well.
 
Perhaps you are right, but not in all situations. That topic deserves its own thread rather than polluting this one.
 
slightly off-topic:

Going from Xbox to Xbox2 (360), we're seeing a complete shift in CPU architecture.

Xbox CPU: Intel x86 - CISC (mostly) - single core - single threaded - out-of-order (OoOE)

Xenon CPU: IBM PowerPC - RISC - multi-core - multi-threaded - in-order (no OoOE)

For the generation beyond Xbox2/360 there will probably be less of a shift in architecture, just much greater performance (a larger number of beefier cores, more threads per core, more cache, some improvements).
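
To make the practical consequence of that shift concrete, here's a minimal sketch of the explicit work-splitting an in-order, multi-core part demands; POSIX threads are used purely for illustration, since console SDKs have their own threading APIs:

Code:
#include <pthread.h>

#define NUM_THREADS 6   /* e.g. 3 cores x 2 hardware threads */

struct slice { float *data; int begin, end; };

/* Worker: updates one contiguous slice of the data. */
static void *update_slice(void *arg)
{
    struct slice *s = arg;
    for (int i = s->begin; i < s->end; i++)
        s->data[i] *= 1.01f;   /* stand-in for per-entity update work */
    return NULL;
}

/* On a single-core, single-threaded CPU this would just run serially;
   on a multi-core part it only pays off if the work is explicitly
   divided across hardware threads like this. */
void update_all(float *data, int n)
{
    pthread_t tid[NUM_THREADS];
    struct slice sl[NUM_THREADS];
    for (int t = 0; t < NUM_THREADS; t++) {
        sl[t].data  = data;
        sl[t].begin = n * t / NUM_THREADS;
        sl[t].end   = n * (t + 1) / NUM_THREADS;
        pthread_create(&tid[t], NULL, update_slice, &sl[t]);
    }
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(tid[t], NULL);
}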
 
Megadrive1988 said:
slightly off-topic:

Going from Xbox to Xbox2 (360), we're seeing a complete shift in CPU architecture.

Xbox CPU: Intel x86 - CISC (mostly) - single core - single threaded - out-of-order (OoOE)

Xenon CPU: IBM PowerPC - RISC - multi-core - multi-threaded - in-order (no OoOE)

For the generation beyond Xbox2/360 there will probably be less of a shift in architecture, just much greater performance (a larger number of beefier cores, more threads per core, more cache, some improvements).

Dunno about that; by the time Xbox3/PS4 come to market, very large die sizes will be easily possible thanks to the industry-wide push for dual/multi-core, so CPUs with complex OoOE would be pretty cheap in terms of die space...
 
Dunno about that; by the time Xbox3/PS4 come to market, very large die sizes will be easily possible thanks to the industry-wide push for dual/multi-core, so CPUs with complex OoOE would be pretty cheap in terms of die space...
They may not be that big by then, either. By that time, smaller process nodes will have matured. There was all the talk way back when about 65nm production for both XeCPU and CELL, but then, there was also talk further back about CELL having some ridiculous number of cores. If you can get the gaming industry to a position where the number of threads developers are comfortable with is far greater than what the next-gen consoles can provide, you'll have the impetus for more cores in the following generation of consoles, but I don't see the complexity of cores jumping that quickly. The main reason is that once you've driven the point home about how effective multi-core can be, you're essentially moving down the TLP line. At that point, which is easier? Putting more complex OoOE in each of your cores, or allowing more ways of SMT so that TLP can cover for the lack of ILP? Putting in a beefy speculative prefetcher, or adding more cache? Making the branch predictor more extensive, or allowing predication and relying on good compilers?

I have to question how far you can really go down either the ILP route or the TLP route in the general case. As much as you can easily push it in the server arena, that's a small and vertical market. How much multithreading and/or multitasking will the average PC user really do? Sure, it'd be nice for the serious power users, but I doubt it will do much to "heighten the Internet experience." There's a distinct lack of software that really takes advantage of high TLP, and most people don't multitask enough for it to make a difference either. As for more ILP... how high can you really push it? Okay, maybe x86 is a poor choice, but even otherwise, each new addition to squeeze out a little higher IPC has to work with those preceding it. It's the equivalent of leaning boards against boards, precariously balanced against each other, to make a slovenly scaffolding that lets you climb higher than a single ramp. Okay, it works, but you're damn lucky not to break your neck, and it takes a hell of a lot more boards to gain an extra inch once you've reached a certain point.
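
On the predication point above, a minimal sketch of the trade-off (plain C; what the compiler actually emits depends on the target and optimization flags):

Code:
/* Branchy form: every mispredicted branch costs a pipeline flush,
   which is what a more extensive branch predictor tries to avoid. */
int clamp_branch(int x, int lo, int hi)
{
    if (x < lo) return lo;
    if (x > hi) return hi;
    return x;
}

/* Branch-free form: a good compiler can turn these selects into
   conditional moves, or predicated instructions on an architecture
   that has them, trading a couple of extra ALU ops for zero
   mispredictions. */
int clamp_select(int x, int lo, int hi)
{
    int t = x < lo ? lo : x;
    return t > hi ? hi : t;
}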
 
ShootMyMonkey said:
They may not be that big by then, either. By that time, smaller process nodes will have matured. There was all the talk way back when about 65nm production for both XeCPU and CELL, but then, there was also talk further back about CELL having some ridiculous number of cores. If you can get the gaming industry to a position where the number of threads developers are comfortable with is far greater than what the next-gen consoles can provide, you'll have the impetus for more cores in the following generation of consoles, but I don't see the complexity of cores jumping that quickly. The main reason is that once you've driven the point home about how effective multi-core can be, you're essentially moving down the TLP line. At that point, which is easier? Putting more complex OoOE in each of your cores, or allowing more ways of SMT so that TLP can cover for the lack of ILP? Putting in a beefy speculative prefetcher, or adding more cache? Making the branch predictor more extensive, or allowing predication and relying on good compilers?

Hard to say. If you go by what Andy Glew wants, there is practically no limit to the resources you can dedicate to OoOE and still get good performance increases, even for a single core; his proposed K10 core was rumored to be a real monster, a single core very similar to the K7 or K8 in basic specs but with multiple forms of SMT, massive caches, and some sort of pseudo-predictive branching mechanism à la Itanium...

The real answer probably is that no one really knows for sure, or that it depends heavily on what you're going to be running on the CPU...
 
Since you mentioned it, from his CV...

Andy Glew said:
Proposed advanced microarchitecture: multithreaded, multicluster, with multilevel everything: multilevel schedulers, multilevel instruction window, multilevel store buffers, multilevel register file, and multilevel branch predictors; supporting Implicit Multithreading / Speculative Multithreading/Skipahead Multithreading (IMT/SpMT/SkMT) and lightweight user level forking.

http://www.geocities.com/andrew_f_glew/cv-glew.html
 
Yup, that's the one; supposedly he left AMD because they considered such a design unfeasible. He seems to be working for Intel now, as part of the Nehalem group...


"August 30, 2004-date: Computer Architect, Intel, Hillsboro, Oregon.

*
Current: “Architecture Futuresâ€￾ team member, Nehalem Architecture Team. Intel/DEG/DAP/MAP/NAT/AF.
*
Legal/Patent work: 6 month's quarantine, Sept 2004-Feb 2005."
 