Why is x86 being dropped by Microsoft for new Xbox?

Discussion in 'Console Technology' started by duncan36, Aug 9, 2005.

  1. NaMo4184

    Newcomer

    Joined:
    Jul 15, 2005
    Messages:
    12
    Likes Received:
    0
    I don't think Nintendo, Microsoft or Sony have any emotional attachment to x86, so they chose what they felt would be the best platform for a game console. PPC has tons and tons of registers, and it has AltiVec, which is excellent = great floating-point performance. Not to mention owning their own IP and being able to choose fabs.
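
    To illustrate what "AltiVec = great floating point" means in practice, here is a hedged sketch (plain Python standing in for SIMD, purely illustrative) of the kind of 4-wide single-precision operation AltiVec executes as one instruction (vmaddfp, vector multiply-add), where a scalar CPU would need four separate multiply-adds:

    ```python
    # Illustrative model of AltiVec's vmaddfp (vector multiply-add float):
    # one instruction computes a*b + c across all four lanes at once.
    # Plain Python loops here only model the semantics, not the speed.

    def vmaddfp(a, b, c):
        """4-wide fused multiply-add: a[i]*b[i] + c[i] for each lane."""
        return [a[i] * b[i] + c[i] for i in range(4)]

    print(vmaddfp([1.0, 2.0, 3.0, 4.0],
                  [2.0, 2.0, 2.0, 2.0],
                  [0.5, 0.5, 0.5, 0.5]))  # [2.5, 4.5, 6.5, 8.5]
    ```

    With 32 such 128-bit vector registers, inner loops of game math (physics, skinning, audio mixing) can keep many intermediate values in registers instead of spilling to memory.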
     
  2. Saem

    Veteran

    Joined:
    Feb 7, 2002
    Messages:
    1,532
    Likes Received:
    6
    Transistors spent on a complicated decoder are dollars down the drain.
     
  3. Java_man

    Newcomer

    Joined:
    May 27, 2005
    Messages:
    46
    Likes Received:
    0
    Location:
    Maroc
    I am not surprised that MS has given up the x86 architecture for the XBox 360.
    They would never have been able to own the IP and to reduce their cost over the console's lifetime.
    See what happened with the first XBox.
    However, I was surprised to see that they have gone with this custom IBM chip.
    I thought they would have chosen an even more custom chip that would have been able to run the .NET CLR in hardware and which would have other hardware optimisations for Windows Vista. (I also thought that they would use a DirectX 10 graphics card.)
    Then they would run a version of Windows Vista on the XBox 2, taking advantage of its gaming-friendly nature and its DRM features.
    With a Vista-based XBox 2, their plan of cross-platform game (and entertainment?) development would have been much easier.
    Imagine games developed for Windows Vista PCs running with the highest settings on XBox 2, and XBox 2 games running on Vista PCs.
    However, I guess that such a solution would have cost them a lot of money.
     
    #23 Java_man, Aug 10, 2005
    Last edited by a moderator: Aug 10, 2005
  4. Shogmaster

    Regular

    Joined:
    Feb 24, 2002
    Messages:
    367
    Likes Received:
    4
    Simply put, unlike XBox 1, where they only had about a year to design and put together the machine, Microsoft had enough time to design a proper console CPU and GPU this time around.

    If they'd had the luxury of time with the first XBox, I don't think a Mobile Celeron 733 MHz and a slightly modified GF3 Ti500 (which later basically turned into the GF4 Ti4200) would have ended up inside.
     
  5. one

    one Unruly Member
    Veteran

    Joined:
    Jul 26, 2004
    Messages:
    4,838
    Likes Received:
    167
    Location:
    Minato-ku, Tokyo
    IIRC, "off-the-shelf parts are cheaper" was the slogan at that time when PC was the uber dot-com platform and they themselves just believed it.
     
  6. duncan36

    Newcomer

    Joined:
    Aug 14, 2002
    Messages:
    173
    Likes Received:
    0
    OK, I agree, but when dealing with fanbois one must have hard information. The Xbox had a faster processor and probably as good a GPU as the other consoles, if not better, so what exactly about its PC roots made it less than ideal as a console?
     
  7. Carl B

    Carl B Friends call me xbd
    Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    Well, though faster, the XBox's CPU was actually 'weaker' than the Emotion Engine, FYI. What really set the XBox apart was its relative ease of programming and a 'real' GPU, whereas the Graphics Synthesizer didn't have any hardware T&L capabilities, for example.
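
    For readers unfamiliar with the term: "hardware T&L" means the GPU itself transforms (and lights) every vertex, so the CPU doesn't have to. A hedged sketch of the per-vertex transform work involved (illustrative only, not code from either console's SDK): applying a 4x4 matrix to each vertex, which on the PS2 fell to the Emotion Engine's vector units rather than the Graphics Synthesizer.

    ```python
    # Illustrative per-vertex transform stage: what hardware T&L performs
    # on the GPU. Without it (as on the Graphics Synthesizer), the CPU or
    # vector units must do this for every vertex, every frame.

    def transform_vertex(m, v):
        """Multiply a 4x4 row-major matrix by a 4-component vertex."""
        return [sum(m[row][k] * v[k] for k in range(4)) for row in range(4)]

    # A translation by (2, 3, 4), stored in the matrix's last column.
    translate = [[1.0, 0.0, 0.0, 2.0],
                 [0.0, 1.0, 0.0, 3.0],
                 [0.0, 0.0, 1.0, 4.0],
                 [0.0, 0.0, 0.0, 1.0]]

    print(transform_vertex(translate, [1.0, 1.0, 1.0, 1.0]))  # [3.0, 4.0, 5.0, 1.0]
    ```

    Multiply that inner loop by tens of thousands of vertices per frame at 60 fps and it's clear why offloading it to the GPU frees up so much CPU time.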
     
  8. Java_man

    Newcomer

    Joined:
    May 27, 2005
    Messages:
    46
    Likes Received:
    0
    Location:
    Maroc
    I am not so sure that Microsoft would have used a very different GPU even if they had much more time to design the XBox.
     
    #28 Java_man, Aug 10, 2005
    Last edited by a moderator: Aug 11, 2005
  9. ERP

    ERP
    Veteran

    Joined:
    Feb 11, 2002
    Messages:
    3,669
    Likes Received:
    49
    Location:
    Redmond, WA

    OK, now I have to hear your definition of weaker...
    And please don't start pointing out the raw FLOPS advantage, because it's only relevant if you can actually use it in practice.

    In EVERY piece of significant game code I have ever seen, the "mobile Celeron" in the Xbox completely destroys the EE in performance. It's not even close.
     
  10. Carl B

    Carl B Friends call me xbd
    Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    ERP, calm down there, I think you're being a little overly aggressive. ;) Indeed, I was referring to the FLOPS. But if what you're saying is that the XBox CPU demolishes it in performance, and it's 'not even close,' well, I'm not going to fight you on it because I myself don't know.

    All I was attempting to convey was that the GPU in the XBox made the larger difference between the two (CPU vs GPU). If the XBox had had a Graphics Synthesizer rather than the NVidia chip, would the XBox still have the better looking games?
     
    #30 Carl B, Aug 10, 2005
    Last edited by a moderator: Aug 10, 2005
  11. Sonic

    Sonic Senior Member
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,926
    Likes Received:
    130
    Location:
    San Francisco, CA
    It might not have better looking games because it probably wouldn't have been able to push as much geometry. But that's just one thing you're negating here. The P3 in the Xbox might be much weaker in the FLOPS department, but when it comes to regular old processing it is a much better CPU to work with. It's a lot easier to get the most out of hardware that is easy to work with from the beginning than it is for a machine that doesn't have the best hardware for the job.

    The bottom line here is that Microsoft chose the parts that it thought were best at the time. They did a rush job, but they didn't skimp on the hardware. They just decided to get off-the-shelf parts and not do a completely custom job for the Xbox. That doesn't make the machine all that bad; it just means MS messed themselves up when it comes to reducing manufacturing costs.

    They have resolved these mistakes in the Xbox 360, and that has something to do with not going with x86. They have a lot more freedom with the architecture and will be able to reduce the price much faster and more easily than it was for the original Xbox.
     
  12. Carl B

    Carl B Friends call me xbd
    Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63

    Hey now, don't get me the wrong way here - if you've read this entire thread, then you know that the angle I've been pushing with regard to the move away from x86 has been primarily a cost-related one. I've never said the XBox was 'bad hardware'; on the contrary, I readily admit it's the stronger system.

    My getting into this whole CPU/GPU/PS2/XBox aspect was simply in reply to this comment:

    I was just trying to shift the majority of the 'credit' for Xbox's strength onto the GPU rather than the CPU; rightfully so as well.
     
  13. Sonic

    Sonic Senior Member
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,926
    Likes Received:
    130
    Location:
    San Francisco, CA
    Perhaps you are right, but not in all situations. This topic deserves its own thread, not to be polluted in here.
     
  14. Carl B

    Carl B Friends call me xbd
    Legend

    Joined:
    Feb 20, 2005
    Messages:
    6,266
    Likes Received:
    63
    Agreed.
     
  15. Megadrive1988

    Veteran

    Joined:
    May 30, 2002
    Messages:
    4,723
    Likes Received:
    242
    slightly off-topic:

    going from Xbox to Xbox2 (360) we're seeing a complete shift in CPU architecture.

    Xbox CPU: Intel x86 - CISC (mostly) - single core - single threaded - OOO (?)

    Xenon CPU: IBM PowerPC - RISC - multi-core - multi-threaded - not OOO

    for the generation beyond Xbox2/360, there will probably be less of a shift in architecture, just much greater performance (a larger number of beefier cores, more threads per core, more cache, some improvements)
     
  16. mesyn191

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    104
    Likes Received:
    1
    Location:
    Lake Forest, CA
    Dunno about that; by the time Xbox3/PS4 come to market, very large die sizes will be easily possible thanks to the industry-wide push for dual/multi-core, so CPUs with complex OoOE would be pretty cheap in terms of die space...
     
  17. ShootMyMonkey

    Veteran

    Joined:
    Mar 21, 2005
    Messages:
    1,177
    Likes Received:
    72
    They may not be that big by then, either. By that time, smaller process nodes will have matured. There was all the talk way back when about 65nm production for both XeCPU and CELL, but then, there was also talk further back about CELL having some ridiculous number of cores. If you can get the gaming industry to be in a position where the number of threads they would be comfortable with is far greater than what the next-gen consoles can provide, you'll have the impetus for more cores in the following generation of consoles, but I don't see the complexity of cores jumping that quickly. The main reason is that once you've driven the point home about how effective multi-core can be, you're essentially moving down the TLP line. At that point, which is easier? Putting more complex OOOE in each of your cores, or allowing more ways of SMT so that TLP can cover for the lack of ILP? Putting in a beefy speculative prefetcher, or putting in more cache? Making the branch predictor more extensive, or allowing predication and relying on good compilers?

    I have to question how far you can really go down either the ILP route or the TLP route in the general case. As much as you can easily push it in the server arena, that's a small and vertical market. How much multithreading and/or multitasking will the average PC user really do? Sure, it'd be nice for the serious power users, but I doubt it will do much to "heighten the Internet experience." There's a distinct lack of software to really take advantage of high TLP and most people really don't multitask enough for it to make a difference either. As for more ILP... how high can you really push it? Okay, maybe x86 is a poor choice, but even otherwise, each new addition to squeeze out a little higher IPC has to work with those preceding it. It's the equivalent of leaning boards against boards precariously balanced against each other to make a slovenly scaffolding that will enable you to climb higher than a single ramp. Okay, it works, but you're damn lucky not to break your neck and it takes a hell of a lot more of them to gain an extra inch once you've reached a certain point.
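
    A hedged sketch of the TLP side of that trade-off (illustrative only; plain Python threads standing in for hardware threads): instead of one core extracting ILP from a serial loop, a data-parallel job is split across threads so each core or SMT context makes independent progress.

    ```python
    # Illustrative TLP sketch: partition a data-parallel job across
    # threads. Each worker owns a disjoint slice and its own result slot,
    # so no locking is needed until the final combine.

    import threading

    def sum_slice(data, lo, hi, out, idx):
        # Each worker reduces its own slice independently.
        out[idx] = sum(data[lo:hi])

    def parallel_sum(data, n_threads=4):
        chunk = (len(data) + n_threads - 1) // n_threads
        results = [0] * n_threads
        workers = []
        for i in range(n_threads):
            lo, hi = i * chunk, min((i + 1) * chunk, len(data))
            t = threading.Thread(target=sum_slice, args=(data, lo, hi, results, i))
            workers.append(t)
            t.start()
        for t in workers:
            t.join()
        return sum(results)

    print(parallel_sum(list(range(1000))))  # 499500
    ```

    The structural point, not the Python specifics, is what matters: scaling this just means more slices and more threads, whereas squeezing more ILP out of the serial loop means ever more exotic core hardware.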
     
  18. mesyn191

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    104
    Likes Received:
    1
    Location:
    Lake Forest, CA
    Hard to say. If you go by what Andy Glew wants, there is practically no limit to the resources you can dedicate to OoOE and still get good performance increases, even for a single core. His proposed K10 core was rumored to be a real monster single core, very similar to the K7 or K8 in basic specs but with multiple forms of SMT, massive caches, and some sort of pseudo predictive-branching mechanism a la Itanium....

    The real answer probably is that no one really knows for sure or that it depends heavily on what you're going to be running on the CPU...
     
  19. ShootMyMonkey

    Veteran

    Joined:
    Mar 21, 2005
    Messages:
    1,177
    Likes Received:
    72
    Since you mentioned it, from his CV...

    http://www.geocities.com/andrew_f_glew/cv-glew.html
     
  20. mesyn191

    Newcomer

    Joined:
    Jul 23, 2005
    Messages:
    104
    Likes Received:
    1
    Location:
    Lake Forest, CA
    Yup, that's the one. Supposedly he left AMD because they considered such a design to be unfeasible. He seems to be working for Intel now, part of the Nehalem group...


    "August 30, 2004-date: Computer Architect, Intel, Hillsboro, Oregon.

    *
    Current: "Architecture Futures" team member, Nehalem Architecture Team. Intel/DEG/DAP/MAP/NAT/AF.
    *
    Legal/Patent work: 6 month's quarantine, Sept 2004-Feb 2005."
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.