XBox 360 scalability

Rockster

Now that the console has launched and Microsoft shifts its focus to the future, do you think the Xenos (GPU) and Xenon (CPU) core architectures will be carried forward to future platforms? As they continue to tweak and enhance the tools, compilers, profilers, etc., and developers begin to build multi-threaded code bases around these chips, and since MS owns the IP, it stands to reason they could scale the technology forward in 5 years, bump the clocks a bit, and deliver a 6+ core CPU and a 96+ ALU GPU fairly easily. Then providing backwards compatibility wouldn't be a challenge, and they would avoid launch growing pains as developers could leverage existing experience, middleware, and engine technologies. Or do you feel that the pace of technology will require yet another radical shift? IMO the GPU is more likely to change than the CPU, and bandwidth will continue to be the weak link.

It's obvious to me that Sony feels CELL will continue forward, but I think the PlayStation platform will be quite different in its next incarnation. For whatever reason, I get the sense that Microsoft will lap Sony in the console space at some point in the future, probably because of pace and time to market. They have a history of using release schedules to overpower the competition, much the same way Nvidia ousted 3dfx. Many times it simply comes down to execution.
 
You can bet that the next-gen 360 will have a GPU with unified shaders and a multicore CPU. I would think there's a slight possibility of a PPU in next-gen consoles too. I agree with you: bandwidth is still going to be the weak link next gen.

If you think about it, nothing in the 360, PS3, or Revolution (as we know them now) is radically different from previous gens. They borrow technology from the PC space, and since we can see where PCs are headed in the near future (3-5 years), there shouldn't be a huge change unless we see a real breakthrough in technology (such as the introduction of the standalone GPU once was).
 
Rockster said:
Now that the console has launched and Microsoft shifts its focus to the future, do you think the Xenos (GPU) and Xenon (CPU) core architectures will be carried forward to future platforms? As they continue to tweak and enhance the tools, compilers, profilers, etc., and developers begin to build multi-threaded code bases around these chips, and since MS owns the IP, it stands to reason they could scale the technology forward in 5 years, bump the clocks a bit, and deliver a 6+ core CPU and a 96+ ALU GPU fairly easily. Then providing backwards compatibility wouldn't be a challenge, and they would avoid launch growing pains as developers could leverage existing experience, middleware, and engine technologies. Or do you feel that the pace of technology will require yet another radical shift? IMO the GPU is more likely to change than the CPU, and bandwidth will continue to be the weak link.

It's obvious to me that Sony feels CELL will continue forward, but I think the PlayStation platform will be quite different in its next incarnation. For whatever reason, I get the sense that Microsoft will lap Sony in the console space at some point in the future, probably because of pace and time to market. They have a history of using release schedules to overpower the competition, much the same way Nvidia ousted 3dfx. Many times it simply comes down to execution.


A 6-core CPU and a 96-ALU GPU with increased clockspeeds would certainly be interesting in the 2007 timeframe for various non-consumer projects, but in no way would that meet the requirements for a next-gen Xbox in ~5 years, which will need at least a ~10x leap in graphics performance, and probably an even bigger leap in CPU, to be relevant. The Xbox1 to Xbox 360 CPU floating-point jump was pretty damn big: 1.4~3.0 GFLOPs to 115 GFLOPs in 4 years.
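To put rough numbers on that jump (a back-of-envelope sketch; the 12 FLOPs/cycle/core breakdown behind the peak rating is an assumption here, and these are quoted peaks, not measured throughput):

```python
# Sanity-check the quoted peak-FLOPS figures.
cores = 3              # Xenon has 3 cores
clock_ghz = 3.2
flops_per_cycle = 12   # commonly quoted per-core peak (assumed)

xenon_peak = cores * clock_ghz * flops_per_cycle   # in GFLOPS
print(xenon_peak)

# Size of the Xbox1 -> 360 jump, using the 1.4~3.0 GFLOPs range above:
print(xenon_peak / 3.0, xenon_peak / 1.4)
```

That works out to roughly a 38x to 82x floating-point jump in one generation, which is why a mere clock bump looks inadequate for the next one.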
 
PPU?

dukmahsik said:
I would think there's a slight possibility of a PPU in next-gen consoles too.
And why would a fixed-function processor, which is nothing but an array of vector units used only to accelerate a process that's hard to quantify on a per-game basis, be included in a closed platform like a console?

A "PPU" is strictly useless in a console. It would be a pure waste of transistors to add one.
If the console engineers want more FP power, they'd either upgrade their CPU solution or, at worst, include a co-processor that would be an array of vector units, except fully programmable rather than a fixed-function IC.

As one can see, the more 3D technology advances, be it hardware or software, the more flexible and programmable it becomes.
For instance, DX10 no longer supports fixed-function 3D: it uses a unified shader structure for its shaders, the newly added geometry shader is programmable, etc.

Honestly, it's hardly reasonable to expect the Xbox 3/PS4 to go back to the fixed-function era in this day and age; that's already history.

Sorry, that was my PPU rant of the week, let's continue the discussion. :D
 
You could scale this system pretty well. The next generation would have enough eDRAM for a complete frame at 1080p (or more?), and, like you said, just add ALUs, upgrade the shaders (buffers etc.) and add cores.
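As a rough idea of what "enough eDRAM for a complete 1080p frame" would mean (hypothetical back-of-envelope figures; for comparison, Xenos ships with 10 MB):

```python
# eDRAM needed to hold a full 1080p framebuffer on-chip (rough estimate).
width, height = 1920, 1080
bytes_per_pixel = 4 + 4        # 32-bit colour + 32-bit depth/stencil
msaa = 4                       # 4x multisampling multiplies the footprint

mb = width * height * bytes_per_pixel * msaa / (1024 * 1024)
print(round(mb, 1))  # ~63.3 MB with 4xAA, vs the 10 MB in Xenos
```

So "a complete frame" at 1080p with AA is a sixfold eDRAM jump on its own, before any other scaling.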

BTW: Intel is aiming for 8-16 cores by 2010.

And no, I don't think another radical shift is needed.
 
If MS follows through with previous plans and comments, the next Xbox will be a hardware standard for PCs to be made to, i.e. there won't be another Xbox console. I reckon that's pretty much a certainty if the XB360 doesn't do very well.
 
Shifty Geezer said:
If MS follows through with previous plans and comments, the next Xbox will be a hardware standard for PCs to be made to, i.e. there won't be another Xbox console. I reckon that's pretty much a certainty if the XB360 doesn't do very well.

I think this was the original plan. MS doesn't want to be in the hardware business; they are trying to control the API for game development. By dictating a standard for consoles and PCs, they hope to make DirectX the de facto standard. However, the 360 architecture is quite different from a regular PC, so it will be difficult for them to achieve that goal. The lack of hype for XNA, which was supposed to be a first step toward a unified standard for PC and console games, makes me think they are abandoning that concept.
 
pcostabel said:
They are trying to control the API for game development. By dictating a standard for consoles and PCs, they hope to make DirectX the de facto standard.
Problem is, the Xboxes don't use DirectX... They may use DX derivatives, but the 360 in particular seems quite far removed from the heavily abstracted (and code-bloated, overhead-laden) PC API. As you noted yourself, it's not really going to be possible to control game development across the board with two such differing products and dev environments...
 
pcostabel said:
The lack of hype for XNA, which was supposed to be a first step toward a unified standard for PC and console games, makes me think they are abandoning that concept.

I don't think you know MS very well...
 
The next Xbox will probably have a CPU with 20 to 40 small cores (not necessarily PPEs), at least if you believe what Epic Games has said about their next-gen engine.

If the Xbox 360's Xenon CPU has a theoretical peak performance of 115 GFLOPS, the next Xbox CPU will have to reach at least 1 TFLOP, if not a couple of TFLOPs. I don't think MS would settle for anything less than 1 TFLOP. A higher-clocked Xenon with 6 cores would most likely not achieve this unless the cores are considerably upgraded, so either a totally new design or tens of PPEs will be needed.

Of course, I don't have the slightest clue what I'm talking about, since I'm not a chip architect :p
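A quick sketch of that point, i.e. why a clock/core bump alone falls short of 1 TFLOP if per-core throughput stays Xenon-like (the 12 FLOPs/cycle/core figure and the configurations are assumptions for illustration):

```python
# Peak GFLOPS for a few hypothetical scaled-up Xenon configurations.
FLOPS_PER_CYCLE = 12    # assumed unchanged per-core peak
TARGET_GFLOPS = 1000.0  # the 1 TFLOP bar mentioned above

for cores, clock_ghz in [(3, 3.2), (6, 4.0), (12, 4.0), (24, 4.0)]:
    peak = cores * clock_ghz * FLOPS_PER_CYCLE
    verdict = "reaches 1 TFLOP" if peak >= TARGET_GFLOPS else "falls short"
    print(f"{cores:2d} cores @ {clock_ghz} GHz -> {peak:6.1f} GFLOPS, {verdict}")
```

At Xenon-like per-core rates, even 12 cores at 4 GHz only gets to about half a TFLOP, which supports the "tens of cores or much fatter cores" conclusion.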
 
When I said 6 cores / 96 ALUs, it was to demonstrate the essence of the philosophy, not to predict actual specs (hence the '+' sign on the end). I have no idea what manufacturing advances will allow: 9 cores? Multiple multicore chips on the same package totalling 12? Whatever. I was just wondering if it's realistic to expect this basic architecture to be carried forward to the next gen. To my knowledge, it has never happened in the past.

I agree with the comments that MS would like to get out of the hardware business and have the industry adopt some extensible, open-standard hardware platform. But J and company seem to feel that's at least a couple more generations away.
 
Megadrive1988 said:
The next Xbox will probably have a CPU with 20 to 40 small cores (not necessarily PPEs), at least if you believe what Epic Games has said about their next-gen engine.

If the Xbox 360's Xenon CPU has a theoretical peak performance of 115 GFLOPS, the next Xbox CPU will have to reach at least 1 TFLOP, if not a couple of TFLOPs. I don't think MS would settle for anything less than 1 TFLOP. A higher-clocked Xenon with 6 cores would most likely not achieve this unless the cores are considerably upgraded, so either a totally new design or tens of PPEs will be needed.

Of course, I don't have the slightest clue what I'm talking about, since I'm not a chip architect :p

It's pretty clear that we'll see large-scale parallelism in the next 5 or 10 years, and a lot of "cores".

What's more interesting is predicting the memory topology. I wouldn't want to hazard a guess as to how you would arrange 20 or 30 cores relative to the memory pools in 5 years' time. I can see how Sony sees it being done, but I don't believe that cores with very limited local storage and DMA access to a larger pool are the way forward for general computing.
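The local-store model being discussed (small per-core memory with explicit DMA to a big shared pool) typically pushes a double-buffered streaming pattern onto the programmer. A toy sketch of the idea, with plain lists standing in for local store and main memory (no real DMA engine, just the control flow):

```python
def stream_process(main_memory, chunk, work):
    """Process main_memory chunk by chunk through two small 'local
    store' buffers: while one buffer is being computed on, the next
    chunk would (on real hardware) be DMA'd into the other."""
    out = []
    buffers = [main_memory[0:chunk], None]   # prefetch the first chunk
    i, cur = 0, 0
    n = len(main_memory)
    while i < n:
        nxt = 1 - cur
        if i + chunk < n:
            # kick off the next "transfer" before touching the current buffer
            buffers[nxt] = main_memory[i + chunk : i + 2 * chunk]
        out.extend(work(x) for x in buffers[cur])  # compute on local data only
        i += chunk
        cur = nxt
    return out

print(stream_process(list(range(10)), 4, lambda x: x * x))
# [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

The transfer/compute overlap is what makes the model fast, but it's also exactly the bookkeeping that general-purpose code usually gets for free from a cache hierarchy.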
 
I like the topic - I've thought about this several times myself in the past.

I agree with comments that MS would like to move away from hardware if possible, but with the 360's current architecture, I just can't envision how that shift would take place. After all, everything else aside, what would you require in terms of hardware for an x86 PC that can properly emulate a 360? (No need to answer that.) Though maybe a 360 on an add-in card would be pretty cool down the line...

I've also thought about the XBox 540 concept before as well (a 360 1.5 in ~2008), released a little into this gen to claim the performance crown outright and to make some tweaks to non-gaming functionality. This would obviously use the same architecture as the 360, be fully B/C, and utilize the same APIs. Devs would simply be able to 'upgrade' development to the newer system and there ya go. Microsoft certainly has the means to bring this kind of pressure on Sony, but at the same time the retail channel would probably hate it; only so much shelf space, after all. I mean, if development procedures remain the same, there's no inherent reason to stick to the five-year console cycle on a scalable architecture. Mind you, I don't think they'll do this either: it would prevent the 360 hardware from readily becoming profitable in and of itself were they to release a newer system with double everything after a die shrink or two.

Bear with me: the above was just a pure mental exercise.

OK, but to get down to some reality: Cell is obviously meant to scale into the future. The XeCPU? I don't see that happening for the next five-year effort. If Microsoft sees Sony as their primary competitor, more Cells will trump more XeCPUs by ever larger degrees as the gens go on, and as devs familiarize themselves with Cell and the SPEs, the advantage the Sony systems should have in the compute area should only increase. Rather, I think MS will go with a new architecture (though possibly incorporating all elements of the XeCPU for B/C reasons) to try and squeeze more FP power out of a chip, mm^2 for mm^2. As the console 'wars' progress, this could be Cell's Achilles' heel: scalable, yes, but who's to say that in the next five years a superior architecture (again taking die area into account) won't present itself and be made available to Sony's competitors? I think Cell is great and has a lot of potential as a long-lived architecture, but there always looms the risk of the next best thing rendering it obsolete, scalability notwithstanding.

But once again, as for the XeCPU, I just don't see it continuing on in its present state, scaled or not, for MS' next-gen effort.
 
ERP said:
I can see how Sony sees it being done, but I don't believe that cores with very limited local storage and DMA access to a larger pool are the way forward for general computing.

I'm not quite convinced Sony even thought this was the optimal way, or a way they'd have gone if it weren't for outside limitations placed on the chip. It seems the choice of DMA/SRAM pools was more of a space/transistor-count concession; if they had been able to target 65nm and still only require a ~200 GFLOP chip, I doubt they'd have gone with it (it likely would have been 300+M transistors, though, if not more). Not sure, though.

It will be interesting to see what happens in the future regardless... compilers/languages are sure going to have to do some magic if programmers are to deal with 20+ core chips on something like a console (where a single program is expected to suck them up).
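Short of compiler magic, the likely near-term answer is explicit task queues: the runtime spreads independent tasks over however many cores exist, and the hard part becomes carving a frame into enough independent tasks. A minimal sketch in Python terms (the toy jobs are hypothetical; a real engine would use native threads or a job system):

```python
from concurrent.futures import ThreadPoolExecutor

def run_tasks(tasks, n_workers):
    """Dispatch a list of independent no-argument tasks across a worker
    pool; results come back in submission order."""
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return list(pool.map(lambda task: task(), tasks))

# Eight toy "jobs" (think animation, physics islands, audio batches...)
jobs = [lambda i=i: i * i for i in range(8)]
print(run_tasks(jobs, 4))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

Note the dispatch itself is trivial; scaling to 20+ cores is limited by how many truly independent jobs the program can generate per frame.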
 
xbdestroya said:
OK, but to get down to some reality: Cell is obviously meant to scale into the future. The XeCPU? I don't see that happening for the next five-year effort. If Microsoft sees Sony as their primary competitor, more Cells will trump more XeCPUs by ever larger degrees as the gens go on, and as devs familiarize themselves with Cell and the SPEs, the advantage the Sony systems should have in the compute area should only increase. Rather, I think MS will go with a new architecture (though possibly incorporating all elements of the XeCPU for B/C reasons) to try and squeeze more FP power out of a chip, mm^2 for mm^2.
If I were a console programmer, I'd be happier using extra transistors for a renewed emphasis on general-purpose performance (bigger caches, OOOE, better memory access, etc.) rather than just tacking on more cores. Don't mistake the implementation for the architecture; there's nothing in their design that would preclude an ISA-compatible processor that's much more powerful.

I also think there's going to be a limit to how many threads you can realistically utilize. If you had 14 SPEs instead of 7, what exactly would that mean for a game engine? Would it be better to have 7 "super" SPEs with 1MB of LS and/or better performance characteristics, even if the "peak" performance wouldn't be as high? If you could have 8 general-purpose cores instead of 1 + 7 SPEs, would that be better?

If I had to guess, though, I think the importance of increasing processor performance is probably going to decrease as we reach the point of diminishing returns, and the GPU will bulk up to add things like global illumination, better hardware tessellation, high-order antialiasing, etc.
 
I would disagree with most of the sentiments expressed in this thread. While the trend in hardware will certainly be towards more parallelism, the favored architecture will most likely be symmetric cores like the XeCPU's. The reason is pretty simple: most general-purpose CPUs (e.g. the Intel, AMD, and IBM Power lines) are moving in that direction, so console CPUs most likely will as well. For starters, there's going to be a lot more research in this area: how you build the microarchitecture, memory systems to keep the cores properly fed, efficient use of die space and transistors, etc. On top of that, most academic research on parallelizing algorithms targets symmetric cores, and will continue to do so in the future.

CELL is a very interesting idea, but it is an anomaly and will not take off outside of specialized areas. It would probably not be getting much traction if not for the fact that Sony is pushing it in the PS3. The market can tolerate it now because it is in the middle of the transition away from instruction-level parallelism towards thread-level parallelism, and everyone is in the midst of adapting (hence there's no entrenched mindset yet). But in the long run, symmetric cores will be the winner. Remember also that PC gaming, while shrinking in size, has massive mindshare among developers. Most console devs started there and many still work there, and thus symmetric cores will be the path of least resistance and the smallest learning curve when transitioning or porting to a console.

I also believe that, despite what I said above, we will continue to see less emphasis on hardware and more on development tools and middleware. This has been true since almost the dawn of computing, and the trend will continue. We put a lot of emphasis on hardware here, but the reality is that development costs and time to market are more important to studios than ever before. The platform (or platforms) with the magic development tools to ease these burdens and make the whole process smoother will win massive mindshare. The console space is insulated from, but not immune to, the hardware commoditization trend that the PC and enterprise spaces have endured. It is taking longer here, but you can clearly see it in the rise of cross-platform middleware providers and engines.
 
chachi said:
If I were a console programmer, I'd be happier using extra transistors for a renewed emphasis on general-purpose performance (bigger caches, OOOE, better memory access, etc.) rather than just tacking on more cores.
General-purpose performance was never a strong point of consoles; might have something to do with the fact that they aren't general-purpose computers at all ;)

I do see room for more focus on GPP in both consoles, but I'm thinking more of a targeted approach than something all-encompassing, like having just one core with OOOE and high single-threaded performance (the cores could still be functionally symmetric regardless).

ERP said:
I can see how Sony sees it being done, but I don't believe that cores with very limited local storage and DMA access to a larger pool are the way forward for general computing.
Well, it's still better than having large arrays of general cores that merely benchmark incredibly ;)
 
Sethamin, I would have to disagree that Cell is an anomaly. Firstly, it's an old idea from way back when ramping up clock speeds wasn't an option (a bit like now :D); secondly, ideas like it feature heavily in Intel's 2015 plan.

Cell, or a Cell-like system, offers what people ideally need: a core set of "general"-purpose processors for your main code, the stuff that cannot be threaded safely, plus your SPE-type add-ons that give massive but limited-functionality power. Cell processors running in groups (4, 8, 16...) would work well; the SPEs pay off if you can fill them (say, rendering?), while the PPEs add more general power at the same time. It seems like a nicely adaptable solution to the problem: need more cores, fit more 1+8 Cells; want more general power, fit 2+4 Cells instead. The problem becomes the interconnection and communication, but the Cell approach allows for expansion of the general power while providing more specialised hardware as well.

When scaling a system you can often find more small tasks that would suit SPEs, or at least utilise more of them. Yes, there probably is a finite limit, but as long as you keep expanding the tasks required of a computer, the number of tasks will keep scaling.
 