Interview: Nick Baker on the XBox 360 Architecture

pipo

Nick is a hardware engineer and Director who leads the team that designed the XBox 360 hardware architecture. It's an impressive piece of machinery; in fact, Nick recently won the Outstanding Technical Leadership award for the effort. Here, Nick takes us through the design history and some of the implementation details of the XBox 360. What were some of the design trade-offs? How different is the XBox 360 that you can buy today from what Nick and his team were initially thinking?

Some interesting and funny stuff in there. Check it out: http://channel9.msdn.com/ShowPost.aspx?PostID=310732
 
Wow, they were planning a single x86 core with 8-16 mini array cores for floating point. I think this confirms that a PPC CPU was never MS's first choice, considering their history with x86. The interviewer is a kiss-ass, keeps fluffing this guy instead of asking real questions.

Herb Sutter from MS explains the multi-core issues better, and he's a software guy.

http://www.gtw.ca/publications/concurrency-ddj.htm

Wow, the guy asking the questions is getting painful to listen to: "as hardware evolves, you will have to update your OS..." Dude, it's a console. :???:

I'll watch the rest later if I can stand to.
 
Wow, they were planning a single x86 core with 8-16 mini array cores for floating point. I think this confirms that a PPC CPU was never MS's first choice, considering their history with x86. The interviewer is a kiss-ass, keeps fluffing this guy instead of asking real questions.

Herb Sutter from MS explains the multi-core issues better, and he's a software guy.

http://www.gtw.ca/publications/concurrency-ddj.htm

Wow, the guy asking the questions is getting painful to listen to: "as hardware evolves, you will have to update your OS..." Dude, it's a console. :???:

I'll watch the rest later if I can stand to.


It's late and I'm tired, so maybe I'm overlooking something, but what's the issue with the bolded statement?

Also, software developers have to be very familiar with concurrency, especially now that multiple core processors are commonplace. So if you were wondering why a software guy was able to give a good explanation, that's the reason. If you weren't wondering, and I simply misunderstood your tone, then ignore this.
 
Wow, they were planning a single x86 core with 8-16 mini array cores for floating point. I think this verifies that a PPC CPU was never a first choice for MS, considering their history with x86. The interviewer is a kiss ass, keeps fluffing this guy instead of asking real questions.

I didn't think the guy was a "kissass". Nothing wrong with being nice, and quite frankly, Nick and his team did a pretty amazing job imo; nothing wrong with saying it. But yeah, I will agree he seemed to have no clue sometimes.

He explained the x86 thing in the interview; maybe you didn't watch that far. Basically, the top two priorities were shipping in 2005 and reasonable cost. As he said, those drove a lot of the decisions, and one big x86 core became too expensive compared to multiple smaller cores.

Well, that's the company line anyway. I still have to wonder if Intel allegedly refusing to give up IP rights for manufacturing had anything to do with it, as has been stated in the past.

Also interesting how early they start these things. He said they were working on the 360 in Jan 2002. I'm betting some exploratory work on Xbox 3 is going on right now (though Peter Moore has tossed out a possible 2011-12 time frame for the next Xbox).
 
It's late and I'm tired, so maybe I'm overlooking something, but what's the issue with the bolded statement?

The guy being interviewed answered the bolded part right after the question. The hardware doesn't evolve, period.

I watched about half of the interview, it was painful considering the wasted opportunity.
 
Yeah, the point was actually that hardware does evolve (witness the Elite), and that's why he asked it.
You need new drivers or something for the HDMI output? The hard drive changed to 120 GB; you need some changes at the OS level for that.
 
Yeah, the point was actually that hardware does evolve (witness the Elite), and that's why he asked it.
You need new drivers or something for the HDMI output? The hard drive changed to 120 GB; you need some changes at the OS level for that.

I'm pretty sure you could handle these (very small) evolution steps completely in software - from the beginning (see PS3).
 
The interview made me think about future consoles. Nick has a point when he questions the need for 80 core processors (or even 10 cores for that matter).

Do you think there will be a good software solution / programming paradigm in time for the next generation of multi-core consoles to tackle this problem? Would there be a good alternative?
 
I'm pretty sure you could handle these (very small) evolution steps completely in software - from the beginning (see PS3).

Yeah, I know.. but the interviewer said he's not from the console world (?), so it sounds logical for him to ask it.
 
Do you think there will be a good software solution / programming paradigm in time for the next generation of multi-core consoles to tackle this problem?
Yes. Software engineers haven't really created the tools because everyone's been running monolithic cores. Now that multicore is out there, the tools will be developed. At these early stages with PS3 we already have SPURS, which manages jobs across cores automatically. AFAIK if you were to run the same code incorporating SPURS on a Cell with 20 SPEs, the workload would get spread around. That was also the original premise of the distributed processing Sony had in mind for Cell; they had these graphs showing tasks spread around connected Cells.

By PS4, devs will be familiar with managing 7 cores, asymmetric cores, and will have had lots of ideas about how to manage them and distribute computing. Even if the way Xenon is managed doesn't scale so well (with more cores there'll have to be better memory management to feed them), the tools MS develops will (fingers crossed!) be forward-looking, so devs aren't as lost as they were going into this gen.
Would there be a good alternative?
No. That's why they're doing it! The choice will be how many cores of what complexity. There won't be an option for monolithic cores. Unless they develop a whole new chip technology, which is improbable for the immediate future, especially for consoles.
 
Yes. Software engineers haven't really created the tools because everyone's been running monolithic cores. Now that multicore is out there, the tools will be developed. At these early stages with PS3 we already have SPURS, which manages jobs across cores automatically. AFAIK if you were to run the same code incorporating SPURS on a Cell with 20 SPEs, the workload would get spread around. That was also the original premise of the distributed processing Sony had in mind for Cell; they had these graphs showing tasks spread around connected Cells.

SPURS doesn't solve the problem. It's just a job queue. The developer is still tasked with breaking the problem down into independent subtasks that can then be scheduled across the array of SPEs.
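A job queue of that sort can be sketched in a few lines. This is a generic illustration with plain Python threads standing in for SPEs; none of it is the actual SPURS API. The point is that the queue spreads whatever jobs exist across however many workers exist, and the hard part (writing independent jobs) is left entirely to the developer:

```python
import threading
import queue

def run_jobs(jobs, num_workers):
    """Run independent, self-contained jobs on a pool of workers."""
    q = queue.Queue()
    for job in jobs:
        q.put(job)

    results = []
    lock = threading.Lock()

    def worker():
        # Each worker (standing in for an SPE) pulls subtasks off the
        # shared queue until it is drained.
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return
            r = job()  # jobs must not depend on each other
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(num_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

# The same job list runs unchanged on 4 workers or 20 -- the queue
# spreads the work; decomposition into independent jobs is on you.
print(sorted(run_jobs([lambda i=i: i * i for i in range(8)], 4)))
# → [0, 1, 4, 9, 16, 25, 36, 49]
```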

What Nick is talking about is the lack of programming languages to directly express parallel problems and the tools to automatically implement parallel algorithms.

*All* mainstream programming languages are sequential. Another kettle of fish entirely is debugging.

Cheers
 
What Nick is talking about is the lack of programming languages to directly express parallel problems and the tools to automatically implement parallel algorithms.

*All* mainstream programming languages are sequential. Another kettle of fish entirely is debugging.

But surely this is just a matter of time? It seems to me that with experience, developers and the industry will learn more ways of effectively breaking down problems into parallelised solutions, and as more time is spent doing so, new methods of enhancing and optimising these techniques will grow and evolve..

Debugging tools, again, are a matter of developers waiting on companies like MS and SN Systems to provide IDEs (or building their own tools) that support more comprehensive parallel-processing analysis, since such tools will become more and more relevant in all areas of software engineering, not just games development..

Also, maybe I'm wrong, but isn't the power of distributed computing based on the model that you can build a software system which spreads the workload across all available processing cores at run-time, thereby removing any potential issues with scalability? I.e. there shouldn't be any further problems introduced going from a quad-core architecture to an 80-core one, other than processor utilisation, which depends on how many tasks need to be scheduled and how far the engineer can break jobs down into parallelisable tasks in the first place..?
 
SPURS doesn't solve the problem. It's just a job queue.
Yes, but it's a step in the right direction. This time last year, devs didn't even have that to help with their multicore programming. There will be new tools, new aids to understanding, and new techniques.

As for new languages, I don't think that's necessary. Existing multicore/parallel processing can already extract excellent parallelism from sequential languages. Even though you may want to be running things concurrently, the things you are running are themselves a sequence of instructions, running on sequential cores. As long as developers can partition the workload into multiple sequential programs, multicore should work just fine. And simpler tools than parallel languages should be enough to get that working. The creation of advanced languages should enable ideas to be expressed in code that the multicore architectures can distribute and process themselves. That may be nice and easy for the devs if they can get their heads around designing parallel programs, but it isn't essential to using multicore architectures.
 
Also maybe i'm wrong, but isn't the power of distributed computing based on the model that you can build a software system which spreads the workload across all/any available processing cores at run-time and therefore removing at potential issues with scalability (i.e. there shouldn't be any further problems introduced going from a quad-core architecture to a 80-core one, other than the issue of processor utilisation which depends on how many tasks need to be scheduled and how far the engineer can break down jobs into the parrallelisable tasks in the first place..)..?

Yup. But that's only the theory. Even MS says Vista doesn't scale at 8+ cores.

The problem is it only works if you can break your tasks up into very small pieces. It just doesn't make sense for a lot of software.
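The effect being described here is essentially Amdahl's law: whatever fraction of the work stays serial caps the speedup, no matter how many cores you add. A quick back-of-envelope calculation:

```python
def speedup(parallel_fraction, cores):
    """Amdahl's law: speedup = 1 / (serial + parallel/N)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Even with 90% of the work parallelisable, 80 cores barely double
# what 8 cores give you, and the ceiling is 10x no matter what:
print(round(speedup(0.9, 8), 2))   # → 4.71
print(round(speedup(0.9, 80), 2))  # → 8.99
```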
 
Yes, but it's a step in the right direction. This time last year, devs didn't even have that to help with their multicore programming. There will be new tools, new aids to understanding, and new techniques.

As for new languages, I don't think that's necessary. Existing multicore/parallel processing can already extract excellent parallelism from sequential languages.

No, as you state yourself: developers are excellent at extracting parallelism at a high level and implementing each subtask in a sequential language. Existing programming languages are absolutely pants at extracting parallelism. What Nick was talking about was a programming language that lets you express parallelism at various levels, something friendlier (more expressive) than Occam or the CSP variants of C/C++.

A large part of it is driving such a beast into the mainstream, so that you reach a point where most developers can use it, instead of the 5-10% of developers who can currently be tasked with parallel programming without fucking it up.

Addendum: I'm really discussing this in a general software engineering context. Game development is a slightly different beast because there is more focus on performance and workloads are more predictable.

Cheers
 
Yup. But that's only the theory. Even MS says Vista doesn't scale at 8+ cores.

The problem is it only works if you can break your tasks up into very small pieces. It just doesn't make sense for a lot of software.

True..

But a video game is different to an OS, and in video games you're dealing with a virtual world, a representation of the real world that is inherently parallel.. As a result you can build the basis of your game to support 8 threads, for example, with each thread dealing with a specific subsystem of your game.. With this setup, adding more cores helps only up to the point where, whenever a thread is ready to be dispatched, there is always at least one processor available..

Beyond that, adding more cores won't speed up your application at all.. However, since you're making a video game, the nature of your hardware utilisation isn't fixed, and you can pretty much always find a use for all those extra cores.. It isn't just about breaking tasks down (until they can't be broken down any further) into parallelisable threads, but also about adding new subsystems to utilise the extra hardware, which can really help enhance the representation of the game world.. If you have ten cores and you're processing 4 million polys on them, at the point where adding an extra core won't let you process those 4 million polys any faster, just bump up the number of polys to process, which should then automatically re-scale the distribution to more adequately utilise the available hardware..
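The "bump up the workload" idea above is easy to sketch: split the data per core, and the same code runs on 4 cores or 80, with more polys simply meaning fatter chunks. A toy illustration (trivial counting stands in for real per-poly work, and this is obviously not a real renderer):

```python
from concurrent.futures import ThreadPoolExecutor

def process_polys(num_polys, num_cores):
    """Split num_polys into per-core ranges and 'process' each range."""
    chunk = -(-num_polys // num_cores)  # ceiling division
    ranges = [(i, min(i + chunk, num_polys))
              for i in range(0, num_polys, chunk)]
    with ThreadPoolExecutor(max_workers=num_cores) as pool:
        # Stand-in work: just count the polys in each range.
        done = pool.map(lambda r: r[1] - r[0], ranges)
        return sum(done)

# Same code, any load, any core count -- the split rescales itself.
print(process_polys(4_000_000, 10))  # → 4000000
print(process_polys(8_000_000, 10))  # → 8000000
```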

It's easy to look at a 60/70/80/100 core architecture and ask "why do we need this many cores?" from a system design perspective.. However, when you're building fast, computationally heavy visualisation and simulation systems to encapsulate a "model reality", you can pretty much always find ways to increase your intended processing load to improve the "realism" of that "model reality", and thereby put any and all available processing cores to use..

It just takes a little imagination.. (& bucket loads of time/money/hard work..) ;)
 
Gubbi said:
I'm really discussing this in a general software engineering context...
Well that's quite a different argument to Pipo's query about tools for next-gen consoles, which is what I was talking about (and not Nick's comments on parallelism). But still, if multicore happens, tools will happen. Inevitably. Intel and AMD and IBM will create their own if there's no universal standard, because they are creating these processors and will need development tools to work with them! I can't envisage a future where, unlike the past 30+ years of computer development, one area of technology advances while the rest stops abruptly. If nothing else (such as university R&D), competition will drive development of tools. The CPU manufacturer who provides the tools to extract easy parallelism from their CPU is the one who shows massive performance/cost gains. It's been the same with GPUs: the IHVs have developed the tools to drive the hardware, developing shader languages, debuggers, analysis, etc. MS must also be feverishly beavering away, both at the chance of selling developers the next big thing in development languages, and at giving customers a reason to upgrade to a new OS, one that can use the multitude of cores in future PCs.
 
It was a nice interview. Somewhat casual. It was interesting to hear about the multicore approach. It'll certainly be interesting to see what they decide upon for the next console. (Gotta rewatch that bit; I can't remember if he said anything about just adding more of the same cores, complemented by the usual speed boost from smaller transistors.)

I watched about half of the interview, it was painful considering the wasted opportunity.

They must have gone over questions pre-interview that wouldn't get any sort of answer due to NDA.
 