Xbox 360 Uncloaked eBook

Alstrong said:
Makes me wonder what CPU they'll use in the future. My memory is a little hazy, but it seems like the only consoles that will have relatively common hardware architectures are the GCN and Wii. It would be "smart" for MS to follow a similar path there, and perhaps add in OOOE to the design specs. But then I wonder if they will turn out to follow a Cell-like asymmetry that AMD is also kind of planning with co-processors instead of duplicate cores. I guess that all depends on the next set of surveys on what developers want.

That will be interesting to watch.

I looked back over the book and one of the options that they considered for 360 was an Intel processor (single core) with several co-processors.

As to Vista cross-platforming, I don't recall seeing that mentioned in there. Maybe someone else can correct me?
 
Hmm, I used to think that the 360 CPU was a bad choice (except for cost), but thinking about how it's used now, it seems slightly better than I thought, especially cost-wise.

In about the size of a modern-day CPU, they have 3 cores, each probably delivering around the performance of a 2GHz to 2.4GHz Pentium 4. That's enough to give about 60fps in nearly all recent PC games (and more than enough beyond that, considering that many console games target 30fps), and so far PC games have been the model for Xbox 360 games.
Then you have two other cores to help out with other game stuff, and they're cheaper to have than additional discrete hardware. The 2nd core can do additional graphics stuff (perhaps hidden surface removal, or maybe some physics), and the 3rd core can handle something like audio, at much higher quality than could be had out of any Sound Blaster, with Dolby Digital encoding. It delivers a good enough CPU, plus all the additional hardware needed in a console, while keeping costs down; it only sacrifices core CPU abilities in order to better do the stuff that would normally go to discrete logic. So not only is the actual design and production cheaper for Microsoft, but it saves them from needing additional chips that can only serve one purpose.
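To make that division of labour concrete, here's a minimal sketch of the kind of per-frame split being described (purely illustrative Python; the task names and the simple thread-per-core model are my own assumptions, not anything from the book):

Code:
import threading

# Hypothetical per-frame workloads, one per core.
def game_logic(frame):
    pass      # core 1: input, AI, gameplay state

def graphics_assist(frame):
    pass      # core 2: GPU helper work, e.g. visibility culling or physics

def audio_mix(frame):
    pass      # core 3: software mixing plus Dolby Digital encoding -
              # work a PC would push off to a discrete sound card

def run_frame(frame):
    # One thread per core; a real engine would pin long-lived threads
    # to cores rather than spawning them every frame.
    workers = [threading.Thread(target=f, args=(frame,))
               for f in (game_logic, graphics_assist, audio_mix)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

for frame in range(2):    # e.g. two frames of a 30/60fps loop
    run_frame(frame)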
 
This is a very good thread :)

I'm learning tidbits of interesting info about the 360 and the politics behind it...

Thank you for the link, I'll definitely buy the book!!!
 
I also find the book fairly interesting (haven't finished it yet, though), but there is also plenty of repetition to be had (probably to facilitate reading individual chapters).
 
[maven] said:
I also find the book fairly interesting (haven't finished it yet, though), but there is also plenty of repetition to be had (probably to facilitate reading individual chapters).

I agree. Especially when you more or less read it all in one go.

It's not so bad when you're reading a chapter a day or so, since there are a lot of facts and names involved in the story... Also, since a lot is happening at the same time, the book jumps back and forth in time, but there's not much Dean could have done to prevent that, I guess.

Great read. Found 3 minor errors by the way. ;)

Edit - There's also some PS3 stuff in there. Apparently Ken's first design was built around two Cell CPUs (no dedicated GPU!). After that they wanted to try a new version of the PS2 technology, but in the end they went for a 'PC' solution (mostly because of the shader technology they didn't have).
 
Tap In said:
The lack of quality GDDR3 memory was evidently the cause of the bottleneck in production. Besides a lack of sufficient quantity last year, they had to take the Samsung parts and test them for speed on the factory floor before installation.

Just a small thing: the Samsung chips were fine; the problem was with the Infineon chips.

And the PGR3 team discovered the problem in one of their dev kits by the way.
 
Alstrong said:
Makes me wonder what CPU they'll use in the future. My memory is a little hazy, but it seems like the only consoles that will have relatively common hardware architectures are the GCN and Wii. It would be "smart" for MS to follow a similar path there, and perhaps add in OOOE to the design specs.
If devs are familiar with the current system, why not just expand on that? A switch to a monolithic OOO CPU, for example, would leave the skills learnt from the previous 5 years working on XeCPU in the mud, and take up space for developer niceties that could be used for performance. That's something doable, but in Sony's case, devs working on Cell now will be able to (assuming everything becomes Cell-derived) move straight onto PS4 knowing exactly what's what, working in pretty much the same ways, with Cell offering higher performance per mm^2 from not spending transistors on developer niceties.

The problem of parallelisation isn't going away, so there's that to contend with. The only other traditional dev feature is OOO, and that'll consume space. I think something similar to XeCPU, maybe with 9 cores, would be a good choice. Though I guess if MS can provide the compilers and tools for a new processor so development is transparent for the devs, it wouldn't matter.
 
pipo said:
Edit - There's also some PS3 stuff in there. Apparently Ken's first design was built around two Cell CPUs (no dedicated GPU!). After that they wanted to try a new version of the PS2 technology, but in the end they went for a 'PC' solution (mostly because of the shader technology they didn't have).

There are papers out there about this. They detailed a version of OpenGL ES 2.0 whereby the scene would be split up into horizontal, vertical and even depth (?!) tiles, rendered in cache (?) on each SPU and then composited.
Needless to say, it was a tad too ambitious and at the end of the day wasn't going to be fast enough. It probably wasn't helped by OpenGL not being the ideal API for a tile-based renderer (for a start...).
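For what it's worth, the core idea is easy to sketch (a toy Python illustration under big simplifying assumptions: plain 2D screen-space tiles, no depth splits, and render_tile standing in for whatever an SPU would actually do in its local store):

Code:
# Toy tile-based renderer: split the screen into tiles, "render" each
# tile independently (as an SPU would, into its local store), then
# composite the finished tiles back into one framebuffer.
WIDTH, HEIGHT, TILE = 1280, 720, 64

def render_tile(x0, y0, w, h):
    # Stand-in for per-SPU rendering of one tile; returns a w x h buffer.
    return [[(x0 + x) ^ (y0 + y) for x in range(w)] for y in range(h)]

framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]
for y0 in range(0, HEIGHT, TILE):
    for x0 in range(0, WIDTH, TILE):
        w, h = min(TILE, WIDTH - x0), min(TILE, HEIGHT - y0)
        tile = render_tile(x0, y0, w, h)
        for y in range(h):                      # composite step
            framebuffer[y0 + y][x0:x0 + w] = tile[y]

print(len(framebuffer), len(framebuffer[0]))    # 720 1280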
 
I believe that the decision to go with IBM was because of the famous Cell patent by Suzuoki, discovered in the US patent filings around August 2003. You need to know your enemy to win.

And about Sony: the second Cell was the Visualizer, alias Reality Synthesizer, which in the patent was different from the CPU Cell. I am sure the original Cell+RS idea was (CPU + vertex shader) + (rasterizer + pixel shader).
 
Urian said:
I believe that the decision to go with IBM was because of the famous Cell patent by Suzuoki, discovered in the US patent filings around August 2003. You need to know your enemy to win.

From the book it seems that the biggest issue was that Intel wouldn't give up the IP, and MS wanted to cut costs by redesigning components and (in the long term) better integrating the CPU and GPU.

Intel wouldn't let them.

On top of that, Intel tried to push a big, fat, hot Pentium, which they decided to ditch later anyway.
 
Shifty Geezer said:
If devs are familiar with the current system, why not just expand on that? *snip*

Yeah, I suppose you're right there. I was kind of just throwing an idea out there with OOOE, but after thinking about it (and after some sleep!), it would make sense to just add more cores with a healthy increase in clock speed. Supposing they start out at 32nm in 5 years' time, they should be able to fit 8 times the number of transistors on-die compared to 90nm. As per your suggestion, they could triple or even quadruple the number of cores. Perhaps they should work on adding "true" dual threads to avoid context switching? I'm not sure what the transistor cost of that would be. The rest of the space could then be taken up by a massive L2 cache.

In the die shot of Waternoose, it looks like the 1MB L2 takes up about the same area as a single core (I'm not too clear on this, but it looks like they have it in 256kB chunks). With 12 cores @32nm, that still leaves half the area of the 90nm three-core die with which to play. They could then add up to 16MB of L2 (20MB total) to take up the remainder of the space (I assume linear space consumption, so that 1MB == the area of 1 core as per the picture). That can be roughly split into just over 1.66MB per core... which I take it should be decent :?: This also assumes they make no changes to the cores at all. So scale back to 1MB per core, giving spare space approximately equal to 8 cores, to allow for some changes to the 12 cores. I'm not too sure what would take 67% more die area, perhaps the transistors needed for two separate threads to exist simultaneously (to avoid context switching?). Someone please correct me on that!
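As a quick sanity check of that arithmetic (a back-of-the-envelope Python sketch only, assuming ideal area scaling from 90nm to 32nm and taking the 1MB-L2 ≈ 1-core area equivalence above at face value):

Code:
# Ideal density scaling: the transistor budget grows with the square
# of the feature-size ratio (real-world scaling is usually worse).
scale = (90 / 32) ** 2          # ~7.9x, i.e. the "8 times" figure
print(round(scale, 1))          # 7.9

# Express the 90nm die in "core-equivalents": 3 cores + 1MB L2 ~= 4.
budget = 4 * scale              # ~31.6 core-equivalents at 32nm
cores = 12
l2_mb = budget - cores          # leftover area, counted in MB of L2
print(round(l2_mb, 1))          # ~19.6, close to the 20MB total above
print(round(l2_mb / cores, 2))  # ~1.64, near the 1.66MB-per-core figure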

At 32nm, I'd like to see what they'll pack into GPUs :oops:
 
pipo said:
From the book it seems that the biggest issue was that Intel wouldn't give up the IP, and MS wanted to cut costs by redesigning components and (in the long term) better integrating the CPU and GPU.

Intel wouldn't let them.

True. MS owning the IP, and its ability to outsource (and redesign/shrink) the chips, was paramount and ultimately what disqualified both Intel and Nvidia (besides the Intel chip offerings being too big/hot).

Although both were "considering" it to a degree.
 
Hmm, I think I've heard the dual-Cell idea before. I think it makes a lot more design sense (with the way Cell is designed) than the current design, where Cell may or may not be able to help RSX with graphics.

I'd expect a multicore monolithic CPU for Microsoft's next gen, because there's a lot they can do to improve performance per clock.

BTW, does the book say anything about Microsoft's decision to go with SiS for the chipset?
 
Urian said:
I believe that the decision to go with IBM was because of the famous Cell patent by Suzuoki, discovered in the US patent filings around August 2003. You need to know your enemy to win.
I'd agree that the patents let MS know to what lengths they needed to go to be in this race (and when; they absolutely expected Sony to launch in 2005). I'm not sure (from the book) if Cell was the reason for choosing IBM, as the author made it seem that there were several reasons.

But I see your point about "know thy enemy". :)
 
pipo said:
Just a small thing: the Samsung chips were fine; the problem was with the Infineon chips.

And the PGR3 team discovered the problem in one of their dev kits by the way.

Ahh, indeed, thank you.
 
Fox5 said:
....

BTW, does the book say anything about Microsoft's decision to go with SiS for the chipset?
Not really, it just casually mentions choosing them for the south bridge.

And then this... :smile:

While the graphics chip and the CPU were the most time-sensitive, there was a lot of unsung work on the rest of the components. Taiwan’s Silicon Integrated Systems was designing the “south bridge,” or the input-output communications chip that allowed the major chips to communicate with the outside world or the hard disk drive. Adamec said the chip stayed on schedule and never rose high on the radar. That was thanks to an engineer named Yahbin Sim. He had visited Taiwan so often that the manager of the Ambassador Hotel in Taipei took him out to dinner. One of his counterparts at SiS invited Sim over so frequently that the man’s children called him “Uncle Yahbin.”
 
Alstrong said:
Yeah, I suppose you're right there. I was kind of just throwing an idea out there with OOOE, but after thinking about it (and after some sleep!), it would make sense to just add more cores with a healthy increase in clock speed. *snip*

The revolution in performance in the next-gen consoles released in 2011 will probably come from memory technology. A few years from now, when Microsoft gets out its shopping cart for the Xbox 720, Magnetic RAM (MRAM) could greatly influence the CPU architecture.
 
I really, really, really wanna read this book.

But I went to Barnes and Noble tonight and didn't see it. Not sure if I just couldn't find it or they don't carry it... I didn't ask any employee, so I'm not totally sure they don't have it. I'm not even sure what section it would go in... there's no "videogames" book section.

Anyway, I wanna start reading it tonight... how does this ebook work? I assume it's mostly for laptops (which I'm not on). Don't really wanna read it hunched over a desk...
 
sonyps35 said:
....

Anyway, I wanna start reading it tonight... how does this ebook work? I assume it's mostly for laptops (which I'm not on). Don't really wanna read it hunched over a desk...

You could print it out...
 