If Microsoft sells more Xenons, only Microsoft makes more money.

They are licensed to use the PPC design, not buying it, so IBM is making money too. It's the same for the AMD GPU.
I think this is a really good point. Microsoft basically owns the Xenon design. They paid IBM to design it and then IBM turned it over to them. It really was "contracted out" rather than being some sort of big collaboration. IBM isn't even fabricating all the Xenons; Microsoft has a second-source supplier. So, this is one of the reasons that IBM isn't hyping up the Xenon chip: they have nothing to gain from it! If Microsoft sells more Xenons, only Microsoft makes more money. I think IBM gets a cut on each Cell chip sold...
So, this goes back to my point about Xenon being a reasonable part. It is three cores on 90nm (165 million transistors total). For the next-generation XBox it would be on at least 45nm, but more likely 32nm. It seems like IBM could just put several Xenon cores on a chip (8 or more) and make a pretty reasonable processor for the next XBox.
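For what it's worth, here's a back-of-the-envelope sketch of that argument in Python. It assumes ideal transistor-density scaling with feature size (real shrinks are always worse) and a simple three-way split of the 165M-transistor total, so treat the numbers as rough upper bounds:

```python
# Rough estimate: Xenon-class core counts after a process shrink.
# Assumes ideal density scaling and an even 3-way split of the
# 165M-transistor total (which really includes shared L2 cache).
xenon_transistors = 165e6            # 3 cores at 90 nm
per_core_budget = xenon_transistors / 3

for node in (45, 32):
    density_gain = (90 / node) ** 2  # ideal transistor-density gain
    budget = xenon_transistors * density_gain
    cores = budget / per_core_budget
    print(f"{node} nm: ~{budget / 1e6:.0f}M transistors, "
          f"~{cores:.0f} Xenon-class cores in the same area")
```

Even allowing generous overhead for uncore and cache growth, that comfortably covers the 8-or-more-core scenario.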
I would still assume a companion GPU chip (Xenon is not a GPU).
In contrast, Larrabee (like it or hate it) is targeting the GPU space.
What's wrong with Larrabee being the GPU for the next XBox, even if the CPU is a Xenon with more cores? Also, couldn't IBM design their own Larrabee-like GPU with PowerPC cores for the next XBox?
I respect that you disagree. Let me try to convince you anyway... I also see I've now got myself in heated debate in two threads, which means I'm unlikely to keep up for long...
I was thinking of the non-GPU computations of gaming, actually, as that is what Xenon was solely designed for.
This is mostly due to contracts and economic incentives. Xenon was bought and paid for by Microsoft: IBM designed the chip and simply turned the design over to Microsoft. It wouldn't surprise me if IBM couldn't sell Xenon chips even if it wanted to.
In contrast, Wikipedia estimates $400 million was spent on the R&D for Cell. I think IBM gets a cut on each Cell chip sold (as they still own IP on the design).
One more reason I'm not impressed with Cell: when Cell started out, it was going to be both the *GPU* and *CPU* for the PS3. It was more like AMD Fusion (or something) in that regard. In the end, Sony realized it was going to really suck as a GPU, so they quickly (in a huge panic) talked to NVIDIA to get them out of a tight spot.
I apologize for going off topic. My main point wasn't to trash Cell (or, not just to trash Cell), but to say that if Microsoft was happy with Xenon, why wouldn't they just go back to IBM again?
I think more than anything, AP, some of your statements here betray a lack of familiarity with the Cell project... and even, to an extent, with the XeCPU, despite your fondness for the latter.
Keep in mind that both the Cell's PPC core and the Xenon's trio shared a common origin/design based on earlier IBM prototypes.
Indeed MS owns the IP for the processor, in the sense that it has been licensed to them, but that certainly does not preclude IBM from using a near-identical design for their own purposes.
If there is an advantage you see in this chip vs Cell itself or other alternative architectures in the scientific space, you're not doing a good job of clarifying what you feel it is.
Yes, they could... but will they? Even if IBM were to design the next XBox chip, which I suspect they will, I doubt it would go without changes of some significance. ... So, whatever happens with Sony on their CPU, I would expect some changes for the XeCPU, though while maintaining direct lineage, of course, for legacy 360 code.
Well, but that's the crux of it all, isn't it? And again, I'm a fan of what it's looking to achieve, but whether Larrabee will be successful in the GPU market is still an open question.
And either way in truth it must be approached from an angle that sees convergence across what until now have been distinct architectural paradigms (CPU vs GPU).
For the record although I think you're keyed in to a common misconception here based on the original Cell BE patent, the Cell was not ultimately meant to be the original GPU. That was going to be a Toshiba design and we'll leave it at that for now. But yes, they switched to NVidia, IMO mainly due to ISA/approachability concerns. Just wanted to clarify though that although the patent had Cell as rasterizing, the actual design work centered around a separate GPU (though very exotic in its own right).
They are licensed to use the PPC design, not buying it. So IBM is making money too.
I was under the impression that when IBM agreed to build the chips for all three game consoles (Wii, PS3, and XBox 360), Sony and Toshiba got sort of worried. When I visited IBM in Austin back in 2003, you needed separate "STI" credentials to get into the buildings in which Cell was being designed. That is, a regular IBMer not involved with the project didn't have access. From what I understand, this was part of the internal firewall between the Xenon designers and the Cell designers. For good reason, Sony just didn't want all their R&D to flow over to the XBox. So, we get things like Xenon and Cell both using 128-register, 128-bit SIMD, but they aren't binary compatible (or quite the same instructions, from what I can tell). The SPEs aren't really PowerPC so much as a new ISA inspired by PowerPC, but I digress.
I can certainly believe that Xenon's cores and Cell's bigger dual-threaded core have a common ancestor, or perhaps that was the one part of Cell they could share. I suspect some of the work on Power6 (which is also in-order with two threads) could have impacted both, but I really don't know the relative timeframes of the three projects or how they interrelate.
I'm pretty sure Microsoft has patents on some of the new instructions in Xenon.
As I expressed somewhat on the other thread, I'm really a fan of the cache-coherent, shared-memory model of today's multi-core CPUs, Larrabee, and Xenon.
I'm not so much a big fan of the Cell and GPU-style of memory management. It is just a bias I have, I guess. When I look at Cell's message passing and full/empty bits stuff, it just reminds me of all the supercomputing companies throughout the decades that failed.
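To make the bias concrete, here's a toy Python sketch (my own illustration, not real SPE code) of the software-managed local-store style: the slice copy stands in for an explicit DMA transfer along the lines of mfc_get. On a cache-coherent design (Xenon, Larrabee, ordinary multi-cores), the loop would just index main memory directly and let the cache hierarchy do the staging:

```python
# Toy simulation of Cell's software-managed local-store style in plain
# Python: the "DMA" here is just a slice copy standing in for mfc_get.
# The point is that the *programmer*, not the hardware, moves the data.

CHUNK = 4  # elements per simulated DMA transfer into the local store

def process_with_local_store(main_memory):
    local_store = [None] * CHUNK        # small, fast, explicitly managed
    out = []
    for base in range(0, len(main_memory), CHUNK):
        # Explicit transfer: stage a chunk into the local store.
        local_store[:] = main_memory[base:base + CHUNK]
        # Compute only on what was staged in.
        out.extend(x * 2 for x in local_store)
    return out

print(process_with_local_store([1, 2, 3, 4, 5, 6, 7, 8]))
```

Every real SPE kernel has to carry this staging logic (usually double-buffered, which this sketch omits), and that's exactly the kind of per-machine bookkeeping those failed supercomputer architectures pushed onto programmers.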
The really interesting thing about Larrabee is that it is the CPUs overtaking GPUs rather than just a meeting-in-the-middle type convergence (just as GPGPU is really about GPUs taking over the CPU's role).
Yet, clearly Cell is IBM's favorite son, and Xenon is the red-headed step child. I guess Cell gets all the press and attention from IBM (and consequently the media), while a quite reasonable chip such as Xenon gets somewhat ignored. If I was one of the engineers on Xenon, I'd be annoyed.
But as an aside though, it's worth mentioning the XeCPU's cache is prone to thrashing... or such is the word on the street.
Xenon is a mediocre design at best, one that just barely gets the job done and, given the timing, was MS's best bet.
Specifically as it relates to the ISA, this post/translation from a retrospective on the subject is a nice window into the past:
http://forum.beyond3d.com/showpost.php?p=517078&postcount=28
And certainly as to the decisions that went into Cell as a whole, it's a good thread in general.
I forget the name/code of the specific core, but they were based on an IBM prototype circa ~2000 that was a testbed for high-clock, in-order design. How much they both (Cell PPE vs Xenon core) shared in common development from that point, and how much of their similarities are the result of parallel development, I've certainly wondered myself as well...
Well again though, it goes back to markets. XeCPU's not going to go up against IBM's Power 6 in the server arena, and I don't see where it competes with Cell in the HPC space...
When you take a closer look at Xenon it's pretty clear that it was developed in a rush.
Some of the things like store-queue gotchas should never have been in the final product.
And then there are indications that it was supposed to run a lot faster than it does: Two cycle basic ALU result-forwarding latency, six (!!!) cycle load-to-use latency in the D$ while only hitting 3.2GHz on a state of the art process.
It's also pretty big for a dual-issue in-order core (larger than an Athlon 64 core!); it was probably laid out using automated tools.
There is a lot of room for improvement for the next XBox.
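Those cycle counts are easy to put in wall-clock terms; a quick Python check at the stated 3.2GHz:

```python
# Convert Xenon's quoted pipeline latencies to nanoseconds at 3.2 GHz.
freq_hz = 3.2e9
cycle_ns = 1e9 / freq_hz             # one clock cycle in ns

for name, cycles in (("ALU result forwarding", 2), ("D$ load-to-use", 6)):
    print(f"{name}: {cycles} cycles = {cycles * cycle_ns:.3f} ns")
```

Nearly 2ns just to use a loaded value is a lot of dead time for dependent code, which fits the suggestion that the pipeline was originally aimed at a much higher clock.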
And Larrabee, for... oh, no particular reason...

A 6 cycle load-to-use latency does seem pretty grim. Any idea what it is for the Cell's PPC or SPUs?
Can somebody remind us of the size of an SPE core (without the LS)?
Xenon's numbers are at 90nm. Also from an IBM paper: "The SPE design has roughly 20.9 million transistors, and the chip area including the SMF is 14.8 mm2 (2.54 mm x 5.81 mm), fabricated with a 90-nm silicon-on-insulator (SOI) technology. The 65-nm version of the design is 10.5 mm2."
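As a sanity check on those figures, the quoted 65nm shrink can be compared against ideal area scaling (a quick Python calculation; a gap like this is normal, since wires and pads don't shrink linearly):

```python
# Compare the quoted SPE 90 nm -> 65 nm shrink against ideal area scaling.
area_90nm = 14.8                    # mm^2, SPE incl. SMF at 90 nm (quoted above)
area_65nm = 10.5                    # mm^2, quoted 65 nm version

ideal = area_90nm * (65 / 90) ** 2  # perfect linear-shrink prediction
print(f"ideal: {ideal:.1f} mm^2, quoted actual: {area_65nm} mm^2")
```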
And Larrabee, for...oh, no particular reason...
And why not a Larrabee core?