So if indeed that smaller die is the eDRAM, that means MS has beaten both AMD and Intel in releasing a CPU w/GPU on die.
Regards,
SB
But didn't Sony beat all of them to this punch with the PS2?
If we're gonna go there, I'd give credit to Nintendo for that with the original Gameboy...
Of course, there are probably examples of integrated designs even older than the GB, when graphics capabilities were that rudimentary...
And I think there are some CPUs with low-end integrated graphics combined already, right?
If you're talking about Arrandale, the GPU + memory controller is on a separate die there. It will get combined when Intel is able to make more 32nm wafers.
So roughly speaking it's 34% smaller than Jasper overall in combined CGPU die sizes and 53% smaller than the original CGPU.
All things considered, the shrinks were quite "bad". Jasper->Valhalla is sort of OK, I guess, at 65% of the die size - I wonder if that die also includes some empty space, because the CPU and GPU were probably just stitched together. Considering the combination should also eliminate some I/O, it doesn't look that good either.
The really bad shrink was actually Xenon->Jasper (at 71% of the die size); for a full node shrink, the CPU in particular did terribly (well, if it really was at 90nm to begin with).
The eDRAM doesn't look to me like it's at 45nm (too big for a shrink from 80nm) - maybe 55 or 65nm. Looks like there are still challenges in integrating this, as it shouldn't have increased total die size a lot. It wouldn't have surprised me, though, if it had been integrated as well - IBM has certainly shown it can do it (eDRAM is used on POWER7). Well, maybe next revision, if there is one...
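To put the "bad shrink" complaint in perspective, here's a rough back-of-the-envelope check - a sketch (in Python) that assumes ideal area scaling with the square of the nominal feature size, which real chips never achieve since pads, I/O and analog blocks barely shrink:

def ideal_area_ratio(old_nm, new_nm):
    # Ideal die-area ratio for an optical shrink from old_nm to new_nm.
    return (new_nm / old_nm) ** 2

# (old node, new node, observed die-size ratio quoted in this thread)
shrinks = {
    "Xenon 90nm -> Jasper 65nm":    (90, 65, 0.71),  # assumes Xenon really was 90nm
    "Jasper 65nm -> Valhalla 45nm": (65, 45, 0.65),
}

for name, (old, new, observed) in shrinks.items():
    print(f"{name}: ideal ~{ideal_area_ratio(old, new):.0%}, observed ~{observed:.0%}")

# Prints roughly: Xenon->Jasper ideal ~52% vs observed ~71%,
# Jasper->Valhalla ideal ~48% vs observed ~65% - hence "quite bad".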
For comparison's sake, the AMD Propus Athlon II X4 core is 300M transistors and identical in size at 168mm^2. By comparison, the Xbox 360 CGPU has 130M more transistors, the same quantity of L2 cache and a memory interface which is twice as wide. I can't see any obvious reason to complain about the ratios by which they shrank the chips over the various process nodes, bearing in mind that the majority of the chip is logic, not cache.
Some good points. In fact the Xbox CPU should have only half the cache (1MB vs 2MB), though I don't know how much cache the GPU has. I thought, though, that the CPU was ~165M transistors and the GPU ~232M, which would be "only" about 100M more. The memory interface, however, has the same width (2x64-bit GDDR3). And the Propus core definitely also includes unused die area. Still, compared to that, the CGPU indeed doesn't look half bad. Keep in mind, however, that the graphics part is typically packed more densely (Redwood, for example, while being 40nm rather than 45nm, is over 600M transistors at only 104mm^2).
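A quick density comparison to illustrate that last point - the transistor counts and die sizes here are the rough figures quoted in this thread, not verified datasheet numbers:

# Approximate transistors per mm^2 for the chips discussed above.
chips = {
    "Athlon II X4 'Propus' (45nm)":    (300e6, 168.0),
    "Xbox 360 'Valhalla' CGPU (45nm)": (430e6, 168.0),  # 300M + the "130M more" estimate
    "Radeon 'Redwood' (40nm)":         (600e6, 104.0),
}

for name, (transistors, area_mm2) in chips.items():
    print(f"{name}: ~{transistors / area_mm2 / 1e6:.1f}M transistors per mm^2")

# Roughly 1.8M/mm^2 for Propus, 2.6M/mm^2 for the CGPU and 5.8M/mm^2 for Redwood -
# GPU-style logic really does pack considerably more densely than a CPU core.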
In any case, they probably didn't have any significant reason to bother; at 45mm^2 it's not like the eDRAM is costing a significant quantity of money to fabricate, and integrating it may very well increase their overall costs, since the yields for a larger main CPU die would be worse. They do not have the luxury of selling their chips at $500+++ like IBM does, so yields matter significantly.
I think having two separate dies adds packaging costs. IFF the eDRAM can be integrated without issues, it shouldn't affect yields a lot, since it shouldn't increase die size much (still well below 200mm^2 at 45nm).
It's really not so much that the size is just too big - it just looks to me like, given its size at 90nm, it didn't shrink particularly well.
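For the yield side of that argument, a minimal sketch using the classic Poisson defect-density model Y = exp(-D*A); the defect density is a made-up illustrative value, and the die areas are the rough numbers from this thread (the naive sum also ignores the inter-die I/O that integration would eliminate):

import math

def poisson_yield(area_mm2, defects_per_cm2=0.3):
    # First-order yield model: Y = exp(-D * A).
    # 0.3 defects/cm^2 is purely an illustrative assumption for a mature 45nm process.
    return math.exp(-defects_per_cm2 * area_mm2 / 100.0)

cgpu_only  = 168.0         # Valhalla CGPU die size quoted above (mm^2)
with_edram = 168.0 + 45.0  # hypothetical single die with the ~45mm^2 eDRAM folded in

print(f"CGPU alone:   ~{poisson_yield(cgpu_only):.0%} good dies")
print(f"CGPU + eDRAM: ~{poisson_yield(with_edram):.0%} good dies")

# With these assumptions the integrated die drops from roughly 60% to 53% yield -
# "not a lot", but every lost die matters more for a console chip than for a
# $500+++ IBM part.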
Agreed with all of that. It's very snappy compared to even my Jasper unit. It's now the quietest CE device I own (with games installed).
In terms of costs, would they be setting aside a certain amount of money per Xbox 360 sold for the RROD/E71 3-year warranty? If so, then this revision could be a significant cost saving against previous Xbox 360 models, because I doubt that the $1,000,000,000 set aside would cover more than the original units sold.
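Rough per-unit arithmetic on that reserve - only the ~$1,000,000,000 figure comes from the post above; the installed-base sizes are purely hypothetical placeholders:

reserve_usd = 1_000_000_000  # the warranty charge mentioned above

# Hypothetical installed-base sizes, just to show how the per-console reserve scales.
for units_sold in (20_000_000, 30_000_000, 40_000_000):
    print(f"{units_sold / 1e6:.0f}M consoles -> ~${reserve_usd / units_sold:.0f} reserved per console")

# Every Valhalla unit that never needs an RROD repair effectively claws back
# that per-console reserve compared with the earlier revisions.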