Codename Vejle: Xbox 360 S combined CPU/GPU

AzBat

Dean Takahashi of VentureBeat got the goods on the new Xbox 360 S CPU/GPU. And evidently Valhalla wasn't the codename; the codename is Vejle instead.

(Article images: xbox-3.jpg, xbox-2.jpg, xbox-1.jpg)

Check out the rest of the article here...

http://games.venturebeat.com/2010/0...-processor-graphics-chip-in-the-new-xbox-360/

Tommy McClain
 
Very cool (literally). Thanks for the article.

Just wondering: is it possible MS could put two of these chips on one package and link them via a fast bus, a la the Core 2 Quad, for a 360 successor?
 
Pretty cool, I'll agree. This comment is definitely interesting...

You can’t just glue the two designs together. Rather, IBM had to get rid of its main communications channel between the chips, dubbed the front-side bus, and build a substitute for it.

More info from PC Magazine...

http://www.pcmag.com/article2/0,2817,2368176,00.asp

Evidently the details were announced today at the Hot Chips conference.

Tommy McClain
 
It's too bad they can't shut up the DVD-ROM and make a nice little quiet box. That thing drowns out everything else most of the time. Maybe they should add a big ol' internal ~500GB HDD and let you install your games to it.

Let me tell you that doing that sure made my Xbox 1 quiet. ;) And load times? What load times?
 
It looks like they were I/O limited on the GPU between the eDRAM and the memory bus, and it looks like they'll be I/O limited again if they try to shrink it. So I take it we can expect CPU/GPU/eDRAM on the same die in 2011 on the 32nm process?
 
Nice find!

What I want to know is how much each chip costs to make...;)

Any guesses?

Without knowing how much they pay IBM, ATI and others in royalties, it's hard to say how much each chip truly costs them. It's the overall cost that matters, not strictly the per-chip manufacturing cost.
 
What's most interesting, it appears that while the CPU was basically a straight shrink all the way down to the integrated die, the GPU was redesigned (layout is different) for each shrink.

Did AMD or IBM do the redesign for each iteration of the GPU?

Regards,
SB
 
Considering IBM is the chip designer, how much may MS have helped their competitors in developing their own integrated chip?

Cell + RSX, or does Nvidia hold a death grip on their chip?

It's too bad they can't shut up the DVD-ROM and make a nice little quiet box. That thing drowns out everything else most of the time. Maybe they should add a big ol' internal ~500GB HDD and let you install your games to it.

Let me tell you that doing that sure made my Xbox 1 quiet. ;) And load times? What load times?

Where have you been? You CAN install 99% of games directly to the HDD, which completely shuts off the DVD drive, and the new consoles have 250GB of storage, which is enough for 30+ installed games.
 
I'm pretty sure the FSB linking the two chips was designed with an eye on it disappearing, eventually.
 
Jon Stokes has a comment about the chip at Ars Technica. Apparently they introduced a module to add latency between the CPU and GPU, imitating the old FSB connection.

http://arstechnica.com/gaming/news/...ntel-amd-to-market-with-cpugpu-combo-chip.ars

Quote: It would have been easier and more natural to just connect the CPU and GPU with a high-bandwidth, low-latency internal connection, but that would have made the new SoC faster in some respects than the older systems, and that's not allowed. So they had to introduce this separate module onto the chip that could actually add latency between the CPU and GPU blocks, and generally behave like an off-die FSB.
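For illustration, here's a toy model of what such a module would be doing (made-up cycle counts, nothing to do with IBM's actual implementation): the bridge simply pads every on-die CPU<->GPU transaction out to the latency of the old external FSB.

Code:
/* Toy sketch only: an on-die bridge that inserts wait-states so the CPU sees
 * the same latency it would have seen over the old off-die FSB. All cycle
 * counts here are invented for illustration. */
#include <stdio.h>

#define NATIVE_ON_DIE_CYCLES  20u   /* hypothetical latency of a direct on-die link */
#define LEGACY_FSB_CYCLES    120u   /* hypothetical latency of the old external FSB */

/* Latency the CPU actually observes for one request to the GPU block. */
static unsigned bridge_latency(unsigned native, unsigned target)
{
    unsigned padding = (target > native) ? target - native : 0u;  /* inserted wait-states */
    return native + padding;
}

int main(void)
{
    printf("observed CPU->GPU latency: %u cycles (native %u + inserted wait-states)\n",
           bridge_latency(NATIVE_ON_DIE_CYCLES, LEGACY_FSB_CYCLES),
           NATIVE_ON_DIE_CYCLES);
    return 0;
}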
 
Jon Stokes has a comment about the chip at Ars Technica. Apparently they introduced a module to add latency between the CPU and GPU, imitating the old FSB connection.

http://arstechnica.com/gaming/news/...ntel-amd-to-market-with-cpugpu-combo-chip.ars

Quote: It would have been easier and more natural to just connect the CPU and GPU with a high-bandwidth, low-latency internal connection, but that would have made the new SoC faster in some respects than the older systems, and that's not allowed. So they had to introduce this separate module onto the chip that could actually add latency between the CPU and GPU blocks, and generally behave like an off-die FSB.

Why is it not allowed to have a faster connection? As long as game validation/testing takes place on older 360s, I don't see a problem. So what if some games tore less or dropped fewer frames on the slim... it'd be a nice incentive to upgrade, if you think about it.
 
What's most interesting, it appears that while the CPU was basically a straight shrink all the way down to the integrated die, the GPU was redesigned (layout is different) for each shrink.

Did AMD or IBM do the redesign for each iteration of the GPU?

Regards,
SB

It's probably automated place&route for the most part, creating different layouts based on the constraints of the given process and the dimensions of the rectangle that the logic must fit in.
 
Considering IBM is the chip designer, how much may MS have helped their competitors in developing their own integrated chip?

I was kinda thinking the same thing. If Nintendo bought a license to a ~400SP DX11 ATI GPU, they could get IBM to build them something very tasty for very little, using the experience they've built up here but targeting a 32nm process.

Double/triple the L2 cache, throw on a few more PowerPC cores, bump up the eDRAM so that it's big enough to accommodate a full 1080p or 720p w/2xMSAA framebuffer (~14MB, right?), and feed it with 1GB of high-end GDDR5 instead of 512MB of low-end GDDR3, and you've got yourself a very tasty little next-generation console. It should decimate the 360's performance and remove most of the niggly bottlenecks (no need for tiling to support AA, roughly triple the external bandwidth, twice the memory capacity, no puny L2 cache, you can actually take advantage of the lower latencies achieved through the integration, much higher and more accessible FLOPs, and tessellation support that's actually worth a damn) while keeping it super developer friendly.
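For what it's worth, the ~14MB figure roughly checks out as a back-of-envelope calculation, assuming 4 bytes of colour plus 4 bytes of depth/stencil per sample (just an assumption for the maths, not a specific eDRAM layout): 720p with 2xMSAA comes to about 14.1MB, while 1080p with no AA is closer to 15.8MB.

Code:
/* Back-of-envelope framebuffer sizes: 4 bytes colour + 4 bytes depth/stencil
 * per sample. Illustrative only, not an actual eDRAM layout. */
#include <stdio.h>

static double fb_mbytes(unsigned w, unsigned h, unsigned samples)
{
    const unsigned bytes_per_sample = 4u /* colour */ + 4u /* depth/stencil */;
    return (double)w * h * samples * bytes_per_sample / (1024.0 * 1024.0);
}

int main(void)
{
    printf("1280x720,  2xMSAA: %.1f MB\n", fb_mbytes(1280, 720, 2));   /* ~14.1 MB */
    printf("1920x1080, no AA : %.1f MB\n", fb_mbytes(1920, 1080, 1));  /* ~15.8 MB */
    printf("1280x720,  4xMSAA: %.1f MB\n", fb_mbytes(1280, 720, 4));   /* ~28.1 MB */
    return 0;
}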

Very impressive feat nonetheless, and it looks to have beaten AMD's Fusion to market by at least six months, which could be an interesting footnote in history.
 
bump up the eDRAM so that it's big enough to accommodate a full 1080p or 720p w/2xMSAA framebuffer (~14MB, right?)

In an age of deferred renderers, you're going to need a whole lot more for all the MRTs if you want to "solve" the tiling issue.
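For a rough sense of scale (G-buffer layouts vary a lot between engines, so treat these numbers as illustrative only), even a fairly lean G-buffer of three or four 32-bit render targets plus depth is already at or well past 14MB at 720p:

Code:
/* Rough deferred-rendering G-buffer sizes: N colour render targets at 4 bytes
 * each plus a 4-byte depth/stencil buffer per pixel. Real layouts differ
 * between engines; this is only to get a sense of scale. */
#include <stdio.h>

static double gbuffer_mbytes(unsigned w, unsigned h, unsigned num_rts)
{
    const unsigned bytes_per_pixel = num_rts * 4u /* colour RTs */ + 4u /* depth */;
    return (double)w * h * bytes_per_pixel / (1024.0 * 1024.0);
}

int main(void)
{
    printf("720p,  3 RTs + depth: %.1f MB\n", gbuffer_mbytes(1280, 720, 3));   /* ~14.1 MB */
    printf("720p,  4 RTs + depth: %.1f MB\n", gbuffer_mbytes(1280, 720, 4));   /* ~17.6 MB */
    printf("1080p, 4 RTs + depth: %.1f MB\n", gbuffer_mbytes(1920, 1080, 4));  /* ~39.6 MB */
    return 0;
}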
 
In an age of deferred renderers, you're going to need a whole lot more for all the MRTs if you want to "solve" the tiling issue.


So what would you say is a suitable amount of eDRAM then? Does having the main opaque geometry buffer fit snugly into eDRAM not cause a worthwhile speedup? Is that not the reason we see plenty of titles that target higher framerates yet still use a lot of transparency effects, like the Call of Duty games or Bayonetta, go with a framebuffer format that fits into the 10MB eDRAM? Would an extra 4MB of eDRAM really be all that expensive in 2011?

The eDRAM seems to have been a design win for the 360 by most accounts, so is ditching it entirely really a smart move? Or is 10MB always going to be enough, no matter what resolution/AA level you target?
 