Xbox 360 eDRAM. Where are the results?

To be honest, I really don't see the big point of the EDRAM. It's supposed to save money by cutting the framebuffer bandwidth out of the total bandwidth budget, but in return it adds the cost of the extra die and the special packaging that it requires.
True, it makes alphablending cheaper, but massive blending effects are on the way out in favour of pixel shader effects.
The power of Cell can be used to do very detailed bounding box culling checks, cutting out a lot of overdraw. I guess Xenon could be used for something similar, so overdraw isn't a big deal either.
I have yet to see a 360 game with satisfactory AA, so for now the cheaper AA thing doesn't seem to be true.

Haven't played Crackdown?
 
I'm not so sure. I think you could argue that a lot of the best graphics on 360 are UE3: Gears, Mass Effect, etc. In fact, there don't seem to be many 360 games with outstanding GFX that AREN'T UE3.

If anything my feeling is that UE3 runs great on 360, most likely better than it does on PS3 from what limited info we have so far, and that it has in a way single-handedly "rescued" the 360 from suspicions of hardware inferiority.

And do you know what I find interesting about UE3? It doesn't use the edram.. Many look at that as a bad thing, but I look at it and wonder if not messing with tiling is in fact part of the reason it looks so good. Remember, UE3 doesn't use AA, so it's not messing with tiling; the whole FB fits into the EDRAM, which probably greatly simplifies things.

I kinda think the 360 GPU is awesome, apart from the EDRAM. I mean, it does have 48 shaders and it is unified. The EDRAM is a necessary part of the design because they don't have a 256-bit bus, but I don't know whether on balance it's a plus or minus. I think it might kinda even out. We don't know what Xenos would do with a traditional 256-bit bus; maybe it would do better.

That's why it's a blessing as well: it allowed them to get great-looking games out the door quickly, or at least faster than they would've otherwise. But on the other hand, they have adopted it very widely, which comes at the cost of not having custom engines built until 2 or 3 years into the lifecycle.

If you look at MS first party, only the Rare games, Halo, Blue Dragon and Forza do not use UE3, I believe. Alan Wake as well, though it's not due until 2008.
 
It doesn't use the edram..
Let me clarify this as some don't seem to get it. EVERY game uses the edram, as that's where the ROPs are. You seem to be talking about tiling, which doesn't have to be used if the frame buffer is small enough, but the edram is used.
 
Without the edram the Xbox 360 would have roughly half the memory bandwidth of the PS3. Ignoring edram, if we compare the bandwidth of the 360 GPU to low-end PC GPUs they are similar; both have a 128-bit bus. We know that this usually severely cripples low-end GPUs in comparison to their midrange counterparts. Add to this that the 128-bit bus in the 360 is shared between the CPU and GPU, further reducing the effective bandwidth to VRAM. In typical cases a single CPU alone in a PC environment fully saturates memory buses of similar if not greater capacity during game rendering; the 360 has 3 CPU cores and a GPU all hitting the same bus. The fact that the 360 can crank out the kinds of scenes it does at 720p with such low bandwidth to VRAM is a huge testament to the effectiveness of the edram.
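
To put rough numbers on that (back-of-the-envelope, using the commonly quoted retail clocks; treat the exact figures as approximate):

```latex
% bandwidth = (bus width in bits / 8) x effective data rate
\begin{align*}
\text{360 GDDR3 (128-bit):} \quad & 16\,\text{B} \times 1.4\,\text{GT/s} = 22.4\,\text{GB/s (shared by CPU and GPU)}\\
\text{PS3 GDDR3 (128-bit):} \quad & 16\,\text{B} \times 1.3\,\text{GT/s} \approx 20.8\,\text{GB/s}\\
\text{PS3 XDR (64-bit):} \quad & 8\,\text{B} \times 3.2\,\text{GT/s} = 25.6\,\text{GB/s}\\
\text{PS3 combined:} \quad & \approx 46.4\,\text{GB/s}
\end{align*}
```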

I'm so impressed with the results that I think all GPUs should come with edram on a separate chip like the 360 GPU. Sure, the cost will be high initially, but eventually these chips will become cheap, and because they are separate from the GPU the same design can be used across many products. We need a chip with enough edram to support 1600×1200 with 8xAA at a cost of around $40. If one of the GPU vendors adds this while the other does not, it will be earthshaking.
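
For scale, that target needs a lot more than 10MB. Assuming 32-bit colour plus 32-bit Z per sample and no compression (my assumptions, not a quoted spec):

```latex
\begin{align*}
1600 \times 1200 \times 8\,\text{samples} \times 8\,\text{B} = 122{,}880{,}000\,\text{B} \approx 117\,\text{MiB}
\end{align*}
```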

This is why we keep hearing about memory companies investing in creating new edram chips and edram alternatives. I think they are getting very close to hitting a threshold in size, cost, and performance where CPU and GPU vendors will start to buy these chips en masse to include in their desktop products. This new demand will be huge in comparison to the markets that currently purchase these kinds of chips. In many ways these developments are out of the hands of the GPU vendors and depend solely on the memory companies' ability to deliver.
 
To be honest, I really don't see the big point of the EDRAM. It's supposed to save money by cutting the framebuffer bandwidth out of the total bandwidth budget, but in return it adds the cost of the extra die and the special packaging that it requires.
That cost will scale down very quickly, especially if TSMC can take care of EDRAM. Using only 4 memory chips (eventually) and a simpler board design will be a big advantage down the road. The unified memory pool is great also, and if done with a 256-bit bus you'd have trouble scaling down your chips with time.

Microsoft wanted a unified memory pool to make it easier on devs, and they wanted a 128-bit bus for cost reasons. You have a better solution? TBDR is the only other possibility, and it's high-risk/low-reward.
True, it makes alphablending cheaper, but massive blending effects are on the way out in favour of pixel shader effects.
Pixel shaders and blending effects have very little to do with each other. Very rarely, if at all, does one replace the other.
The power of Cell can be used to do very detailed bounding box culling checks, cutting out a lot of overdraw. I guess Xenon could be used for something similar, so overdraw isn't a big deal either.
Bounding box culling is good for frustum culling, thus reducing geometry, not overdraw. For overdraw reduction you do rough front to back sorting to help the GPU use early z-reject.
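
A minimal sketch of the kind of sort meant here, with a hypothetical DrawCall record (a real engine would also bucket by render state):

```cpp
#include <algorithm>
#include <vector>

// Hypothetical draw-call record; viewDepth is the camera-space distance
// to the object's bounding volume, computed during culling.
struct DrawCall {
    float viewDepth;
    // ... mesh, material, transform handles ...
};

// Rough front-to-back ordering for opaque geometry: near objects draw
// first, so occluded fragments behind them fail early-Z and their
// pixel shaders never execute.
void SortOpaqueFrontToBack(std::vector<DrawCall>& calls) {
    std::sort(calls.begin(), calls.end(),
              [](const DrawCall& a, const DrawCall& b) {
                  return a.viewDepth < b.viewDepth;
              });
}
```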
I have yet to see a 360 game with satisfactory AA, so for now the cheaper AA thing doesn't seem to be true.
PS3 fans are so quick to say that we need to give it time for Cell to be used, but why shouldn't XB360 devs be given time to implement tiling? It'll appear in games quite soon.
 
But I have never heard any of the devs talk about the amazing things they will be able to do with 256GB/sec of FB bandwidth...

I'm pretty sure Capcom said they were using the edram for some of those amazing effects in Lost Planet.

But I'm also pretty sure you could find a way to do something which looks almost the same on any other platform.
 
That cost will scale down very quickly, especially if TSMC can take care of EDRAM. Using only 4 memory chips (eventually) and a simpler board design will be a big advantage down the road. The unified memory pool is great also, and if done with a 256-bit bus you'd have trouble scaling down your chips with time.

Trouble is, the EDRAM does not completely equal/replace the missing bandwidth, and it comes with its own set of problems and limitations (the dramatic climb in resources required as resolution increases being one).
It’s still an open question which is best: bandwidth-saving hardware and software techniques, or EDRAM?

Microsoft wanted a unified memory pool to make it easier on devs, and they wanted a 128-bit bus for cost reasons. You have a better solution? TBDR is the only other possibility, and it's high-risk/low-reward.

I’m just not convinced that EDRAM is a good solution for HDTV resolutions at this time. Maybe in 5 years’ time, when processes have improved, but not now.

Pixel shaders and blending effects have very little to do with each other. Very rarely, if at all, does one replace the other.

Oh, I can think of quite a few, smoke and dust being one of the first that comes to mind. You’ll still need alphablending, I’m not debating that, just not in the exorbitant amounts that you used to need to achieve certain effects.
Bounding box culling is good for frustum culling, thus reducing geometry, not overdraw. For overdraw reduction you do rough front to back sorting to help the GPU use early z-reject.

You are talking about current implementations. With the kind of power present on Cell things are quite different. It’s much cheaper to not even start processing a complex piece of geometry if it isn’t needed in the scene.

PS3 fans are so quick to say that we need to give it time for Cell to be used, but why shouldn't XB360 devs be given time to implement tiling? It'll appear in games quite soon.

Cell is a CPU; tiling is a single function in Xenon/Xenos.
Devs have had almost 1.5 years to implement it properly. I’m not saying it’s never going to happen, but a lot of signs point to it being far from as easy, cheap and hassle-free as Microsoft made it out to be.
 
I'm pretty sure Capcom said they were using the edram for some of those amazing effects in Lost Planet.
What does that even mean? "Using the edram"? All 360 games "use" the eDRAM in the sense of it being a framebuffer. That 256 GB/sec is not free-rein bandwidth that you can use any way you want -- the only way to get at all of it is to crank up the AA to 4x. It doesn't necessarily *do* anything that another GPU doesn't or can't do -- it just does it without being bandwidth limited.
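
For what it's worth, one common back-of-the-envelope reading of where that 256 GB/sec figure comes from (the ROP-to-eDRAM path on the daughter die; the per-sample byte counts are my assumption):

```latex
% 8 pixels/clock x 4 samples x (colour read+write + Z read+write) x 500 MHz
\begin{align*}
8 \times 4 \times (4+4+4+4)\,\text{B} \times 500\,\text{MHz} = 256\,\text{GB/s}
\end{align*}
```

Drop to no AA and you're only touching a quarter of those samples per clock, which is why 4xAA is what it takes to saturate it.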

Oh, I can think of quite a few, smoke and dust being one of the first that comes to mind. You’ll still need alphablending, I’m not debating that, just not in the exorbitant amounts that you used to need to achieve certain effects.
How are smoke and dust an example of not needing alpha blending? I can see it as an example of not needing any special pixel shaders (maybe a soft particle one if clipping is a problem), but definitely not reducing the need for or replacing blending operations. Yeah, it's not unusual to have a pixel shader that affects the alpha, but that's not the same as actually doing the blend op. I know it's not 100% impossible to essentially perform blending within a pixel shader, but there are vanishingly few cases where it can actually be a win right now.

Devs have had almost 1.5 years to implement it properly. I’m not saying it’s never going to happen, but a lot of signs point to it being far from as easy, cheap and hassle-free as Microsoft made it out to be.
Haven't there been a few dozen threads already saying as much? MS' claims of free and 5% extra load have been from the hardware side of the picture (i.e. no framebuffer bandwidth issues, a few tris processed more times than normal). It's definitely not like that at all on the software side, and that's not going to change.

Whether it's worth it or not for a given project is up in the air, but tiling will surely be used by someone at some point. And I'm sure Microsoft's own studios will have people with bullwhips at their backs while Ballmer is yelling, "Use tiling! Enable 4xAA! The power of Gates compels you! Your name is Toby!" I doubt many multiplatform titles will use it.
 
I'm so impressed with the results that I think all GPUs should come with edram on a separate chip like the 360 GPU. Sure, the cost will be high initially, but eventually these chips will become cheap, and because they are separate from the GPU the same design can be used across many products. We need a chip with enough edram to support 1600×1200 with 8xAA at a cost of around $40. If one of the GPU vendors adds this while the other does not, it will be earthshaking.
I agree that edram makes a lot of sense, and it seems that most of the talk of this sort tends to be console warriorism rather than it being technically a bad choice. It's kind of ironic because it's basically addressing the same problem that Cell's LS is: bandwidth limitations. The faster your processor gets, the easier it is to become bandwidth constrained, and then it doesn't matter how many fp operations a second you can do, you're stuck. You can say that 10MB or 256k isn't enough, or that the dual-die design is a kludge (it is), but that'll be addressed by future iterations on smaller processes.

Barring a miracle bandwidth breakthrough I'd be really surprised if this doesn't become the dominant design, especially with the trend to mobile devices that don't have the space for big graphics cards. AMD seems like they grokked the potential pretty quickly and bought up ATI; we'll see what Intel ends up doing. There is definitely a problem of diminishing returns in just adding more processor cores; a CPU/GPU hybrid with access to a bank of universal-shader-type fp units and a big chunk of edram to ease the pain of shared memory would be much more compelling than an extra 2 cores. You'll always be able to make something better if money isn't an object, but most people don't want to pay $600 for a video card.
 
Whether it's worth it or not for a given project is up in the air, but tiling will surely be used by someone at some point. And I'm sure Microsoft's own studios will have people with bullwhips at their backs while Ballmer is yelling, "Use tiling! Enable 4xAA! The power of Gates compels you! Your name is Toby!" I doubt many multiplatform titles will use it.

I am sure you already knew this ( :p ), but to clarify for others: quite a few titles are using tiling. VT3 is 1080p with 2x MSAA, which is a framebuffer in excess of 30MB (unless they went with a smaller Z-buffer?). There are a lot of titles now using tiling, some even having been confirmed here (e.g. Splinter Cell 4). Viva Pinata and Forza Motorsport do as well.
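
Rough numbers behind that, assuming 32-bit colour plus 32-bit Z per sample and no compression (my assumptions):

```latex
\begin{align*}
1920 \times 1080 \times 2 \times 8\,\text{B} &\approx 33.2\,\text{MB} \quad \text{(1080p, 2xAA: needs 4 tiles)}\\
1280 \times 720 \times 2 \times 8\,\text{B} &\approx 14.7\,\text{MB} \quad \text{(720p, 2xAA: needs 2 tiles)}\\
1280 \times 720 \times 1 \times 8\,\text{B} &\approx 7.4\,\text{MB} \quad \text{(720p, no AA: fits in the 10MB)}
\end{align*}
```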

@ Mintmaster: I agree. From a design perspective it allows for significant cost reduction (128-bit bus versus 256-bit bus, lower frequency/more common memory, the eDRAM module will reduce in cost faster than memory modules, MB complexity) and has some system benefits (UMA). If tiling weren't an issue at 720p (e.g. with about 15MB of eDRAM for 2xMSAA) I think there would be few complaints, as designing an engine to tile efficiently is one of the few complaints (likewise, using a render target still has to go over the buses and doesn't benefit from the eDRAM in the same way as if the entire memory pool were this wide). The alternatives (TBDR, higher frequency memory / larger bus) are risky or expensive. And just tossing faster/hotter modules onboard doesn't necessarily resolve all the memory bottlenecks. On the PC side an RSX-like GPU has nearly 50GB/s of bandwidth for itself with no CPU interference. So even with tiling issues, an eDRAM-based design caters to the high-bandwidth clients, producing performance in excess of what a system with 22GB/s of shared memory would otherwise be able to attain.
 
If next-gen consoles (Xbox 720 / PS4) are only going to have 256-bit external buses, then EDRAM of some kind, in some configuration, will certainly be necessary.

sorry if that sounds like me being "captain obvious", but... :)
 
How are smoke and dust an example of not needing alpha blending?
Did I say that?
You’ll still need alphablending, I’m not debating that, just not in the exorbitant amounts that you used to need to achieve certain effects
If you have a dust or smoke volume, you could use pixelshaders to get the right ratio of blending depending on the thickness of the cloud at a given point, instead of just alphablending a bunch of billboard particles with 10x overdraw to get a similar but cruder-looking effect.
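
A sketch of the idea, written as plain C++ standing in for the pixel shader (the names and the single density parameter are made up; the point is deriving one blend factor from optical thickness instead of stacking many blended billboards):

```cpp
#include <cmath>

// Beer-Lambert style opacity: one pass through the volume yields a
// single alpha per pixel, instead of accumulating dozens of blended
// billboard layers.
//   thickness: distance the view ray travels inside the smoke volume
//   density:   artist-tuned extinction coefficient
float CloudAlpha(float thickness, float density) {
    return 1.0f - std::exp(-density * thickness);
}

// Final colour would then be a single blend per pixel:
// out = lerp(sceneColour, smokeColour, CloudAlpha(t, d));
```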
 
Whether it's worth it or not for a given project is up in the air, but tiling will surely be used by someone at some point. And I'm sure Microsoft's own studios will have people with bullwhips at their backs while Ballmer is yelling, "Use tiling! Enable 4xAA! The power of Gates compels you! Your name is Toby!" I doubt many multiplatform titles will use it.

Tiling is being used by quite a few games already. As far as 4x AA and a decent framerate go, Forza 2, according to the developers, is 4x AA @ 60fps.
 
Well, I was kind of thinking about the not-so-obvious cases like 1st party titles (as they're really not worth mentioning as a case of the use of predicated tiling actually *spreading*). It's also rather trivial for something like a tennis game, which has pretty much no work to do graphically, to use tiling, as even issuing every polygon 5 times still means less than a million tris per frame moved down the pipe. I don't see multiplatform devs, for instance, even considering it; rather, they'll try to find as many ways as possible to avoid it.

Did I say that?
I was referring to the fact that you responded to Mintmaster's thing about shaders and blending operations not being able to replace one another by bringing up dust/smoke as a counterexample.

If you have a dust or smoke volume, you could use pixelshaders to get the right ratio of blending depending on the thickness of the cloud at a given point, instead of just alphablending a bunch of billboard particles with 10x overdraw to get a similar but cruder-looking effect.
Meh. Volume rendering of smoke or some other flow is a nice exercise, but it's hardly reasonable for use in a real game on either console. Maybe DX10 games, but even that would be pretty limited use, I think. I've also yet to see an application thereof (in realtime, that is) that is actually more visually convincing in the sense that, yes it doesn't look so discretized like particles, but it also looks too "smooth" and "simple" to pass for convincing.
 
Well, I was kind of thinking about the not-so-obvious cases like 1st party titles (as they're really not worth mentioning as a case of the use of predicated tiling actually *spreading*).

On the topic of 3rd party titles using tiling:

GRAW 2 seems to be 720p with 2xAA
http://media.teamxbox.com/games/ss/1619/full-res/1173411641.jpg

NHL 07 as well?
http://media.teamxbox.com/games/ss/1516/full-res/1155929242.jpg

Rainbow 6 Vegas
http://media.teamxbox.com/games/ss/1439/full-res/1171392118.jpg

Saint's Row
http://media.teamxbox.com/games/ss/1217/full-res/1152772109.jpg

So it seems EA and Ubisoft, the two biggest 3rd party publishers in the game, are both using tiling in their custom engines, so that's a good sign. Also, the majority of games that were really built for next-gen come out this fall: Assassin's Creed, Brothers in Arms, etc.
 
GRAW2 seems to have 4xAA to me. R6 Vegas doesn't have AA (probably a bullshot here). But many other 3rd party games do have AA.

Brothers in Arms is running on UE3.
 