Xbox 360 eDRAM. Where are the results?

Yeah, but some of those games are running (or will be running) on PC GPUs with far less FB bandwidth just as well as they are on the 360. So I can't see that they are relying heavily on FB bandwidth for this type of stuff.

GRAW on PC was a shadow of the console version.

Anyways, like I said, it's probably going to be first parties that extract the true potential from the GPU, and right now 80% of the games are UE3.0 based, so there's your answer. It wasn't built with tiled rendering in mind, and can't use Xenos to its fullest.
 
GRAW on PC was a shadow of the console version.

Would definitely have to disagree with that on a graphical basis. Gameplay-wise they both bored me, but graphically they were pretty evenly matched IMO. Although I always thought GRAW was massively overrated as a graphical showcase anyway.

Anyways, like I said, it's probably going to be first parties that extract the true potential from the GPU, and right now 80% of the games are UE3.0 based, so there's your answer. It wasn't built with tiled rendering in mind, and can't use Xenos to its fullest.

Yes, but tiling only affects the res and AA you use. Many games use less AA and/or a lower res to fit the FB onto one tile. They are still rendering to eDRAM, though, so should still see the bandwidth benefits.

The thing is, there's a lot of discussion on this site, much of it involving devs, about the relative merits of each system and which will end up having better graphics. Generally the conclusion seems to be PS3 if they manage to leverage Cell well, but not by much. But I have never heard any of the devs talk about the amazing things they will be able to do with 256GB/sec of FB bandwidth later in the 360's life that simply won't be possible on the PS3.

Perhaps a more clear-cut question would be what devs could do on the 360 that they couldn't do on a G80-powered PC. That way it removes the Cell element and shifts all the focus onto the 360's one single advantage: its massive FB bandwidth.
 
Yes, but tiling only affects the res and AA you use. Many games use less AA and/or a lower res to fit the FB onto one tile. They are still rendering to eDRAM, though, so should still see the bandwidth benefits.

I think something like 97% of the games render at 720p, and most use 2xAA as well. But I'll just shut up now and let some devs chime in.
 
Yes, but tiling only affects the res and AA you use. Many games use less AA and/or a lower res to fit the FB onto one tile. They are still rendering to eDRAM, though, so should still see the bandwidth benefits.

The thing is, there's a lot of discussion on this site, much of it involving devs, about the relative merits of each system and which will end up having better graphics. Generally the conclusion seems to be PS3 if they manage to leverage Cell well, but not by much. But I have never heard any of the devs talk about the amazing things they will be able to do with 256GB/sec of FB bandwidth later in the 360's life that simply won't be possible on the PS3.

As far as I know, the eDRAM is there for AA, z-buffering, and alpha blending. What kind of amazing things do you expect?

Perhaps a more clear-cut question would be what devs could do on the 360 that they couldn't do on a G80-powered PC. That way it removes the Cell element and shifts all the focus onto the 360's one single advantage: its massive FB bandwidth.

I'm sure the G80 outmatches Xenos in most scenarios and shifting focus onto single elements in consoles is plainly wrong.
 
To be honest, I really don't see the big point of the eDRAM. It's supposed to save money by cutting the framebuffer bandwidth out of the total bandwidth budget, but in return it adds the cost of the extra die and the special packaging that it requires.
True, it makes alpha blending cheaper, but massive blending effects are on the way out in favour of pixel shader effects.
The power of Cell can be used to do very detailed bounding-box culling checks, cutting out a lot of overdraw. I guess Xenon could be used for something similar, so overdraw isn't a big deal either.
I have yet to see a 360 game with satisfactory AA, so for now the cheaper AA thing doesn't seem to be true.
 
"Not this shit again and oblivion on 360 only has 2xAA indoors"

Bullcrap, what are you smoking? It most certainly has AA both indoors and outdoors.
 
I'm sure the G80 outmatches Xenos in most scenarios and shifting focus onto single elements in consoles is plainly wrong.

I'm sure it does too, but the situation still remains that framebuffer bandwidth is one of the key drivers of graphics performance in any system (I think), and Xenos has ~3x more of it than G80. So surely there are things devs think they can do with that which simply wouldn't be possible on G80 (and to a greater extent PS3). Could we ever see a game on 360, for example, that it's not possible to run on an R600 or a pair of G80s without at least downgrading some aspect of its graphics to account for the lower FB bandwidth?

I just want to know if that's really the case, or if there are caveats around the huge bandwidth figure. By caveats I'm thinking in terms of: is it simply overkill and not needed unless you partake in wasteful programming? Does the lack of certain types of compression that exist in other GPUs have a major effect? Etc.
 
...raw bandwidth of 256GB/sec as you all know...
...
My understanding is that framebuffer bandwidth is one of the most important and influential elements of a GPU, so I just don't get why we aren't seeing the difference between 360 and PS3.

I think you missed the (in my opinion) important point that the 256GB/s is not really between the GPU and eDRAM (at least I believe that's what you're thinking - sorry if I'm wrong ;)). That bandwidth would be only 32GB/s. The 256GB/s is the bandwidth between the ROPs (or whatever they're called - I'm no GPU guy) and the eDRAM.
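The 32GB/s figure quoted above can be sanity-checked with back-of-the-envelope arithmetic. This is just a sketch under common assumptions (not stated in the post): Xenos exports 8 pixels per clock to the daughter die at 500MHz, each pixel carrying 4 bytes of colour plus 4 bytes of depth/stencil.

```python
# Hypothetical sanity check of the ~32 GB/s GPU -> eDRAM daughter-die link.
# Assumptions: 8 pixels/clock exported at 500 MHz, 8 bytes per pixel
# (4 colour + 4 depth/stencil), before any AA expansion on the daughter die.
pixels_per_clock = 8
bytes_per_pixel = 4 + 4          # colour + z/stencil
clock_hz = 500_000_000

gpu_to_edram_bw = pixels_per_clock * bytes_per_pixel * clock_hz
print(gpu_to_edram_bw / 1e9)     # 32.0 (GB/s)
```

If the assumed figures are right, the link bandwidth falls out exactly, which is why the 32GB/s and 256GB/s numbers describe two different buses rather than contradicting each other.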
 
It should be noted that the 256GB/s of bandwidth between those ROPs and the eDRAM isn't compressed. AFAIK, and somebody correct me if I'm wrong, ATI and nVidia have both invested quite a bit in compression between the ROPs and main memory for all PC GPUs. 256GB/s can essentially be completely used, and as such it isn't "overkill"--because it's exactly the amount necessary. 8 bytes per sample (4 bytes color/4 bytes z), 4 samples (4xAA), times 2 (read-modify-write), with 8 ROPs, running at 500MHz - 8*4*2*8*500000000 = 256GB/s. But, again, there isn't any form of compression there.
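The peak-bandwidth arithmetic in that post can be reproduced directly; this is just the quoted calculation written out, not new information:

```python
# Reproducing the quoted eDRAM peak-bandwidth arithmetic:
# 8 bytes/sample (4 colour + 4 z) x 4 samples (4xAA) x 2 (read-modify-write)
# x 8 ROPs x 500 MHz = 256 GB/s
bytes_per_sample = 4 + 4   # colour + z
samples = 4                # 4xAA
rmw = 2                    # read-modify-write doubles traffic
rops = 8
clock_hz = 500_000_000

edram_bw = bytes_per_sample * samples * rmw * rops * clock_hz
print(edram_bw / 1e9)      # 256.0 (GB/s)
```

Each factor maps to one term in the post's formula, which supports the point that 256GB/s is "exactly the amount necessary" for the ROPs to run flat out at 4xAA, rather than headroom.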

And ultimately, the result of eDRAM is that the system isn't so completely, horribly bottlenecked by a SINGLE 128-BIT BUS as it would have been without the eDRAM. The eDRAM was a design choice, just like using two pools of memory in the PS3 was a design choice....without eDRAM, the system would look quite different from the way it is now. eDRAM wasn't ever going to make "brand new effects" possible... just make some of them feasible, assuming they were worked into the engine at the proper time, and the tradeoffs were acceptable. Because there's always some kind of tradeoff.

This topic... I think it would be far better to ask if the tessellation unit, memexport, or the features where Xenos is performant (i.e., pixel shader branching relative to pre-G8x/R5xx GPUs, and vertex texture fetch/filtering) have had any practical use in a game out or in development YET, and if so/if not, whether and where it can go in the future.

(EDIT: and to clarify, I mean if it makes some things possible that weren't possible/feasible performance-wise, due to a different method, and so on, along those lines)

But that's just me... and I'm as crazy as they come, right?
 
You're right, Sousuke. The eDRAM BW can't be confused with total BW. It's specific to certain operations like resolving AA. Where a G80 has, say, 80 GB/s of BW, that's usable for textures and models and multiple buffer rendering/render to texture, etc. Xenos has 22 GB/s shared with the CPU for much of that. The big difference is framebuffer operations don't need any RAM BW, so that's a large saving.
 
I have yet to see a 360 game with satisfactory AA, so for now the cheaper AA thing doesn't seem to be true.

Are you going only by screenshots/vids, or do you own an Xbox 360 (with an HDTV)? Because I am quite satisfied (and surprised) with the level of AA in many 360 games while playing them in real life, regardless of their method of attaining it.

If it's the latter, I certainly respect your opinion to be different than mine, if it's the former, not a fair comparison IMO. :)
 
I'm not sure how anyone can pick out an aspect of a game and say "Thank god the eDRAM gave me that". I'm rather impressed by many titles available for the 360; whether or not it's because of the uber bandwidth-saving, super-fast, on-die eDRAM I don't know.
 
It should be noted that the 256GB/s of bandwidth between those ROPs and the eDRAM isn't compressed. AFAIK, and somebody correct me if I'm wrong, ATI and nVidia have both invested quite a bit in compression between the ROPs and main memory for all PC GPUs. 256GB/s can essentially be completely used, and as such it isn't "overkill"--because it's exactly the amount necessary. 8 bytes per sample (4 bytes color/4 bytes z), 4 samples (4xAA), times 2 (read-modify-write), with 8 ROPs, running at 500MHz - 8*4*2*8*500000000 = 256GB/s. But, again, there isn't any form of compression there.

And ultimately, the result of eDRAM is that the system isn't so completely, horribly bottlenecked by a SINGLE 128-BIT BUS as it would have been without the eDRAM. The eDRAM was a design choice, just like using two pools of memory in the PS3 was a design choice....without eDRAM, the system would look quite different from the way it is now. eDRAM wasn't ever going to make "brand new effects" possible... just make some of them feasible, assuming they were worked into the engine at the proper time, and the tradeoffs were acceptable. Because there's always some kind of tradeoff.

This topic... I think it would be far better to ask if the tessellation unit, memexport, or the features where Xenos is performant (i.e., pixel shader branching relative to pre-G8x/R5xx GPUs, and vertex texture fetch/filtering) have had any practical use in a game out or in development YET, and if so/if not, whether and where it can go in the future.

(EDIT: and to clarify, I mean if it makes some things possible that weren't possible/feasible performance-wise, due to a different method, and so on, along those lines)

But that's just me... and I'm as crazy as they come, right?

It probably needs a new thread, but I would definitely be interested in the effectiveness of the 360's tessellation unit, especially in comparison to the G80 GS. I understand a dedicated TU like that in Xenos is a feature we may not see in PC GPUs until DX11 :oops:
 
I think when 1st-party games start moving on from UE3 to proprietary engines is when you'll see much better results. UE3 is sort of a blessing and a curse at the same time :devilish:

It really was not built for Xenos, yet it dominates the 1st party lineup.

I'm not so sure. I think you could argue that a lot of the best graphics on 360 are UE3: Gears, Mass Effect, etc. In fact, there don't seem to be many 360 games with outstanding GFX that AREN'T UE3.

If anything, my feeling is that UE3 runs great on 360, most likely better than it does on PS3 from what limited info we have so far, and that it has in a way single-handedly "rescued" the 360 from suspicions of hardware inferiority.

And do you know what I find interesting about UE3? It doesn't use the eDRAM's tiling. Many look at that as a bad thing, but I look at it and wonder if not messing with tiling is in fact part of the reason it looks so good. Remember, UE3 doesn't use AA, so it's not messing with tiling; the whole FB fits into the eDRAM, which probably greatly simplifies things.

I kinda think the 360 GPU is awesome, apart from the eDRAM. I mean, it does have 48 shaders and it is unified. The eDRAM is a necessary part of the design because they don't have a 256-bit bus, but I don't know whether on balance it's a plus or a minus. I think it might kinda even out. We don't know what Xenos would do with a traditional 256-bit bus; maybe it would do better.
 
The AA on the 360 is all over the place due to the developers, but that's not the 360's fault. Picking one small cog in the system like the eDRAM and claiming it should be visible in the end result, like a black-and-white difference, seems naive.

What I want to know is, now that the 360 has made the 1080p plunge, can the eDRAM be used at all, or is 10MB too small for anything that might help?
 
To be honest, I really don't see the big point of the eDRAM. It's supposed to save money by cutting the framebuffer bandwidth out of the total bandwidth budget, but in return it adds the cost of the extra die and the special packaging that it requires.

I have yet to see a 360 game with satisfactory AA, so for now the cheaper AA thing doesn't seem to be true.

This is true; in a way they traded an extra bus and perhaps a simpler layout design for some eDRAM. But the plus of that is that silicon eventually drops in price more than any other component; silicon in a system eventually becomes literally dirt cheap. So even if it was a "draw" performance-wise, it might be a good move long term.

Also, I don't know if you're entirely aware, but 256-bit buses apparently prevent die shrinks below a certain point, and that's why you do not see them in either console.

I think eDRAM is going to be the absolute console standard by next gen, and I dare anybody to disagree. Why? Because process shrinks will afford enough to render at 720p with AA without tiling, while eDRAM is going to be the only way to get the huge amounts of BW that will be needed. If console makers were shy of 256-bit buses this gen for cost reasons, imagine how out of the question the 384- and 512-bit buses that G80/R600 use will be.

As for your AA statement, I hate to be controversial, and the 360's AA has taken plenty of flak, but I really feel like 360 games have a cleaner look to them in general than PS3 games so far. Motorstorm and Lair come to mind. Whether the AA is "satisfactory" or not.
 
What I want to know is, now that the 360 has made the 1080p plunge, can the eDRAM be used at all, or is 10MB too small for anything that might help?

The 10MB is "suitable" for 1080p; it's simply that the number of tiles required climbs.
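As a rough sketch of why the tile count climbs: assuming 4 bytes of colour plus 4 bytes of depth per sample, with MSAA multiplying the stored samples (a simplification; real eDRAM tile layouts differ in the details), the tile count for a given mode works out like this:

```python
import math

# Hedged estimate of how many predicated-tiling passes a framebuffer
# needs to fit in 10 MB of eDRAM. Assumes 8 bytes per sample
# (4 colour + 4 z) and that MSAA multiplies stored samples.
EDRAM_BYTES = 10 * 1024 * 1024

def tiles_needed(width, height, msaa=1):
    bytes_per_pixel = (4 + 4) * msaa        # colour + z, per stored sample
    fb_bytes = width * height * bytes_per_pixel
    return math.ceil(fb_bytes / EDRAM_BYTES)

print(tiles_needed(1280, 720, 1))   # 1 - 720p no AA fits in one tile
print(tiles_needed(1280, 720, 2))   # 2
print(tiles_needed(1280, 720, 4))   # 3
print(tiles_needed(1920, 1080, 4))  # 7 - why 1080p with AA gets expensive
```

This matches the thread's framing: 720p without AA fits in a single tile (so no tiling at all), while 1080p with AA pushes the geometry through many more tiling passes.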
 