The pros and cons of eDRAM/ESRAM in next-gen

XB1 has a main RAM / video RAM memory architecture very similar to the PS2's. On PS2, developers used the VRAM to store the framebuffer and, above all, the textures of their games.

When I see Forza and Titanfall with their average textures/shaders, it reminds me of the first PS2 games, when textures had to be stored in the small 4MB VRAM to get optimal performance.

X360 is another matter entirely because its eDRAM can only be used as a framebuffer; it's not technically a main RAM/video RAM architecture. I see the 10MB of the X360 as a GPU cache specialized for framebuffer operations, but the X360 has real unified memory.

Where there is a problem with XB1 is the ratio of VRAM to main RAM. Where the ratio was 1/8 for the PS2, it is 1/256 for XB1. When PS2 devs had trouble fully storing the textures for each level in the 4MB VRAM, we can imagine how difficult it must be on XB1 to store the many textures/shaders.
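A rough sketch of that arithmetic (assuming the usual 32MB main RAM / 4MB VRAM for the PS2 and 8GB DDR3 / 32MB ESRAM for the XB1):

# Back-of-the-envelope ratio of fast video memory to main memory.
# Assumed figures: PS2 = 4MB GS eDRAM vs 32MB main RAM,
#                  XB1 = 32MB ESRAM vs 8GB (8192MB) DDR3.
ps2_ratio = 4 / 32        # 0.125      -> 1/8
xb1_ratio = 32 / 8192     # 0.00390625 -> 1/256
print(ps2_ratio, xb1_ratio)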

32MB is not enough for next-gen games, even with an old 3D engine. Developers have to limit their games to double buffering (screen tearing), use low-quality textures (Titanfall/Forza), or just accept bad performance if they can't store the textures/high-bandwidth assets in the fast VRAM (COD Ghosts, Battlefield).
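To put the 32MB in perspective, here's a back-of-the-envelope footprint for a hypothetical deferred G-buffer; the layout and formats are illustrative assumptions, not any shipping engine's setup:

# Hypothetical render target footprint (illustrative layout only).
def target_mb(width, height, bytes_per_pixel):
    return width * height * bytes_per_pixel / (1024 * 1024)

for w, h in [(1280, 720), (1920, 1080)]:
    # four RGBA8 colour targets plus a 32-bit depth buffer
    footprint = 4 * target_mb(w, h, 4) + target_mb(w, h, 4)
    print((w, h), round(footprint, 1), "MB")   # ~17.6MB at 720p, ~39.6MB at 1080p

At 720p that fits in the 32MB with room to spare; at 1080p it already doesn't, before you put anything else in there.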

I think after the XB1 no hardware will ever again try the main RAM/VRAM memory architecture. With current techniques (deferred rendering, temporal AA, triple buffering), developers increasingly need full bandwidth across the whole memory: a unified memory.

The ESRAM is only 1.6x faster than the main memory on the Xbox 360, while the eDRAM in the Graphics Synthesizer is 40x faster than the connection to the Emotion Engine.
 
The ESRAM is only 1.6x faster than the main memory on the Xbox 360, while the eDRAM in the Graphics Synthesizer is 40x faster than the connection to the Emotion Engine.

You mean Xbox One? And using the updated figures (204GB/s), it's 3x faster.
 
Why would a CPU limitation result in lower rendering resolutions? The games are running at 60fps; the question is why the Xbox One is struggling to get above 720p at that framerate. This includes MGSV, CoD: Ghosts, Battlefield 4 and Titanfall.

Why? For the thousandth time: because the X1 was not designed to be the most powerful console of this generation. Now, can we move on?
 
Why? For the thousandth time: because the X1 was not designed to be the most powerful console of this generation. Now, can we move on?

That is not an answer. Why does Xbox One's resolution deficit resemble the theoretical power differential in 30fps games but grow much larger in 60fps games? Nothing we know about the different systems provides an obvious answer to that technical question.
 
What's interesting to me is that whatever bottlenecks exist in the Xbox One, they appear to get worse the higher the framerate being targeted. 30fps games seem able to reach 900p or even match the 1080p of the PS4 versions, but 60fps games have shown the largest resolution disparity. You'd expect the relative performance to be largely fixed, allowing devs to trade visual fidelity for framerate freely, but in the case of the Xbox One, if you want your game to run at 60fps it struggles to get above 720p. That is, frankly, alarming.
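For reference, here are the raw numbers behind that "theoretical power differential", using the commonly quoted GPU specs (768 shaders @ 853MHz vs 1152 @ 800MHz; treat the exact figures as assumptions):

# Theoretical shader throughput vs resolution pixel counts.
xb1_tflops = 768 * 853e6 * 2 / 1e12     # ~1.31 TFLOPS
ps4_tflops = 1152 * 800e6 * 2 / 1e12    # ~1.84 TFLOPS
print(round(ps4_tflops / xb1_tflops, 2))       # ~1.41x compute advantage
print(round(1920 * 1080 / (1600 * 900), 2))    # 1.44x the pixels, 900p -> 1080p
print(round(1920 * 1080 / (1280 * 720), 2))    # 2.25x the pixels, 720p -> 1080p

The 900p-vs-1080p gap roughly tracks the compute ratio; the 720p-vs-1080p gap in the 60fps titles clearly doesn't.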

Hmm well, I guess Ghosts and BF4, and now MGS GZ, are 720/60-ish on One? One of those is also only 900p on the competitor.

Ghosts ran better on One too, at least until that patch. There's also Forza 5, still the only retail exclusive actually locked at 1080p/60. And quite a few 1080/60 sports games on One, etc.

Interesting thought, but I think we'd need more data.
 
That is not an answer. Why does Xbox One's resolution deficit resemble the theoretical power differential in 30fps games but grow much larger in 60fps games? Nothing we know about the different systems provides an obvious answer to that technical question.
Well, 60fps pretty much doubles the main memory bandwidth requirements, and you are stuck with 68GB/s on the Xbox One. So that might be the bottleneck where things start getting strained.

Well, let's break it down. On the Xbox One the memory system is split into two sections. You have your render targets and depth buffers or shadow maps filling up your 32MB of ESRAM.

Texture reads, front buffers, normal maps, CPU bandwidth demands, etc. all increase DDR3 bandwidth demands when a game is running at 60fps instead of 30fps.

Shader demands also double; perhaps that is where the bottleneck is for some of these developers.
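A rough way to see the bandwidth side of that, taking the 68GB/s DDR3 figure at face value and ignoring ESRAM traffic entirely:

# Per-frame DDR3 budget at 30fps vs 60fps, assuming the full 68GB/s is usable.
ddr3_gb_per_sec = 68.0
for fps in (30, 60):
    print(fps, "fps ->", round(ddr3_gb_per_sec / fps * 1024), "MB per frame")
# 30fps -> ~2321 MB per frame, 60fps -> ~1161 MB per frame: the budget halves.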
 
Perhaps pushing for 1080p could be seen as worthwhile on the PS4 to avoid the upscaling IQ hit (even if it comes with large frame drops, a la Tomb Raider), but as the Xbox One is never going to get there, you might as well prioritise stability and stick with a resolution that will play nicely with 720p outputs.

Or perhaps higher resolutions with the same textures and geometry simply shift more of the bottleneck onto the ROPs?

If it's anything ROP-related then the bandwidth of the ESRAM will help, but having to juggle things around in such a tiny amount of memory probably won't.
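For what it's worth, a rough fill-rate sketch using the commonly reported ROP counts and clocks (16 ROPs @ 853MHz vs 32 @ 800MHz; assumed figures, and overdraw, blending and MSAA are ignored):

# Peak ROP fill rate vs raw final pixel output.
xb1_fill_gpix = 16 * 853e6 / 1e9    # ~13.6 Gpixels/s
ps4_fill_gpix = 32 * 800e6 / 1e9    # 25.6 Gpixels/s
print(round(ps4_fill_gpix / xb1_fill_gpix, 2))   # ~1.88x ROP throughput
print(1920 * 1080 * 60 / 1e6)                    # ~124 Mpixels/s of final 1080p60 output
# ROPs only become the limiter once overdraw, blending and MSAA multiply that
# output figure many times over.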
 
The ESRAM is only 1.6x faster than the main memory on the Xbox 360, while the eDRAM in the Graphics Synthesizer is 40x faster than the connection to the Emotion Engine.

???
The ESRAM is quite fast. The bandwidth figure for the GDDR3 RAM in the Xbox 360 is only the absolute peak in the absolute best case; you just can't compare the headline numbers of DRAM and SRAM.
The worst case for the ESRAM in the Xbox One is 109GB/s (normally it should be around 150-160GB/s).
The worst case for the GDDR3 RAM in the 360 is way lower. Even so, that 109GB/s covers only 32MB of memory, which works out to roughly 1.7GB/s for each ESRAM block (512KB blocks) in worst-case scenarios. The DRAM modules have a much lower bandwidth per MB. Just break it down to the modules and you will see that the bandwidth of DRAM is much lower.
The same applies to the eDRAM in the 360: the eDRAM is fast, but that only holds in optimal cases. Even then, developers didn't have full bandwidth/control over the 360's eDRAM.
Then there is the damn latency you get with DRAM. DRAM works well with textures and big reads/writes (because those are mostly large operations that are pure reads or pure writes, not mixed), but everything small costs you cycles, which means you lose bandwidth (even more so on GDDR memory).

The ESRAM is not a solution for everything, but you can do some really fast stuff with it. You just need to optimize your code, because you only have 32MB of it. 64MB would be better.
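Redoing that per-block arithmetic, taking the 109GB/s worst-case figure and the 512KB block size above at face value:

# Bandwidth per ESRAM block, using the figures quoted above.
esram_mb = 32
block_kb = 512
blocks = esram_mb * 1024 // block_kb             # 64 blocks
worst_case_gb_per_sec = 109.0
print(blocks, round(worst_case_gb_per_sec / blocks, 2), "GB/s per block")   # 64, ~1.7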
 
Simply put: 720p @ 60 fps is heavier on both the CPU and GPU than 1080p @ 30 fps. If there are performance differences between the platforms, the differences will be more visible in the 60 fps game, because it taxes the system more.

It's way too early to judge the capability to reach 1080p @ 60 fps on either platform. Launch titles are always developed using beta hardware, beta software and on a very tight schedule (as console launch dates are fixed). This is especially true for hardware that has an unconventional memory system like the Xbox One (or PS3). It takes time to get the most out of the hardware. Compare the launch titles of last generation (PGR3, CoD2) to the recent games (The Last of Us). The difference is huge.
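A simplistic sketch of the scaling (real workloads obviously don't split this cleanly between per-pixel and per-frame work):

# Raw pixel rates are close, but anything done once per frame runs twice as often.
print(round(1280 * 720 * 60 / 1e6, 1))    # ~55.3 Mpixels/s at 720p60
print(round(1920 * 1080 * 30 / 1e6, 1))   # ~62.2 Mpixels/s at 1080p30
print(60 / 30)                            # 2.0x the per-frame CPU, geometry and draw-call cost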
 
That is not an answer. Why does Xbox One's resolution deficit resemble the theoretical power differential in 30fps games but grow much larger in 60fps games? Nothing we know about the different systems provides an obvious answer to that technical question.

What does that even mean? Do you not understand that at 60fps everything is essentially doubled compared to 30fps, whereas going from 720p to 1080p only stresses the last few stages of the rendering pipeline? For example, you don't process double the vertices going from 720p to 1080p. Do you have any technical skills in real-time graphics at all?
 
Having a single unified pool of memory to manage was the most requested feature from developers according to Sony.

So my question is: what would be easier to program for?

Xbox One (32MB ESRAM + 8GB DDR3), or a "hypothetical Xbox One" with a separate GPU die and memory bus, i.e. 4GB for the CPU + 4GB for the GPU?

Is juggling the small 32MB ESRAM alongside a larger unified pool (8GB) easier or more difficult than completely separate CPU and GPU memory pools of 4GB each?
 
Simply put: 720p @ 60 fps is heavier on both the CPU and GPU than 1080p @ 30 fps. If there are performance differences between the platforms, the differences will be more visible in the 60 fps game, because it taxes the system more.

It's way too early to judge the capability to reach 1080p @ 60 fps on either platform. Launch titles are always developed using beta hardware, beta software and on a very tight schedule (as console launch dates are fixed). This is especially true for hardware that has an unconventional memory system like the Xbox One (or PS3). It takes time to get the most out of the hardware. Compare the launch titles of last generation (PGR3, CoD2) to the recent games (The Last of Us). The difference is huge.

I have to agree. There are undoubtedly challenges in making the memory architecture of the XB1 work, but then again I think back to the problems of working out how best to use the SPEs on the PS3. One of the keys to unlocking the performance of those things was not to run code on the SPEs that required them to talk to main RAM much, but instead to let them stay within their scratchpad, IIRC.

Splinter Cell Double Agent on PS3 is a great example of an early title that failed to tune the software to the target (Digital Foundry analysis, July 2007). That damn game made me worry about whether I'd be stuck with crud UE3 ports for the lifetime of the PS3; it got better, but UE3 never really got on with the PS3 IMO. I'm sure once middleware vendors and MS tools teams have time to really tune their code and compilers, things will improve.

Of course the question is by how much; I don't think any of the current titles, or those released before the Christmas season, will really reveal much. If the 3rd-party titles with a year's worth of final hardware to play with can't close most of that gulf, then it's going to be very bad indeed for MS. Of course things get better over the whole lifetime, but the gains tail off as diminishing returns kick in, and the PS4 and XB1 are otherwise too close for there to be much more to tap once ESRAM access patterns are optimised. Unlike Cell, which was/is a grab bag of oddness that took forever to understand but, once it was, could crank out some outsized performances.
 
Hmm well, I guess Ghosts and BF4, and now MGS GZ, are 720/60-ish on One? One of those is also only 900p on the competitor.

Ghosts ran better on One too, at least until that patch. There's also Forza 5, still the only retail exclusive actually locked at 1080p/60. And quite a few 1080/60 sports games on One, etc.

Interesting thought, but I think we'd need more data.
BF4 also ran at ~10fps higher on PS4, a bigger performance delta than CoD.
 
BF4 also ran at ~10fps higher on PS4, a bigger performance delta than CoD.

"10fps higher" could be a very large or very small delta depending on the two framerates being compared. Going from 90fps->100fps is a very small gap, but going from 10fps->20fps is huge. This is why it's always better to use frame times in milliseconds.
 
???
The ESRAM is quite fast. The bandwidth figure for the GDDR3 RAM in the Xbox 360 is only the absolute peak in the absolute best case; you just can't compare the headline numbers of DRAM and SRAM.
The worst case for the ESRAM in the Xbox One is 109GB/s (normally it should be around 150-160GB/s).
The worst case for the GDDR3 RAM in the 360 is way lower. Even so, that 109GB/s covers only 32MB of memory, which works out to roughly 1.7GB/s for each ESRAM block (512KB blocks) in worst-case scenarios. The DRAM modules have a much lower bandwidth per MB. Just break it down to the modules and you will see that the bandwidth of DRAM is much lower.
The same applies to the eDRAM in the 360: the eDRAM is fast, but that only holds in optimal cases. Even then, developers didn't have full bandwidth/control over the 360's eDRAM.
Then there is the damn latency you get with DRAM. DRAM works well with textures and big reads/writes (because those are mostly large operations that are pure reads or pure writes, not mixed), but everything small costs you cycles, which means you lose bandwidth (even more so on GDDR memory).

The ESRAM is not a solution for everything, but you can do some really fast stuff with it. You just need to optimize your code, because you only have 32MB of it. 64MB would be better.

Sorry I meant the Xbox One
 
"10fps higher" could be a very large or very small delta depending on the two framerates being compared. Going from 90fps->100fps is a very small gap, but going from 10fps->20fps is huge. This is why it's always better to use frame times in milliseconds.

Regurgitating 3-month-old data, fun.
 
Regurgitating 3-month-old data, fun.

Do you know who MJP is, to be talking to him in that manner?

The fact of the matter is, BF4, when trying to hold 60fps, keeps a consistent 10fps lead on PS4 in almost every scenario, growing to a 15fps lead in higher-stress scenes, and at a 40% higher resolution. This we know.
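To put those leads in the frame-time terms suggested above (the baseline framerates here are just illustrative, not measurements):

# How big a "10fps lead" is in frame time depends entirely on the baseline.
def frame_time_ms(fps):
    return 1000.0 / fps

for low in (50, 30, 20):
    high = low + 10
    print(low, "->", high, "fps =",
          round(frame_time_ms(low) - frame_time_ms(high), 1), "ms per frame")
# 50->60: 3.3ms, 30->40: 8.3ms, 20->30: 16.7ms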
 
Do you know who MJP is, to be talking to him in that manner?

The fact of the matter is, BF4, when trying to hold 60fps, keeps a consistent 10fps lead on PS4 in almost every scenario, growing to a 15fps lead in higher-stress scenes, and at a 40% higher resolution. This we know.

No, I don't know who MJP is, and read the context: it wasn't even directed at MJP.
Again, you are repeating something that we knew 3 months ago; I simply see no value in that.
 
No, I don't know who MJP is, and read the context: it wasn't even directed at MJP.
Again, you are repeating something that we knew 3 months ago; I simply see no value in that.

Your frustrated sniping in this thread is the only thing that has no value in this discussion.
 