The pros and cons of eDRAM/ESRAM in next-gen

It would have been pretty bad, but that's not how things turned out, which suggests the ESRAM isn't that underutilized.
 
eSRAM Accelerates Memory Ops, Devs Scaling Down Res For Better Performance

http://gamingbolt.com/xbox-ones-esr...caling-down-resolution-for-better-performance

“On Xbox 360 eDRAM usage was required for every rendering pass, so it was crucial to fit there. That’s also why many games had strange subHD resolutions like 1280×672,” he explains.
“With Xbox One it’s a bit different. eSRAM works like an optional additional cache. It just accelerates selected memory operations and there isn’t some hard limit like on Xbox 360. Of course people are scaling down resolution, because the more stuff fits in eSRAM the better performance. We manually manage eSRAM during every rendering pass, moving data between eSRAM and DRAM from time to time. Every time trying to fully utilize available eSRAM for bandwidth heavy operations.”
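To make the "manually manage eSRAM during every rendering pass" part concrete, here's a toy C++ sketch of the placement idea: the most bandwidth-heavy surfaces claim the 32MB pool first and everything else spills to DRAM. All of the names, sizes and weights are made up for illustration; none of this is a real XDK API.

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct RenderTarget {
        std::string name;
        uint64_t    bytes;     // surface footprint
        double      bwWeight;  // rough guess at how bandwidth-hungry its passes are
        bool        inEsram = false;
    };

    int main() {
        const uint64_t kEsramBudget = 32ull * 1024 * 1024;  // the 32 MB pool
        const uint64_t surf32 = 1920ull * 1080 * 4;         // one 32-bit 1080p surface

        std::vector<RenderTarget> targets = {
            { "light_accum",     surf32, 2.0 },             // read+write heavy
            { "depth",           surf32, 1.5 },
            { "gbuffer_albedo",  surf32, 1.0 },
            { "gbuffer_normals", surf32, 1.0 },
            { "gbuffer_misc",    surf32, 0.8 },
            { "shadow_map",      2048ull * 2048 * 4, 0.6 },
        };

        // Greedy placement: the most bandwidth-heavy surfaces claim ESRAM first,
        // the rest fall back to DRAM. A real engine would redo this per pass and
        // move data back and forth, as the quote describes.
        std::sort(targets.begin(), targets.end(),
                  [](const RenderTarget& a, const RenderTarget& b) {
                      return a.bwWeight > b.bwWeight;
                  });

        uint64_t used = 0;
        for (auto& rt : targets) {
            if (used + rt.bytes <= kEsramBudget) { rt.inEsram = true; used += rt.bytes; }
        }

        for (const auto& rt : targets)
            std::printf("%-16s %6.2f MB -> %s\n", rt.name.c_str(),
                        rt.bytes / (1024.0 * 1024.0), rt.inEsram ? "ESRAM" : "DRAM");
        std::printf("ESRAM used: %.2f / 32 MB\n", used / (1024.0 * 1024.0));
    }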
 
It provides bandwidth which, if it came from main RAM instead, would make the ESRAM redundant. As ever, the pros/cons argument comes down to the cost of implementing a high-bandwidth system.
 
To me the ESRAM always sounded like MS's cost-effective choice to gain a bandwidth lead over the competition, under the assumption that the competition wouldn't be able to implement a different cost-effective solution like GDDR5 and would instead stick with GDDR3. But eventually the competition managed to implement cheaply what MS had originally expected to be expensive to implement.
 
To me the ESRAM always sounded like MS's cost-effective choice to gain a bandwidth lead over the competition, under the assumption that the competition wouldn't be able to implement a different cost-effective solution like GDDR5 and would instead stick with GDDR3. But eventually the competition managed to implement cheaply what MS had originally expected to be expensive to implement.

I thought it was because they decided early on that their setup required 8GB of RAM, and at that time it seemed improbable that this could be achieved with anything other than DDR3. This was not unrealistic, given that Sony could only confirm for certain that 8GB of GDDR5 was feasible in early 2013.

So given those parameters, improving on DDR3's bandwidth had to come from something like ESRAM, or they'd have had to split memory pools, which is not a popular choice these days (unless on PC perhaps ;) ).
 
I thought it was because they decided early on that their setup required 8GB of RAM, and at that time it seemed improbable that this could be achieved with anything other than DDR3. This was not unrealistic, given that Sony could only confirm for certain that 8GB of GDDR5 was feasible in early 2013.

So given those parameters, improving on DDR3's bandwidth had to come from something like ESRAM, or they'd have had to split memory pools, which is not a popular choice these days (unless on PC perhaps ;) ).

I don't think we are saying different things here ;)
 
To me the ESRAM always sounded like MS's cost-effective choice to gain a bandwidth lead over the competition, under the assumption that the competition wouldn't be able to implement a different cost-effective solution like GDDR5 and would instead stick with GDDR3. But eventually the competition managed to implement cheaply what MS had originally expected to be expensive to implement.

Absolutely agree with that. The ESRAM was never a solution that gives better performance than GDDR5 memory; it simply improves the DDR3 memory performance. Perhaps power consumption/heat were also of concern, and that may have been a factor too, but I also think Microsoft was pretty shocked when Sony went with GDDR5 memory.

Unfortunately, we are now in a world where proprietary hardware is despised by the majority of developers. You can't blame them really: they are tasked with developing for multiple platforms, and learning to properly manage a small pool of memory that yields no benefit on the PC and PS4 seems like more of a hassle than it's worth. You have to realize that developers' resources are not unlimited, and when it comes down to it, do they really want to spend lots of manpower on getting the X1 port optimized to reach 1080p when, with no extra effort, they can simply scale down to 900p or even 792p to get the performance they are looking for?

Digital Foundry is great, but I don't think too many developers care as much about staying locked on the target framerate. The fact that the X1 version dips more often than the PS4 version isn't going to be a major concern unless it's seriously affecting gameplay. DF is a good tool, but let's be real here: they recently acted as if a 1-frame drop from 60 to 59 was actually worth mentioning.
 
Going to have to ask the obvious question here, but in an imaginary scenario where ESRAM was 256MB in size and produced the same bandwidth, would all of you still see ESRAM as a con, or a quick patch to DDR3?

The setup that MS is looking for, according to these conference notes, is to use DDR3 as a fast cache and ESRAM for main processing.

So you are basically going from HDD -> RAM -> ESRAM, with a ~5GB buffer sitting in DDR3 to feed the ESRAM. Streaming from HDD -> RAM happens with no noticeable hiccups, and RAM -> ESRAM streams very quickly.
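As a purely illustrative toy model of that HDD -> RAM -> ESRAM flow (sizes scaled way down so it actually runs, with plain heap buffers standing in for the real pools):

    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>
    #include <vector>

    int main() {
        const size_t kChunk   = 4 * 1024 * 1024;     // 4 MB streaming chunk
        const size_t kStaging = 16 * kChunk;         // stand-in for the ~5GB DDR3 staging pool
        const size_t kEsram   = 32 * 1024 * 1024;    // the 32 MB working set

        std::vector<uint8_t> dram(kStaging);         // "DDR3" staging buffer
        std::vector<uint8_t> esram(kEsram);          // "ESRAM" working buffer

        // Stage 1: HDD -> DRAM. Just synthesize data here; on the console this
        // would be asynchronous file I/O trickling in well ahead of use.
        for (size_t i = 0; i < dram.size(); ++i) dram[i] = static_cast<uint8_t>(i);

        // Stage 2: DRAM -> ESRAM. Each frame, copy only what the next pass needs
        // into the small fast pool, work on it, then reuse the pool.
        size_t processed = 0;
        for (size_t offset = 0; offset < dram.size(); offset += kEsram) {
            const size_t n = std::min(kEsram, dram.size() - offset);
            std::memcpy(esram.data(), dram.data() + offset, n);
            processed += n;                          // "render" with the hot data here
        }
        std::printf("streamed %zu MB through a %zu MB working set\n",
                    processed / (1024 * 1024), kEsram / (1024 * 1024));
    }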

If space wasn't the constraint on ESRAM, it wouldn't be as hamstrung when dealing with deferred/tiled-deferred renderers running multiple 32-bit G-Buffers @ 1080p resolution.

Aside from that one fact, I can't see any other problems with the ESRAM setup, nor do I suspect developers find it any more 'complex' than anything else they've worked with previously.

The complexity is getting a renderer to fit in the constraints of 32MB @ 1080p with multiple dynamic lights at a frame rate of 30 to 60fps. From what I've read here, Forward+ could be a solution to such a problem - however, that isn't a solution for games that started development 2-3 years ago leveraging technologies rooted in completely different restrictions and requirements.
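For some back-of-the-envelope context on why 32MB is tight for that kind of renderer: each 32-bit 1080p target is about 7.9MB, so a handful of G-buffer planes plus a light accumulation buffer overruns the pool before MSAA even enters the picture. The target counts below are just an assumption about a "typical" deferred setup:

    #include <cstdio>

    int main() {
        const double kMiB     = 1024.0 * 1024.0;
        const double onePlane = 1920.0 * 1080.0 * 4.0;  // one 32-bit (4 byte) 1080p target

        std::printf("one 1080p 32-bit target: %5.2f MB\n", onePlane / kMiB);

        // e.g. albedo + normals + material/misc + depth + light accumulation
        for (int targets = 3; targets <= 6; ++targets)
            std::printf("%d such targets        : %5.2f MB %s\n",
                        targets, targets * onePlane / kMiB,
                        targets * onePlane > 32.0 * kMiB ? "(over 32 MB)" : "(fits)");
    }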
 
Going to have to ask the obvious question here, but in an imaginary scenario where ESRAM was 256MB in size and produced the same bandwidth, would all of you still see ESRAM as a con, or a quick patch to DDR3?

The setup that MS is looking for, according to these conference notes, is to use DDR3 as a fast cache and ESRAM for main processing.

So you are basically going from HDD -> RAM -> ESRAM, with a ~5GB buffer sitting in DDR3 to feed the ESRAM. Streaming from HDD -> RAM happens with no noticeable hiccups, and RAM -> ESRAM streams very quickly.

If space wasn't the constraint on ESRAM, it wouldn't be as hamstrung when dealing with deferred/tiled-deferred renderers running multiple 32-bit G-Buffers @ 1080p resolution.

Aside from that one fact, I can't see any other problems with the ESRAM setup, nor do I suspect developers find it any more 'complex' than anything else they've worked with previously.

The complexity is getting a renderer to fit in the constraints of 32MB @ 1080p with multiple dynamic lights at a frame rate of 30 to 60fps. From what I've read here, Forward+ could be a solution to such a problem - however, that isn't a solution for games that started development 2-3 years ago leveraging technologies rooted in completely different restrictions and requirements.

Well what if the competing design was 64 GB of GDDR5? What if there were 64 Jaguar cores? What if everything was 8x the actual spec?
 
Going to have to ask the obvious question here, but in an imaginary scenario where ESRAM was 256MB in size and produced the same bandwidth, would all of you still see ESRAM as a con, or a quick patch to DDR3?

The setup that MS is looking for, according to these conference notes, is to use DDR3 as a fast cache and ESRAM for main processing.

So you are basically going from HDD -> RAM -> ESRAM, with a ~5GB buffer sitting in DDR3 to feed the ESRAM. Streaming from HDD -> RAM happens with no noticeable hiccups, and RAM -> ESRAM streams very quickly.

I like the shift in thinking here. Since it's all on an SoC, it would seem to be a nice drop-in solution for an AMD APU. If it had API support through, say, Mantle, something like that solution could be a nice differentiator.
 
Well what if the competing design was 64 GB of GDDR5? What if there were 64 Jaguar cores? What if everything was 8x the actual spec?

Fair enough, it's not, and it's a waste of time to discuss what-ifs since it isn't.

The discussion is supposed to be about the pros and cons of ESRAM, and all I've read are _cons_, for the most part all related to one con: the fact that it's not large enough. Granted, many feel that it outweighs the list of pros; that's cool, and I'm not as knowledgeable as you guys to debate that point, but it's been discussed and beaten to death.

We've read a lot of 'it's their solution to pairing with DDR3' but even that particular point may not be accurate according to BKillian's post.

Here are some pros, and all I'm asking is whether they've been dismissed too easily in the face of the size issue?

- 1024-bit wide bus with read/write in the same cycle, for a max throughput of 2048 bits per clock cycle (rough math on what that works out to is sketched after this list)
- can perform latency-sensitive tasks
- built into the SoC, so cooling can be kept central to the rest of the console, with no need to passively or actively cool external RAM
- guaranteed bandwidth (simultaneous read/write on 7 of 8 cycles, with 1 bubble cycle) versus CPU/GPU bandwidth contention and the unknown amount of RAM left available by the OS
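For what it's worth, here's the rough math the first item implies. The only number not in the list is the clock; I'm assuming the ESRAM runs at the GPU clock (853MHz in the shipped console), so treat the totals as arithmetic from the listed figures rather than official throughput numbers:

    #include <cstdio>

    int main() {
        const double clockHz     = 853e6;                 // assumed ESRAM/GPU clock
        const double busBits     = 1024.0;                // bus width, one direction
        const double bytesPerCyc = 2.0 * busBits / 8.0;   // read + write same cycle = 256 B
        const double dutyCycle   = 7.0 / 8.0;             // 1 bubble cycle in 8

        const double peakGBs      = bytesPerCyc * clockHz / 1e9;
        const double sustainedGBs = peakGBs * dutyCycle;

        std::printf("theoretical peak (read+write): %.0f GB/s\n", peakGBs);      // ~218 GB/s
        std::printf("with the 7/8 duty cycle      : %.0f GB/s\n", sustainedGBs); // ~191 GB/s
    }

Real-world sustained figures would obviously come in lower than that, since no workload keeps the bus busy in both directions every cycle.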
 
The debates about a larger anything in the new consoles will always be constrained by what it does to the overall cost of the platform. A larger ESRAM would alleviate a number of challenges with the current setup, but where the 'real world' bandwidths for the ESRAM/DDR3 blend and GDDR5 are close, the extra faff of managing data in and out of ESRAM would still seem to make it the less desirable solution.
 
The discussion is supposed to be about the pros and cons of ESRAM, and all I've read are _cons_, for the most part all related to one con: the fact that it's not large enough. Granted, many feel that it outweighs the list of pros; that's cool, and I'm not as knowledgeable as you guys to debate that point, but it's been discussed and beaten to death.

Embedded memory isn't a con. It was a cost-effective solution to improve graphics performance given MS's choice of unified memory, just like on the 360.

The discussion of pros/cons revolves around what the competition is using, and IMO the entire discussion is moot. MS ultimately chose the embedded memory paired with a 12 CU GPU because they had to include Kinect, IR support and HDMI-in support in the retail box and keep the price south of ~$500.
 
Do the costs really matter in the beginning? The lifetime of the current design is probably just 2 years anyway. Then you do the redesign for the mass-market version, and then stacked memory and DDR4 are there to be used. If they used ESRAM just for cost reasons, and only for this small window, it makes no sense to me.
 
Console redesigns lower the cost by going to a smaller process and simplifying the design. Changing the design in a way that adds complexity is not how it works.
 