The pros and cons of eDRAM/ESRAM in next-gen

Shifty Geezer

I feel this topic needs its own discussion rather than being buried in the next-gen hardware prediction thread. eDRAM has proven invaluable for PS2 and XB360, but does it still have a future? As I see it, the move towards deferred rendering makes eDRAM less useful, so it may be time to drop it. But then I've no idea how the cost of plenty of eDRAM compares to the cost of plenty of main RAM bandwidth.

So, is there a future for eDRAM? How much would be ideal? Should we go with TBDR on a smaller tile - and if we go TBDR, couldn't we just use SRAM? Are any other platforms using eDRAM, such as mobiles?
 
I guess we might consider for a moment that the 360S' current die size measurements suggest MS is still on 65/55nm eDRAM, not yet on 40nm, much less 28nm (whenever the foundries come up with a viable implementation - eDRAM lags conventional CMOS transistor tech by quite a while). What I mean is that there's still a possibility of seeing a reasonable amount of eDRAM @ 28nm, assuming there's enough time for it to become viable for manufacturing en masse. 2013-14 would be much safer, since we haven't heard much news since 40nm eDRAM first became available in mid-2010. If we take 10MB@90nm as a baseline budget for physical size, there's some napkin math on that below.

Just throwing that out there, but does that make sense? I'm trying to weigh up the timing, and how much MS decided to dedicate to another chip, to see if the tech will be feasible for mitigating the enormous demands of deferred rendering. Accounting for MSAA will make things worse, as will mandating 1080p, but of course the pool would be large enough that framebuffer configs could be rather flexible compared to the paltry 10MB - mixed resolutions and multisample counts, that is, to avoid tiling.
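For what it's worth, the napkin math on that 10MB@90nm baseline, assuming ideal area scaling with the square of the linear node shrink - real eDRAM processes typically won't hit these numbers, so treat them as upper bounds:

```python
# Ideal-scaling estimate only; real eDRAM cell density lags these figures.
baseline_mb, baseline_node = 10, 90          # Xenos daughter die budget @ 90nm
for node in (65, 55, 40, 28):
    scale = (baseline_node / node) ** 2      # area scales with the linear shrink squared
    print(f"{node}nm: ~{baseline_mb * scale:.0f} MB in the same silicon area")
# -> roughly 19 MB @ 65nm, 27 MB @ 55nm, 51 MB @ 40nm, 103 MB @ 28nm
```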
 
I'm trying to decide what'd count as a 'reasonable amount' of eDRAM. If we're only rendering one buffer at a time, you could get away with something like 24 MB, right? (1080p at 64 bits per pixel of colour + 32 bits of Z ~ 24 MB - quick check below.) That's sans AA samples. We'd push that up to 32 MB with 2x, and it still wouldn't be playing nice with deferred rendering. Which is where TBDR makes more sense (and with those four letters I summon Lazy8, right? ;)). But then why not use SRAM for that?

What would people consider a reasonable amount of eDRAM?
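A quick sanity check on those numbers - just simple arithmetic, and the MSAA rows assume both colour and Z store full-precision per-sample data, so the 2x figure lands above 32 MB under that assumption:

```python
def framebuffer_mb(width, height, colour_bits=64, depth_bits=32, samples=1):
    """Single colour target plus depth, every sample stored at full precision."""
    bits = width * height * (colour_bits + depth_bits) * samples
    return bits / 8 / (1024 ** 2)

for samples in (1, 2, 4):
    print(f"1080p, FP16 colour + 32-bit Z, {samples}x: "
          f"{framebuffer_mb(1920, 1080, samples=samples):.1f} MB")
# -> ~23.7 MB, ~47.5 MB, ~94.9 MB
```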
 
64 MB would be a reasonable amount of eDRAM. How much would that cost transistor-wise? The PS2 showed you could do quite a bit with eDRAM, and that had 4 MB, albeit with 32 MB of main memory. It was pretty epic for its time. That machine did some amazing things over its lifespan, what with awesome guys like Fafalada pushing its boundaries. I feel the 360's eDRAM should have been completely open for devs to do what they want with, but it has still proven its worth. Why 64 MB? Because it's a nice number - perhaps a bit large, but nice nonetheless. It will help alleviate bandwidth problems all the more, and if open it will allow devs to exploit the hardware all the better. 64 MB eDRAM and what, like 1 to 2 GB of DDR3?
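Back-of-the-envelope on the transistor cost, assuming roughly one transistor per bit for a 1T1C eDRAM cell and ignoring sense amps, redundancy and any logic sharing the die:

```python
capacity_mb = 64
bits = capacity_mb * 1024 * 1024 * 8      # ~537 million bits
print(f"~{bits / 1e6:.0f}M cells -> roughly {bits / 1e9:.2f} billion transistors for the array alone")
```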
 
According to NEC, its 28nm eDRAM fab process should be ready (i.e. it's being used for the Wii U's eDRAM), so I'd say 80MB of eDRAM would be a good number for a next-gen GPU. Later revisions of the CPU and GPU would allow single-package integration like the current Xbox Valhalla APU.

With that said, if a PowerVR GPU makes it into a next-gen console then there's no need for eDRAM on the GPU side. On the CPU side I could see eDRAM being used for a large L3 cache.
 
eDRAM (or tile rendering) has a massive advantage when it comes to transparencies, of course. That's always been the PS3's biggest Achilles' heel IMO; relatively paltry bandwidth coupled with no eDRAM gives bad performance when lots of transparent pixels need to go on screen.

There's been talk about games using the Cell's SPUs to accelerate such tasks and so on, and I don't know if any games actually implemented that or not; you'd need to read back the game's Z-buffer to properly composite the scene too, and it sounds rather messy on the whole. Better to just let the GPU have the resources and oomph it needs for the task.

I'd love to see eDRAM in next-gen consoles. It's the only genuine way they could possibly try to keep up with PC graphics. With all higher-end PC GPUs (and many mid-range cards too) having 100+ GB/s of bandwidth, a cost-sensitive console would be hopelessly outmatched from the outset if just paired with a plain DRAM bus, maybe even one as narrow as 128 bits.

I wouldn't want to live through another half-decade-plus of that. It would really hold back gaming and graphics development. eDRAM is pretty much a necessity, IMO. (Spoken without any real qualifications other than general gaming enthusiasm, of course... ;))
 
Transparency does seem to be the major advantage of eDRAM. The PS2 rocked with overdraw; the PS3 is lacking by comparison. But I'm not sure transparency is massively valuable going forwards. Things like smoke can be rendered better with volumetric calculations - you don't need full volumetric dynamics to create patchy fog and mushroom clouds, just a simple occlusion term based on a trace through the volume. Particles could also be rendered as a post effect in deferred rendering: you have a buffer of particle points, draw a small sprite for each one, then composite. So the issues of transparency draw can often be overcome with processing power and algorithms.
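Something like this is all I mean by the trace - a toy sketch, where the density function, step count and extinction coefficient are all made up for illustration rather than taken from any particular engine:

```python
import numpy as np

def fog_density(p):
    """Toy procedural density: a soft blob of fog centred at the origin."""
    return np.exp(-np.dot(p, p) * 2.0)

def trace_transmittance(origin, direction, steps=32, step_len=0.1, sigma=1.5):
    """March along the ray and return how much background light survives (0..1)."""
    transmittance = 1.0
    p = np.array(origin, dtype=float)
    d = np.array(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(steps):
        transmittance *= np.exp(-sigma * fog_density(p) * step_len)  # Beer-Lambert attenuation
        p += d * step_len
    return transmittance

# One cheap trace per pixel stands in for many alpha-blended overdraw passes.
print(trace_transmittance(origin=(0.0, 0.0, -2.0), direction=(0.0, 0.0, 1.0)))
```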
 
So the issues of transparency draw can often be overcome with processing power and algorithms.
Meh. Anything and everything can be overcome with processing power and algorithms.

Nitpicking of course. :)
 
Meh. Anything and everything can be overcome with processing power and algorithms.
Yes, but now we'll actually have the processing power and algorithms (maybe) to use alternative methods, whereas previously we needed overdraw. Hence the value of eDRAM is diminished - is it better to spend those transistors on eDRAM for massive overdraw, or on more flexible processing power?
 
I would assume eDRAM is good if it's more flexible than the Xbox 360 implementation and if the amount of it is sufficient. I would also assume the rendering resolution for games pushing the boundaries would be either 720p or 960x1080 (easy to scale to both 1080p and 720p). I wonder how much eDRAM would be needed at 960x1080 to get past those "resolve to main memory" and "you have to tile uncomfortably" issues - some rough numbers below.

What about eDRAM/large caches to help the CPU and GPU talk and divide work? It would be pretty useful to pass significantly sized buffers between computing units without going through main memory.
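Rough numbers for the 960x1080 question, assuming a fairly typical-looking deferred G-buffer of four 32-bit render targets plus 32-bit depth/stencil (real engines vary a lot, so this is only a ballpark):

```python
def gbuffer_mb(width, height, rt_count=4, rt_bits=32, depth_bits=32):
    bits = width * height * (rt_count * rt_bits + depth_bits)
    return bits / 8 / (1024 ** 2)

for w, h in ((1280, 720), (960, 1080), (1920, 1080)):
    print(f"{w}x{h}: ~{gbuffer_mb(w, h):.1f} MB")
# -> ~17.6 MB at 720p, ~19.8 MB at 960x1080, ~39.6 MB at 1080p
```

So under that assumed layout, 960x1080 fits in something like a 24-32 MB pool without tiling or resolving, while full 1080p wants noticeably more.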
 
Yes, but now we'll actually have the processing power and algorithms (maybe) to use alternative methods, whereas previously we needed overdraw. Hence the value of eDRAM is diminished - is it better to spend those transistors on eDRAM for massive overdraw, or on more flexible processing power?

If you don't have the eDRAM then some cost will have to go towards beefing up the main memory bus, so you may not end up with any more to spend on transistors elsewhere. Maybe you'd actually end up with less - I dunno. Would a 256-bit, eight-memory-chip 360S have been cheaper than the 360S as it currently stands with its small lump of on-package eDRAM?

I've always thought the real benefit of eDRAM was saving money elsewhere - not being lumbered with much higher costs throughout the lifetime of the platform in order to get an acceptable amount of memory bandwidth.

The PS3 is the only system this generation without embedded video memory, and the Xbox was the only one last generation. Both struggle(d) to be cost-competitive with their rivals, and from the things I've read here the wider memory buses (two in the case of the PS3) and greater number of physical memory chips have hindered rather than helped.
 
Guys, I may be missing the point here, but why is everyone sizing eDRAM according to 1080p??
These consoles have got to last another 8 years or so; we're not talking about what's acceptable for 2012...

.. Surely 1080p stereoscopic 3D will be the minimum they're aiming for!?
-Won't 2560 x 1600 or 2560 x 1440 be the standard in 18 months or so??

I know you have to consider the extra processing power for AA, AF, ray tracing, tessellation and other post-processing effects, but the graphics will be based off high-end modern graphics cards, which can already do the above at 1080p/60fps comfortably... and even at higher res without so much eye candy.

If you take pretty much what we're expecting - a 7850-class GPU - and factor in that, without the Windows/API overhead, consoles can extract 5 times the performance of that over time, don't you think the eDRAM/RAM/bandwidth has to reflect the high end of what the chip is capable of?

Otherwise developers are going to be constrained on resolution 12 months into 'next gen'.
We're going to need at least 4GB of RAM and a shedload of bandwidth, whether that's eDRAM or otherwise.

Thought I'd throw that one in the mix..
 
.. Surely 1080p stereoscopic 3D will be the minimum they're aiming for!?

I think you're going to be disappointed!

-Won't 2560 x 1600 or 2560 x 1440 be the standard in 18 months or so??

1920 x 1080 was a standard in 2005. Have you seen how the COD games sell running at about a third of that resolution? And I'm certainly not knocking those games - nice to see 60fps show its face from time to time. :)
 
Guys, I may be missing the point here, but why is everyone sizing eDRAM according to 1080p??
These consoles have got to last another 8 years or so; we're not talking about what's acceptable for 2012...

.. Surely 1080p stereoscopic 3D will be the minimum they're aiming for!?
-Won't 2560 x 1600 or 2560 x 1440 be the standard in 18 months or so??

You always have devs who decide to push either more pixels or fancier pixels. There are plenty of games running at super-low resolutions today, like 600p. Upping the resolution would basically say "we will push more pixels that are less computationally intensive".

So far console history is full of dropping resolution and framerate to achieve fancier pixels. I don't see any reason to expect anything different (relatively speaking) from next gen. Probably simpler games will run at 1080p or even 3D 1080p. Games that really push the envelope will take a hit in resolution to favour fancier graphics. Scaling from 960x1080 to 1080p will look good.

As for 3D, I expect there will be fancy post-processing algorithms (paired with 2D rendering designed for them to begin with) and selective true 3D rendering to lower the penalty of 3D considerably. It would be plain stupid to render everything twice at full resolution :=) Perhaps these algorithms will also use multiple frames to achieve a nice 3D conversion.

Going from 600p to 1080p alone would require (1920*1080)/(1024*600) = ~3.4 times more computing power without adding any new processing (quick check below). Double that for naive 3D... And then think about what kind of HW there can be in the next boxes... Sub-1080p rendering will be a given, as will 30fps instead of everything at 60fps.

Oh, and there are plenty more tricks to add detail where it belongs and render the things that can get away with it at lower res. It might be impossible to tag any single resolution to games in the future, because different things will come in different resolutions/quality (i.e. selective enabling of anti-aliasing methods, different rendering resolutions for different things, post-processing stuff from multiple frames, and so on).
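The scaling factors quoted above, assuming cost scales linearly with pixel count and ignoring any fixed per-frame overhead:

```python
def pixel_ratio(w1, h1, w2, h2):
    return (w2 * h2) / (w1 * h1)

print(f"1024x600 -> 1920x1080: {pixel_ratio(1024, 600, 1920, 1080):.2f}x")          # ~3.4x
print(f"naive stereo doubles that: {2 * pixel_ratio(1024, 600, 1920, 1080):.2f}x")  # ~6.8x
print(f"960x1080 -> 1920x1080: {pixel_ratio(960, 1080, 1920, 1080):.2f}x")          # 2.0x
```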
 
Well I'm just basing my assumptions off what's possible now, and then factoring in the removal of API overhead plus new techniques.

A 360 game is generally native 720p, right? Admittedly that's without many expensive after-effects/goodies, but that's on 7-year-old hardware with a paltry 512MB of RAM and a puny in-order CPU.

But if you add in graphics chips that are capable of 1080p Eyefinity setups with many of today's after-effects, quad-core OoO CPUs plus cache, and then factor in the advantages shown above, surely it's not a stretch to think these machines will be aiming for higher resolutions over the next 8 years??

Of course nobody has mentioned the ANA scaler chip, what effect that has on upscaling, and how much a newer one of those would positively affect the bandwidth constraints?

I think they will have to plan for 8 years from launch, and that means including the overhead to pull those resolutions out of the bag without too much compromise.
After all, look what happened to all the theoretical power of the PS3 - hamstrung by RAM and bandwidth...

No, I think we need either a wide memory controller or a large enough slab of eDRAM to accommodate the above resolutions,
attached to 4GB of RAM... maybe I'm being too optimistic... :p
 
But if you add in graphics chips that are capable of 1080p Eyefinity setups with many of today's after-effects, quad-core OoO CPUs plus cache, and then factor in the advantages shown above, surely it's not a stretch to think these machines will be aiming for higher resolutions over the next 8 years??
You don't render games at what the hardware can do, but at what the display will show. 1080p-native TVs will be the target. There are no higher-resolution TVs until you get to 4K, which is ridiculous in both the number of pixels to render and the nicheness of the market. Ergo it doesn't make sense to target higher than 1080p. 720p with dodgy IQ is the norm now. 1080p with 8xMSAA will be excellent quality next gen, along with all the added eye candy, when you look at the displays these boxes will be rendering to. Those serious gamers wanting massive resolution will be on PC.

As for stereoscopic rendering, that rather depends on how it's implemented, but a pure two-viewpoint approach would be rendered more like 120 fps (60 fps camera update, two images rendered from two cameras, both pieced together for the display feed) than as a single 1920 x 2160 image. Hence a 1080p workspace would still be the target for eDRAM.
 
The final display resolution is actually fairly irrelevant to how much memory you're going to touch when rendering a scene, though. MSAA obviously scales up the total memory footprint and bandwidth independent of the resolved display resolution, for instance, and that decoupling will only continue. Different terms of the shader will probably be evaluated at varying frequencies much more commonly in the future.

But yeah, agreed that blending/"transparency" is an excessively poor justification for eDRAM. If you have a ton of overdraw, you're doing it wrong. If you need tons of blending, switch to binning/TBDR. Even in software (and even on the CPU!) it's going to end up faster and a far better use of power than any attempt to provide a high-bandwidth view of the entire framebuffer to every shader invocation.
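To make the "even in software" point concrete, here's a toy binning sketch - not any real TBDR hardware pipeline, just translucent screen-space rectangles invented for illustration. All the blending traffic stays in a tile-sized buffer, and each tile hits the framebuffer exactly once no matter how deep the overdraw gets:

```python
import numpy as np

TILE = 32             # tile edge in pixels; the tile buffer is TILE*TILE*16 bytes of RGBA floats
W, H = 1280, 720

def bin_quads(quads):
    """Map each translucent quad (x0, y0, x1, y1, rgba) to every tile it overlaps."""
    bins = {}
    for q in quads:
        x0, y0, x1, y1, _rgba = q
        for ty in range(y0 // TILE, (y1 - 1) // TILE + 1):
            for tx in range(x0 // TILE, (x1 - 1) // TILE + 1):
                bins.setdefault((tx, ty), []).append(q)
    return bins

def render(quads):
    frame = np.zeros((H, W, 4), dtype=np.float32)
    for (tx, ty), local_quads in bin_quads(quads).items():
        tile = np.zeros((TILE, TILE, 4), dtype=np.float32)        # this stays "on chip"
        ox, oy = tx * TILE, ty * TILE
        for x0, y0, x1, y1, rgba in local_quads:                  # all blending is tile-local
            lx0, ly0 = max(x0 - ox, 0), max(y0 - oy, 0)
            lx1, ly1 = min(x1 - ox, TILE), min(y1 - oy, TILE)
            a = rgba[3]
            tile[ly0:ly1, lx0:lx1] = (1 - a) * tile[ly0:ly1, lx0:lx1] + a * np.array(rgba)
        h, w = min(TILE, H - oy), min(TILE, W - ox)
        frame[oy:oy + h, ox:ox + w] = tile[:h, :w]                # one framebuffer write per tile
    return frame

# 100 layers of overdraw only generate local tile-buffer traffic, not framebuffer bandwidth.
frame = render([(100, 100, 400, 300, (1.0, 0.5, 0.2, 0.1))] * 100)
```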
 
How much bandwidth is enough, in light of PC GPUs that are mostly << 256GB/s, for example? A 256-bit bus would be necessary in the absence of eDRAM just to get into the mid-100GB/s range, assuming high-end GDDR5. I'd wonder about latency becoming some sort of issue for CPU ops in that case if we're talking about UMA as well. I mean, it's fine to look at the advantages and disadvantages of eDRAM for a particular usage scenario, but there are many more factors surrounding the choice, from the architectural design of the rest of the system to the associated costs.
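Rough bus arithmetic behind that comparison - the per-pin data rates here are just representative GDDR5 figures, and these are peak theoretical numbers rather than effective bandwidth:

```python
def bus_gb_per_s(bus_bits, data_rate_gbps):
    return bus_bits / 8 * data_rate_gbps          # bytes per transfer * per-pin Gbps

for bus in (128, 256):
    for rate in (4.0, 5.5, 6.0):                  # representative GDDR5 per-pin rates
        print(f"{bus}-bit @ {rate} Gbps/pin: {bus_gb_per_s(bus, rate):.0f} GB/s")
# -> a 128-bit bus tops out around 64-96 GB/s; 256-bit GDDR5 gets into the 128-192 GB/s range
```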
 
How much bandwidth is enough, in light of PC GPUs that are mostly << 256GB/s, for example? A 256-bit bus would be necessary in the absence of eDRAM just to get into the mid-100GB/s range, assuming high-end GDDR5. I'd wonder about latency becoming some sort of issue for CPU ops in that case if we're talking about UMA as well. I mean, it's fine to look at the advantages and disadvantages of eDRAM for a particular usage scenario, but there are many more factors surrounding the choice, from the architectural design of the rest of the system to the associated costs.


The historical trend for game consoles according to Rambus is 10x the bandwidth every 5 years.

http://www.realworldtech.com/includes/images/articles/Rambus-TBI-1.jpg
 