The pros and cons of eDRAM/ESRAM in next-gen

Forgive me if I don't put much weight on a RAM designer's projections about the needs of game rendering, as opposed to an educated guess based on current rendering tech requirements, which a game developer would know far more about. My question is asking what specifically is consuming bandwidth and what those figures are, not for numbers thrown at some imaginary curve to push an agenda for selling a memory technology.
 
Humus said:
I have no experience with PS2, but for Xbox360 eDRAM has been a disaster.
PS2's eDRAM had brilliant marketing execution, and it allowed pioneering stuff in commercial games that took five years to really become usable on PC. It was also far more flexible than other eDRAM designs used in consoles (except the PSP's).
On the flipside, it led to manufacturing disasters in the first year of the console (and IIRC cut the original VRAM in half), as well as some miserable rasterizer compromises that stigmatized the console (the mipmap selector, primarily).

In hindsight it's of course easy to argue you could use the same amount of resources to come up with something better (and still as unique). But it'd be damn hard to compete with the marketing impact of the design they used at the time.
 
I'm with Humus on this one. If we're talking about next-gen then deferred rendering should be an assumption, and not just a possibility. Hardware should be designed with that in mind, and anything that could potentially hamper the performance of laying out some fat G-Buffers shouldn't even be considered. As far as overdraw/blending goes I would much rather have more shader power and flexibility available so that I can implement better ways to handle techniques that might require a lot of overdraw with a naive implementation.
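For a sense of scale, here's a rough back-of-envelope; the G-buffer layout (four 32-bit targets plus 32-bit depth) is just an assumed example for illustration, not anyone's actual setup:

```python
# Rough G-buffer cost at 1080p, assuming a hypothetical four-target layout
# (e.g. albedo, normals, material params, motion vectors) at 32 bits each,
# plus a 32-bit depth/stencil buffer. All figures are illustrative.
WIDTH, HEIGHT = 1920, 1080
RENDER_TARGETS = 4        # assumed MRT count
BYTES_PER_TARGET = 4      # 32 bpp
DEPTH_BYTES = 4           # e.g. D24S8

pixels = WIDTH * HEIGHT
gbuffer_bytes = pixels * (RENDER_TARGETS * BYTES_PER_TARGET + DEPTH_BYTES)
print(f"G-buffer footprint: {gbuffer_bytes / 2**20:.1f} MiB")   # ~39.6 MiB

# Write it once, read it back once in the lighting pass, at 60 fps:
traffic = 2 * gbuffer_bytes * 60
print(f"Traffic at 60 fps: {traffic / 1e9:.1f} GB/s")   # ~5.0 GB/s, before MSAA, overdraw or multiple lights
```

The point being: a fat 1080p G-buffer is already a ~40 MiB working set, which dwarfs any eDRAM capacity we've seen in a console so far.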

Also, anything higher than 1920x1080 is beyond silly. There's lots of effective things we can do to improve image quality and aliasing, and none of them involve blindly pushing more pixels.
 
Also, anything higher than 1920x1080 is beyond silly. There's lots of effective things we can do to improve image quality and aliasing, and none of them involve blindly pushing more pixels.

Everyone likes super sampling ;p
 
I'm with Humus on this one. If we're talking about next-gen then deferred rendering should be an assumption, and not just a possibility. Hardware should be designed with that in mind, and anything that could potentially hamper the performance of laying out some fat G-Buffers shouldn't even be considered.
Except we can always go TBDR. I'm really starting to think that if anyone goes with eDRAM, they'll be going with TBDR too. It'd make a small amount of eDRAM very effective and solve the BW issues of main RAM, plus the latency issues that AlStrong is concerned with.
 
For a TBDR, picking the optimal tile size is a matter of LOTS of simulation and careful tradeoffs. If your tile size is O(1000) pixels, then you don't need eDRAM. If your tile size is O(10^5) pixels, then you probably need to do multipass geometry and frustum culling at draw-call granularity, and then you don't need a TBDR.
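A quick sketch of what those tile sizes mean for on-chip storage; the 20 bytes/pixel figure is an assumption (a fat deferred layout), purely for illustration:

```python
# On-chip storage per tile for a TBDR at the two tile-size regimes above.
# 20 bytes/pixel assumed: four 32-bit targets plus 32-bit depth.
BYTES_PER_PIXEL = 20

for tile_pixels in (1_000, 100_000):            # O(1000) vs O(10^5)
    tile_kib = tile_pixels * BYTES_PER_PIXEL / 1024
    print(f"{tile_pixels:>7} px tile -> {tile_kib:,.0f} KiB on chip")
# ~20 KiB sits happily in SRAM; ~2 MiB is already an eDRAM-sized block.
```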
 
Forgive me if I don't put much weight on a RAM designer's projections about the needs of game rendering, as opposed to an educated guess based on current rendering tech requirements, which a game developer would know far more about. My question is asking what specifically is consuming bandwidth and what those figures are, not for numbers thrown at some imaginary curve to push an agenda for selling a memory technology.



OK... let's see what John Carmack wants.

“One of the most important things I would say is a unified virtual 64-bit address space, across both the GPU and the CPU. Not a partition space, like the PS3. Also, a full 64-bit space with virtualization on the hardware units – that would be a large improvement. There aren’t any twitchy graphics features that I really want; we want lots of bandwidth, and lots of cores. There’s going to be a heterogeneous environment here, and it’s pretty obvious at this point that we will have some form of CPU cores and GPU cores. We were thinking that it might be like a pure play Intel Larrabee or something along that line, which would be interesting, but it seems clear at this point that we will have a combination of general purpose cores and GPU-oriented cores, which are getting flexible enough that you can do most of the things that you would do on a CPU.” – id Software co-founder John Carmack on next gen consoles.
 
For a TBDR, picking the optimal tile size is a matter of LOTS of simulation and careful tradeoffs. If your tile size is O(1000) pixels, then you don't need eDRAM. If your tile size is O(10^5) pixels, then you probably need to do multipass geometry and frustum culling at draw-call granularity, and then you don't need a TBDR.
I agree with that, and would expect a TBDR to use SRAM rather than eDRAM. But if we do see a console with eDRAM, I expect it to be a TBDR, maybe with enormous tiles ;) (which isn't really TBDR, I know). eDRAM working with full buffers would either need to be too large, or end up gimped.
 
I think there is a need for a big eDRAM that is exposed very flexibly. It can provide the bandwidth and size we need. Interposers seem like they won't be ready in time.
 
Can the resolution be scaled up from 1080p to 2560x1600, like consoles do now from 720p to 1080p?

If they do standardise on 1080p, it would solve a lot of issues. Marketing-wise though, they are going to have to differentiate the new consoles from the current 'HD' ones... which, according to the consumer, have been doing '1080p' since 2005... why should they upgrade for the same thing?

The way everyone is talking, eDRAM seems extremely expensive, so what about 128 MB of eDRAM?
Seven years ago the 360 had 10 MB, after all...
 
Well, yes. We've already gone through marketing with HD and also motion-control gaming. I wouldn't be surprised if the emphasis is on features, apps and the like instead, so focusing solely on 1080p would be a huge mistake. Again, you can always do prettier pixels and more advanced rendering features, and that works against pushing raw numbers of pixels. Remember, at the end of the day we're going to be saddled with a silicon budget that has limits in physical reality, so all this wishing for 1080p+ mandates and expectations is just going to hinder the level of graphics we see. Devs could have targeted 1080p this gen if they wanted to, but then we'd have PS2 graphics or worse (just look at how the HD ports of PS2 games fare at 1080p on PS3). That's hardly marketable for a generational shift.

The mass market isn't going to give two hoots what the real rendering resolution is. And this is bloody well off topic enough, so let's get back to what the topic is about: eDRAM technical pros/cons.
 
Can the resolution be scaled up from 1080p to 2560x1600, like consoles do now from 720p to 1080p?
You can upscale to any resolution. Any higher-resolution TVs will have upscaling anyway. I can't see any console supporting higher than 1080p output out of the box, except maybe for 4K movie playback, as that's a simple software feature.

If they do standardise on 1080p, it would solve a lot of issues. Marketing-wise though, they are going to have to differentiate the new consoles from the current 'HD' ones.
They'll produce much better pictures! We've only had one generation of hardware selling on a resolution. Other consoles have just been a progression without making a song and dance about what resolution they are rendering. And notably next gen will actually be rendered at clean 1080p, instead of murky, sometimes upscaled (especially some buffers like reflection buffers) 720p. There's plenty of room for visual improvement without needing to chase a niche resolution.
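For reference, the raw pixel counts behind those resolution jumps (simple arithmetic, nothing more):

```python
# Pixel counts for the resolutions being thrown around.
modes = {"720p": (1280, 720), "1080p": (1920, 1080), "2560x1600": (2560, 1600)}
base = 1280 * 720
for name, (w, h) in modes.items():
    print(f"{name:>9}: {w * h / 1e6:.2f} MPix ({w * h / base:.2f}x 720p)")
# 1080p is already 2.25x the pixels of 720p; 2560x1600 is ~4.4x.
```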

The way everyone is talking, eDRAM seems extremely expensive, so what about 128 MB of eDRAM?
Seven years ago the 360 had 10 MB, after all...
eDRAM doesn't appear to scale according to Moore's law, due to costs I think. The largest amount of eDRAM yet seen is 32 MB on POWER7. PS2 had 4 MB; six years later the XB360 had 10 MB, not enough to fit a full framebuffer. eDRAM is an expensive way to provide massive bandwidth. If it isn't a large enough amount of RAM to be useful, that BW gain is of little use, and if you don't need all that BW, it's not much use either. If next gen could get by on, say, 150 GB/s, and that were possible on a conventional bus at an affordable cost, eDRAM has no place. If the cost of RAM is going to cap it at, say, 50 GB/s, and you need more, then eDRAM offers a solution, but we face the issue of how much: if we need tens of MBs, it'll be too costly. And if we need both BW and RAM capacity, perhaps it's time to switch to TBDR.
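For concreteness, a quick back-of-envelope on framebuffer sizes (the bytes per pixel here are assumptions, just to show the orders of magnitude):

```python
# Rough colour + depth footprints at 720p/1080p, assuming 32-bit colour
# and 32-bit depth/stencil per sample (illustrative numbers only).
def fb_mib(width, height, msaa):
    return width * height * (4 + 4) * msaa / 2**20

for msaa in (1, 2, 4):
    print(f"720p  {msaa}xMSAA: {fb_mib(1280, 720, msaa):5.1f} MiB")
print(f"1080p 4xMSAA: {fb_mib(1920, 1080, 4):5.1f} MiB")
# 720p 1x (~7 MiB) just fits in 10 MB; 2x/4x MSAA (~14/28 MiB) forces tiling,
# and a 1080p 4xMSAA buffer (~63 MiB) is far beyond any plausible eDRAM budget.
```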

Compare PS2 to XB: PS2 needed the BW as it rendered 'shader effects' through multipass rendering, while XB could get away with far less BW by computing effects in shaders. The same 4 MB of eDRAM on XB would have whacked the price up considerably, to little gain except the beauty of PS2's particles. The expense of XB was due principally to being rushed and having poor contracts where the cost savings of process shrinks weren't passed on to MS, IIRC.
 
OK, then it seems like eDRAM has always been a balancing act between memory controller width and the eDRAM itself.
If you say it's as expensive as that (the 360 only got a 6 MB jump over six years from the PS2), it seems like you don't get the cost-reduction benefits from node shrinkage/repackaging with new iterations of the console... as you do with the usual components.

This is the same argument as with controller width, but seeing as PC graphics cards continually use wide controllers and go nowhere near eDRAM, perhaps that gives us the answer.
It seems to me that eDRAM was beneficial six years ago because wide memory controllers were harder to come by and more expensive; the HD 2900 XT jumped up to a rather insane 512-bit bus, then we went back down and standardised at 256-384 bit.

If the only reason we're not considering a wide bus is future cost reduction, it seems eDRAM is many times more expensive and just as hard to shrink.

So I'm going with a minimum 256-bit bus and 4 GB of unified GDDR5... not a split pool.
My wildcard is 3-4 GB of XDR2... with maybe a large 32 MB stash of L3 on the CPU.
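Rough numbers for what those bus widths buy you, assuming typical GDDR5 data rates (my figures, not anyone's spec):

```python
# Peak bandwidth for the bus widths under discussion, assuming GDDR5
# effective data rates of 4-6 Gbps per pin (an illustrative range).
for bus_bits in (128, 256, 384, 512):
    low, high = bus_bits * 4 / 8, bus_bits * 6 / 8
    print(f"{bus_bits:>3}-bit GDDR5: {low:.0f}-{high:.0f} GB/s")
# A 256-bit bus lands around 128-192 GB/s; 384-bit pushes toward ~288 GB/s.
```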
 
eDRAM doesn't appear to scale according to Moore's law, due to costs I think. The largest amount of eDRAM yet seen is 32 MB on POWER7. PS2 had 4 MB; six years later the XB360 had 10 MB, not enough to fit a full framebuffer. eDRAM is an expensive way to provide massive bandwidth. If it isn't a large enough amount of RAM to be useful, that BW gain is of little use, and if you don't need all that BW, it's not much use either. If next gen could get by on, say, 150 GB/s, and that were possible on a conventional bus at an affordable cost, eDRAM has no place. If the cost of RAM is going to cap it at, say, 50 GB/s, and you need more, then eDRAM offers a solution, but we face the issue of how much: if we need tens of MBs, it'll be too costly. And if we need both BW and RAM capacity, perhaps it's time to switch to TBDR.

eDRAM scales according to Moore's law. POWER7 has eDRAM on a logic process/die; the Xbox has eDRAM on a DRAM-ish process. Not comparable.
 
eDRAM scales according to Moore's law.
Yeah. I meant it in the same way French Toast seemed to be using it, to explain why seven years on from the XB360 should give us 150 MB of eDRAM. Obviously eDRAM, as a tech built on transistors, will still see exponential growth as long as chips see a doubling of transistors every two years, but its actual application doesn't follow suit in the way other uses of transistors (RAM amount and execution units) do. Otherwise the XB360 would have had more like 32 MB of eDRAM, and we'd be contemplating 256-512 MB of eDRAM in PS4, following on from 4 MB in PS2. :oops:

I don't know the ins and outs of eDRAM tech and its progression, though. AFAICT it's mostly because it's far less dense than normal DRAM that it's not popular, as you can fit more other stuff into the same transistor count. As for my current view of eDRAM in a console: the main advantage of eDRAM, its BW, can be negated by alternative rendering techniques that use those transistors more efficiently as logic, and just work smarter.
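Just to spell out the doubling arithmetic behind those figures:

```python
# The doubling math behind the "32 MB in 2006, 256-512 MB by now" point:
# start from PS2's 4 MB in 2000 and double every two years.
capacity_mb, year = 4, 2000
while year <= 2012:
    print(f"{year}: {capacity_mb} MB")
    capacity_mb *= 2
    year += 2
# -> 32 MB by 2006 (vs the 10 MB the 360 actually shipped) and 256 MB by 2012.
```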
 
nextgen xdr..

You really ought to have more substance to your post than this. It means absolutely nothing in the context of the thread. For starters, you fail to consider the economic feasibility of said technology, or even the implications for the memory controller; everyone's GPUs are built with GDDR5 in mind these days, so they'd have to do a lot more work to even consider XDR2 as a replacement. XDR2 is not some free ride - the I/O can take up a considerable amount of physical space. Hell, XDR2 isn't even being sampled for a real-world product. Do you actually have anything to add to the discussion, or are we supposed to read your mind? The thread is trying to weigh eDRAM against actual real-world alternatives, not mythical technology.
 
The way everyone is talking, eDRAM seems extremely expensive, so what about 128 MB of eDRAM? Seven years ago the 360 had 10 MB, after all...
It's about cost, which does not scale linearly with size. In simple words: every wafer has spots which are "broken". If your chip's die covers such a spot, it's most likely broken as well. The larger the die, the more probable it is that your die will be broken. The fewer working chips per wafer (the lower the yield), the higher the cost of production. 12x the capacity in MB is pretty much 12x the die area. If you want to use a smaller node (smaller transistors), you'll end up with a technology which is more difficult to control, and yield will be even lower. You can't go up a node, because your die would be so large that the clock would be "lagging" at certain points and you'd have problems synchronizing the chip; it would also produce more heat and consume more power. 128 MB would be prohibitively expensive and probably close to impossible to produce at this point in time.

At least that's how I understand it. ;)
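Here's the textbook version of that yield argument as a tiny sketch: the Poisson yield model, with made-up defect densities and die areas purely for illustration.

```python
import math

# Simple Poisson yield model: yield = exp(-defect_density * die_area).
# Defect density and die areas below are illustrative values, not real
# foundry numbers.
DEFECTS_PER_CM2 = 0.5

def die_yield(area_cm2: float) -> float:
    return math.exp(-DEFECTS_PER_CM2 * area_cm2)

for label, area_cm2 in [("~10 MB-class die", 0.5), ("~12x larger die", 6.0)]:
    print(f"{label}: {area_cm2} cm^2 -> ~{die_yield(area_cm2):.0%} yield")
# 12x the area doesn't just mean 12x fewer dies per wafer -- the yield collapse
# multiplies the cost per *good* die on top of that.
```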
 
You really ought to have more substance to your post than this. It means absolutely nothing in the context of the thread. For starters, you fail to consider the economic feasibility of said technology, or even the implications for the memory controller; everyone's GPUs are built with GDDR5 in mind these days, so they'd have to do a lot more work to even consider XDR2 as a replacement. XDR2 is not some free ride - the I/O can take up a considerable amount of physical space. Hell, XDR2 isn't even being sampled for a real-world product. Do you actually have anything to add to the discussion, or are we supposed to read your mind? The thread is trying to weigh eDRAM against actual real-world alternatives, not mythical technology.

Interface padding for XDR memory is actually fairly simple compared to any DDR-X standard, and takes up much less die area. Just look at the die shots of the original Cell/B.E. and PowerXCell.
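As a rough illustration of why per-pin signalling rate matters for pad area (the rates below are my assumptions, not datasheet values):

```python
# Data pins needed to hit a 200 GB/s target at different per-pin rates.
# Both per-pin figures are assumed for illustration, not vendor specs.
TARGET_GBIT_S = 200 * 8     # 200 GB/s expressed in Gbit/s

for label, gbps_per_pin in [("GDDR5-class (6 Gbps/pin)", 6.0),
                            ("XDR2-class (12.8 Gbps/pin)", 12.8)]:
    print(f"{label}: ~{TARGET_GBIT_S / gbps_per_pin:.0f} data pins")
# Fewer data pins for the same bandwidth generally means a smaller PHY/pad
# footprint, which is the point about die area above.
```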
 
If next-gen XDR delivers hundreds of GB/s (300-500 GB/s), it might not be necessary.

The thread is trying to weigh eDRAM against actual real-world alternatives, not mythical technology.
Mythical? XDR's low latency was chosen for Cell's performance, IIRC. Backwards compatibility with a next-gen unified memory would demand similarly low-latency memory; how does GDDR5 compare latency-wise?

I would assume Rambus has at least tried to address the issues that might hamper bringing such a product to market, unless they aren't serious about it being a viable future technology...

If we want to talk mythical, we'd be talking optical interconnects (which I expect for 2020s consoles).
 
This is the same argument as with controller width, but seeing as PC graphics cards continually use wide controllers and go nowhere near eDRAM, perhaps that gives us the answer.

PCs have to support multiple huge buffers simultaneously without tiling, and mainstream graphics are provided by integrated graphics with no dedicated bus at all (never mind 256-bit). eDRAM isn't an option for PCs; it wasn't in 2005 when the 360 launched, or in 2000 when the PS2 launched, either.

If the only reason we're not considering a wide bus is future cost reduction, it seems eDRAM is many times more expensive and just as hard to shrink.

How do you work that out?
 