Why did PS2 use eDRAM? *spawn

How did eDRAM not work as intended for PS2?!
For several reasons:
- 4 mo was insufficient
- it contributed heavily to making PS2 game development a nightmare for developers

Instead of eDRAM, at the same cost, if the PS2 had had, say, 16 mo or 32 mo of video RAM, it would have worked out a lot better: easier development, more and higher-quality textures, bigger environments, etc.

As for the Xbox 360:
- 10 mo was insufficient
- it made development more difficult, especially when targeting 720p with anti-aliasing: it forced developers to tile the image, which not only made development harder but also hurt the performance of the Xenos GPU. In other words, a lot of the supposed benefits of eDRAM were simply negated by tiling.

Instead of eDRAM, at the same cost, Microsoft could have used more RAM, say 768 mo instead of 512 mo. This, coupled with the superiority of Xenos's unified shaders over RSX, would have made the 360 clearly superior to the PS3 in a lot of respects, and would have made multiplatform games far better on Xbox 360 than on PS3.

I believe Sony and Microsoft have learned their lesson and won't repeat another eDRAM misadventure with their next-gen consoles.
 
For several reasons:
- 4 mo was insufficient
- it contributed heavily to making PS2 game development a nightmare for developers

Instead of eDRAM, at the same cost, if the PS2 had had, say, 16 mo or 32 mo of video RAM, it would have worked out a lot better: easier development, more and higher-quality textures, bigger environments, etc.
No it wouldn't. The choice of EE and GS made development hard. The eDRAM provided the bandwidth that allowed PS2 to render what it could. Replacing that eDRAM with more, slower RAM would have completely crippled the machine. They'd have needed a completely different GPU architecture.
 
No it wouldn't. The choice of EE and GS made development hard. The eDRAM provided the bandwidth that allowed PS2 to render what it could. Replacing that eDRAM with more, slower RAM would have completely crippled the machine. They'd have needed a completely different GPU architecture.

Sorry, but in interviews most PS2 developers disagree with you, pointing to the insufficient 4 mo of VRAM as the major bottleneck of the hardware. ;)
 
Sorry, but in interviews most PS2 developers disagree with you, pointing to the insufficient 4 mo of VRAM as the major bottleneck of the hardware. ;)

That's a quantity issue. Developers always want more. Would they rather do without it? Absolutely not.

When the PS2 launched in early 2000, the state-of-the-art GeForce 2 had 5.3 GB/s of bandwidth. The PS2's eDRAM offered an order of magnitude more.

Cheers
 
Sorry, but in interviews most PS2 developers disagree with you, pointing to the insufficient 4 mo of VRAM as the major bottleneck of the hardware. ;)
As Gubbi says, that was a limitation, but the whole PS2 architecture needed eDRAM. Sure, more eDRAM would have been better, but replacing it with slow RAM would have completely crippled the machine. eDRAM was essential for PS2 and was very successful AFAICS. PS2 wasn't easy to develop for, but it did an okay job and excelled in some areas. By comparison, Xbox was bandwidth-starved.
 
As Gubbi says, that was a limitation, but the whole PS2 architecture needed eDRAM. Sure, more eDRAM would have been better, but replacing it with slow RAM would have completely crippled the machine. eDRAM was essential for PS2 and was very successful AFAICS. PS2 wasn't easy to develop for, but it did an okay job and excelled in some areas. By comparison, Xbox was bandwidth-starved.

That's a quantity issue. Developers always want more. Would they rather do without it? Absolutely not.

When the PS2 launched in early 2000, the state-of-the-art GeForce 2 had 5.3 GB/s of bandwidth. The PS2's eDRAM offered an order of magnitude more.

If that was the case and eDRAM was a better choice for the PS2 than a larger quantity of slower VRAM, then I have 2 questions:

1- If eDRAM was perceived by PS2 developers as a successful choice, then why didn't Sony use eDRAM for its PS3 architecture?

2- Why don't PC GPU manufacturers use eDRAM for their GPUs?
 
If that was the case and eDRAM was a better choice for the PS2 than a larger quantity of slower VRAM, then I have 2 questions:

1- If eDRAM was perceived by PS2 developers as a successful choice, then why didn't Sony use eDRAM for its PS3 architecture?

2- Why don't PC GPU manufacturers use eDRAM for their GPUs?

The answer lies in your own question.
The PS3 is a different architecture. So are PC GPUs.
What Shifty is telling you is that the eDRAM was needed for the PS2's own architecture. The GS was not your typical GPU. For THAT particular design, eDRAM was the better choice. He is not saying that eDRAM is better in every case.
Sony went for a more conventional GPU for the PS3. eDRAM might have been included if the PS3 had been an evolution of the PS2's architecture. But it wasn't.
 
PS2 was designed around a very different rendering approach to conventional GPUs, at a time when there were no standards and PS2's approach was a valid option with various pros/cons. Xbox and the typical GPUs work by doing a lot of work on each pixel until it's complete in one pass. You'd apply several textures and shading calculations in one pass and create a final pixel value for output. PS2 is designed to render more like one effect at a time. You'd render the whole screen with each triangle's first texture. Then render the whole screen with a second texture and combine. Then render the whole screen with some lighting modifications. It's similar to deferred rendering combining lots of buffers, only it's not deferred rendering. It's the difference between slow-and-complex (standard GPU) and fast-but-simple (PS2). The different approaches have their advantages and disadvantages. Fast and simple only works with lots of bandwidth, to read and write lots and lots of pixels as you gradually build up the final image through lots of passes - hence the need for eDRAM to enable affordable massive bandwidth.
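
To make the contrast concrete, here is a minimal CPU-side sketch of the two styles. It is purely illustrative (not actual GS or NV2A code) and the helper functions are made up; the point is only that the multipass style re-reads and re-writes every framebuffer pixel once per effect, which is where the bandwidth demand comes from.

```c
#include <stdint.h>

#define W 640
#define H 448

/* Toy stand-ins for a texture combine and a lighting stage. */
static uint32_t combine(uint32_t a, uint32_t b) { return (a >> 1) + (b >> 1); }
static uint32_t apply_lighting(uint32_t c)      { return c >> 1; }

/* Conventional GPU style ("slow and complex"): all work done per pixel in
 * one pass, each final pixel written once. */
void shade_single_pass(uint32_t *fb, const uint32_t *tex0, const uint32_t *tex1)
{
    for (int i = 0; i < W * H; i++)
        fb[i] = apply_lighting(combine(tex0[i], tex1[i]));   /* one framebuffer write */
}

/* PS2 style ("fast but simple"): one effect at a time over the whole screen,
 * building the final image up in the framebuffer. Every pass re-reads and
 * re-writes every pixel, which is why bandwidth dominates. */
void shade_multi_pass(uint32_t *fb, const uint32_t *tex0, const uint32_t *tex1)
{
    for (int i = 0; i < W * H; i++) fb[i] = tex0[i];                 /* pass 1: base texture   */
    for (int i = 0; i < W * H; i++) fb[i] = combine(fb[i], tex1[i]); /* pass 2: second texture */
    for (int i = 0; i < W * H; i++) fb[i] = apply_lighting(fb[i]);   /* pass 3: lighting       */
}
```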

Current GPUs including PS3, working with the slow-and-complex approach, don't need as much bandwidth, instead using processing power to perform lots of calculations per pixel. They can be served reasonably well with moderate RAM speeds. Of course, they can benefit from massive bandwidth for rendering effects where that's useful, just as a PS2 could benefit from more RAM. You can always use more! But PC GPUs don't need eDRAM.

eDRAM is basically a cost choice. It'd be great to have the bandwidth of eDRAM with the capacity of normal RAM, but getting that performance from RAM is either stupidly expensive or technically impossible. Likewise, eDRAM made in those capacities (let's say 256 MB eDRAM for XB360) is just impossible. So a compromise is reached where eDRAM provides a fast working space (like CPU cache) at restricted capacity, with the contents being moved to/from RAM as you go. eDRAM is just another tier in the hierarchy of RAM speeds. Ideally we'd have processors working with L1 level cache speeds, but that's not possible. So we have lower tiers, each tier becoming slower but also more capacious. eDRAM sits between L1/L2 cache and RAM.

There's a whole other thread on whether there's a future for eDRAM if you want to look.
 
Sorry, but in interviews most PS2 developers disagree with you, pointing to the insufficient 4 mo of VRAM as the major bottleneck of the hardware. ;)
Please don't use "MO" in international fora; people being excessively French isn't tolerated outside the borders of France.

The correct term is "MB".

Seriously though, I'd like you to objectively verify your claim that "most developers" think the eDRAM was a failure. I contend that you simply can't do that.

Besides, it wasn't a failure, as without it the PS2 would have been vastly weaker in performance, and graphics would have suffered greatly as a result. Simply blaming its limited size isn't going to work, as there were techniques to get around that issue quite effectively.
 
The PS2 GPU was a fillrate monster; it had far more fill rate than you could effectively utilise for mesh rendering, which meant that you bottlenecked on vertex processing.

The lack of VRAM but huge buckets of fillrate made stencil shadows attractive over shadow maps + we could have quite a lot of large particles.

I'd have traded fill for more VRAM on PS2 no questions asked.

Ideally, perhaps, if you were going to insist on embedded RAM, 2 MB of eDRAM (640x512, 32-bit colour + 16-bit Z = 1.9 MB) plus 4 or 8 MB of VRAM would have made quite a difference.
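
A quick arithmetic check of that framebuffer budget, using just the numbers given above (a small sketch, nothing more):

```c
/* Framebuffer-only eDRAM budget: 640x512, 32-bit colour + 16-bit Z. */
#include <stdio.h>

int main(void)
{
    const int w = 640, h = 512;
    const int colour_bytes = 4;   /* 32-bit colour */
    const int z_bytes      = 2;   /* 16-bit depth  */

    double mb = (double)w * h * (colour_bytes + z_bytes) / (1024.0 * 1024.0);
    printf("colour buffer + Z: %.2f MB\n", mb);   /* ~1.88 MB, so ~2 MB of eDRAM */
    return 0;
}
```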

The Xenos could have done with a little more embedded RAM, it's true.

All of this is not a reason to say that embedded RAM is a bad idea, just that we need enough of it and that it shouldn't constitute all of the VRAM either.
 
If that was the case and eDRAM was a better choice for the PS2 than a larger quantity of slower VRAM, then I have 2 questions:

1- If eDRAM was perceived by PS2 developers as a successful choice, then why didn't Sony use eDRAM for its PS3 architecture?

2- Why don't PC GPU manufacturers use eDRAM for their GPUs?

PS2's GPU was designed at a point where programmable shading was still basically register combiners.
Rather than building a chain of combiners, the PS2 instead used a simple ROP with the intent of combining in the target buffer.
It gave PS2 a huge advantage for simple transparent overdraw.
If anything the GPU's weakness was the lack of a full set of ROP functions.
You could produce prettier pixels IMO on an Xbox, but you were always having to watch fill rate.

Using logic to reduce bandwidth (compressed Z/colour etc.) is the way PC manufacturers eventually went, but they are playing in an environment where they can stick fast memory on 256-bit and wider buses.

In retrospect I don't think combining in the destination buffer was very forward-looking, but for the time it was a pretty good solution.

The complexity developers whine about had more to do with the VUs and managing DMA lists (which had poor tools), and the lack of hardware support for trivial things like near-plane clipping.
 
The eDRAM on PS2 wasn't just a bandwidth/fillrate advantage; it must also have helped latency. In particular, having the ROPs built into the RAM makes them much lower latency. That also had to facilitate the "fast but simple" mentality, requiring fewer latency-hiding mechanisms in the rest of the pipeline.
 
As for the Xbox 360:
- 10 mo was insufficient
- it made development more difficult, especially when targeting 720p with anti-aliasing: it forced developers to tile the image, which not only made development harder but also hurt the performance of the Xenos GPU. In other words, a lot of the supposed benefits of eDRAM were simply negated by tiling.

Instead of eDRAM, at the same cost, Microsoft could have used more RAM, say 768 mo instead of 512 mo. This, coupled with the superiority of Xenos's unified shaders over RSX, would have made the 360 clearly superior to the PS3 in a lot of respects, and would have made multiplatform games far better on Xbox 360 than on PS3.
I disagree. EDRAM is one of the reasons why Xbox 360 multiplatform games look better than, or at least equal to, the PS3 versions. Without EDRAM the whole system would be very much bandwidth-starved. Xbox 360 has a unified memory system after all (shared between CPU & GPU). The PS3 in comparison has dedicated graphics memory for the GPU (it doesn't have to share the bandwidth with the CPU).

If you want further proof, look at PC benchmarks that include systems with unified memory. The performance of Llano and Trinity APUs, for example, scales up nicely if you pair them with faster memory. This indicates that systems with unified memory are often bandwidth bound. Even a limited amount of EDRAM helps a lot if your system is bandwidth bound (front buffer / z-buffer operations cost a lot of BW, especially when blending and antialiasing are enabled).
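
To put rough numbers on that framebuffer cost, here is a back-of-the-envelope sketch. The overdraw factor and per-sample byte counts are my own illustrative assumptions, and real hardware colour/Z compression would reduce the traffic, so treat the result as order-of-magnitude only.

```c
/* Rough illustration: colour + Z traffic at 720p with blending and 4xMSAA. */
#include <stdio.h>

int main(void)
{
    const double pixels   = 1280.0 * 720.0;
    const double samples  = 4.0;   /* 4xMSAA: every ROP operation touches 4 samples */
    const double overdraw = 3.0;   /* assumed average overdraw per frame            */
    const double fps      = 60.0;

    /* Per sample written: Z read + Z write + colour read (blending) + colour
     * write = 4 + 4 + 4 + 4 bytes, ignoring any compression.                  */
    const double bytes_per_sample = 16.0;

    double gb_per_s = pixels * samples * overdraw * bytes_per_sample * fps / 1e9;
    printf("framebuffer traffic alone: ~%.1f GB/s\n", gb_per_s);   /* ~10.6 GB/s */
    return 0;
}
```

Even with these modest assumptions, the ROP traffic alone wants on the order of 10 GB/s, which is a big slice of a shared GDDR3 bus; moving it into EDRAM frees the external bus for the CPU and texture fetches.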

Why not EDRAM for PCs? The reason is simple. PC graphics APIs are designed to hide hardware specifics, so it is hard to code around EDRAM size limitations. PCs have displays with variable resolutions (ranging up to 2560x1600), and graphics cards must be able to render efficiently at all of the requested resolutions. Basically, it's not cost-efficient to put enough EDRAM on mainstream boards for resolutions up to 2560x1600 to fit into the EDRAM (with up to 4x16-bit float colors, 4xMRT deferred rendering and 8xMSAA). The PC requires hardware flexibility (while EDRAM works best in closed systems like consoles).
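
To see why that worst case can't fit in any affordable amount of EDRAM, here is the arithmetic for the configuration listed above; the 32-bit depth/stencil format is my assumption.

```c
/* Worst case described for PC: 2560x1600, 4 MRTs of 4x16-bit float, 8xMSAA. */
#include <stdio.h>

int main(void)
{
    const double pixels  = 2560.0 * 1600.0;
    const double samples = 8.0;          /* 8xMSAA                            */
    const double colour  = 4 * 8.0;      /* 4 MRTs x RGBA16F = 32 bytes/sample */
    const double depth   = 4.0;          /* assumed 32-bit depth/stencil       */

    double mb = pixels * samples * (colour + depth) / (1024.0 * 1024.0);
    printf("render targets alone: ~%.0f MB of EDRAM\n", mb);   /* ~1125 MB */
    return 0;
}
```

That is over a gigabyte of EDRAM for the render targets alone, versus 10 MB on Xenos, which is why the approach only makes sense in a closed system with a known worst case.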

I am an Xbox 360 developer, and I wouldn't trade my 10 MB of EDRAM for 256 MB of extra (slow) memory. Tiling isn't a problem; it isn't used in most current-generation (deferred rendering) engines anymore. Of course EDRAM requires a little bit more bookkeeping, but the performance gains are definitely worth it. And you have memexport if you want to write directly to arbitrary memory locations (outside the EDRAM), so it's a very flexible system indeed.
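
For readers unfamiliar with what tiling actually involves, here is a generic sketch. It is not the real Xbox 360 predicated-tiling API; every function below is a hypothetical stand-in, and the buffer formats are assumptions. The render target is split into strips that each fit in the 10 MB of EDRAM, the scene is submitted once per strip, and each finished strip is resolved out to main memory.

```c
#include <stdio.h>

#define EDRAM_BYTES   (10 * 1024 * 1024)
#define SCREEN_W      1280
#define SCREEN_H      720
#define BYTES_PER_PIX 32              /* 4xMSAA: 4 x (32-bit colour + 32-bit Z) */

typedef struct { int x, y, w, h; } Rect;

/* Stubs standing in for the real GPU submission calls. */
static void set_scissor(Rect r)            { (void)r; }
static void draw_scene(void)               { /* would re-submit all draw calls */ }
static void resolve_to_main_memory(Rect r) { printf("resolve rows %d..%d\n", r.y, r.y + r.h - 1); }

int main(void)
{
    int rows_per_tile = EDRAM_BYTES / (SCREEN_W * BYTES_PER_PIX);   /* 256 rows here */
    for (int y = 0; y < SCREEN_H; y += rows_per_tile) {
        Rect tile = { 0, y, SCREEN_W,
                      (SCREEN_H - y < rows_per_tile) ? SCREEN_H - y : rows_per_tile };
        set_scissor(tile);             /* clip everything to this strip         */
        draw_scene();                  /* geometry outside the strip is culled  */
        resolve_to_main_memory(tile);  /* copy the finished strip to main RAM   */
    }
    return 0;
}
```

In this sketch a 720p 4xMSAA target works out to three strips, i.e. three passes over the geometry, which is the overhead the thread is arguing about.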
 
Let me explain why a huge amount of low-bandwidth memory is not a good idea. Slow memory is pretty much unusable, simply because we can't access it :)

The GDDR3 memory subsystem in current-generation consoles gives a theoretical maximum of 10.8 GB/s read/write bandwidth (both directions). For a 60 fps game this is 0.18 GB per frame, or 184 MB, assuming of course that you are fully memory-bandwidth bound at all times and there's no cache thrashing, etc. happening. In practice some of that bandwidth gets wasted, so you might be able to access, for example, 100 MB per frame (if you try to access more, the frame rate will drop).
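
Spelled out as a tiny calculation (using only the figures in this post), the per-frame budget looks like this:

```c
/* Peak bytes you can touch in one frame at a given bandwidth and frame rate. */
#include <stdio.h>

int main(void)
{
    const double bandwidth_gb_s = 10.8;   /* theoretical peak, both directions */
    const double fps            = 60.0;

    double gb_per_frame = bandwidth_gb_s / fps;      /* 0.18 GB          */
    double mb_per_frame = gb_per_frame * 1024.0;     /* ~184 MB at peak  */
    printf("%.2f GB (%.0f MB) per frame at peak\n", gb_per_frame, mb_per_frame);
    return 0;                                        /* in practice ~100 MB */
}
```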

So with 10.8 GB/s of theoretical bandwidth, you cannot access much more than 100 MB of memory per frame, and memory accesses do not change that much from frame to frame, as camera & object movement has to be relatively slow for animation to look smooth (especially true at 60 fps). How much more memory than 100 MB do you need, then? It depends on how fast you can stream data from the hard drive, and how well you can predict the data you will need in the future (latency is the most important thing here). 512 MB has proven to be enough for our technology, as we use virtual texturing. The only reason we couldn't use 4k*4k textures on every single object was the downloadable package size (we do digitally distributed games); the 512 MB of memory was never a bottleneck for us.

Of course there are games that have more random memory access patterns and have to keep bigger portions of the game world in memory at once. However, no matter what, these games cannot access more than ~100 MB of memory per frame. If you can predict correctly and hide latency well, you can keep most of your data on your HDD and stream it on demand. Needless to say, I am a fan of EDRAM and other fast-memory techniques. I would always opt for small fast memory instead of large slow memory, assuming of course we can stream from HDD or from flash memory (disc streaming is quite awkward because of the high latency).
 
Sebbi: What about, say, laptops and tablets, which are generally designed around a fixed specification? 1920x1080 seems to be standardising in both form factors, so why not have specific laptop SKUs taking advantage of the technology? If you had, say, a CGPU, why not use a small quantity of eDRAM off-die for additional memory bandwidth?
 
Let me explain why a huge amount of low-bandwidth memory is not a good idea. Slow memory is pretty much unusable, simply because we can't access it :)

The GDDR3 memory subsystem in current-generation consoles gives a theoretical maximum of 10.8 GB/s read/write bandwidth (both directions). For a 60 fps game this is 0.18 GB per frame, or 184 MB, assuming of course that you are fully memory-bandwidth bound at all times and there's no cache thrashing, etc. happening. In practice some of that bandwidth gets wasted, so you might be able to access, for example, 100 MB per frame (if you try to access more, the frame rate will drop).

So with 10.8 GB/s of theoretical bandwidth, you cannot access much more than 100 MB of memory per frame, and memory accesses do not change that much from frame to frame, as camera & object movement has to be relatively slow for animation to look smooth (especially true at 60 fps). How much more memory than 100 MB do you need, then? It depends on how fast you can stream data from the hard drive, and how well you can predict the data you will need in the future (latency is the most important thing here). 512 MB has proven to be enough for our technology, as we use virtual texturing. The only reason we couldn't use 4k*4k textures on every single object was the downloadable package size (we do digitally distributed games); the 512 MB of memory was never a bottleneck for us.

Of course there are games that have more random memory access patterns and have to keep bigger portions of the game world in memory at once. However, no matter what, these games cannot access more than ~100 MB of memory per frame. If you can predict correctly and hide latency well, you can keep most of your data on your HDD and stream it on demand. Needless to say, I am a fan of EDRAM and other fast-memory techniques. I would always opt for small fast memory instead of large slow memory, assuming of course we can stream from HDD or from flash memory (disc streaming is quite awkward because of the high latency).

Thanks for that post!

A question though, if I may: as you've stated that you would always choose smaller, faster memory over big and slow, is there any situation that would change your mind?

Is there any chance you could estimate, even roughly, the amount of bandwidth the consoles would need to be perfectly balanced?

I remember the launch of the AMD 5770: many were complaining that its lack of memory bandwidth compared to the AMD 4870 it was based on would kill its performance. AMD claimed the card was perfectly balanced, and indeed tests showed that it was.

What would we be looking at for the next-generation consoles? 100 GB/s minimum?
 
The PS2's GS was finished and locked down before they started developing the Emotion Engine. That's why the GS looked pretty ancient by the time the PS2 came out.
 
PS2's eDRAM not only stored the frame/Z buffer but also the texture data for the primitives being rendered, and the PS2 didn't support texture compression like DXT.
So besides the framebuffer, another role of the PS2's eDRAM was as a texture cache. But unlike the GC's 1T-SRAM and desktop GPUs, that cache logic had to be implemented by the developer from scratch, which obviously increased the difficulty of development.
On the X360, texture data stays in main memory. It may suffer from low memory bandwidth or a small texture cache under some conditions, but for developers it is much easier to work with than the PS2.
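
As a rough illustration of the kind of hand-rolled texture management described above, here is a toy round-robin scheme; it is not real PS2 library code, and every name in it is hypothetical.

```c
#include <stdio.h>

#define TEX_POOL_SLOTS 4   /* e.g. room for a few small 32-bit textures in eDRAM */

static int slot_owner[TEX_POOL_SLOTS] = { -1, -1, -1, -1 };  /* texture id per slot  */
static int next_slot = 0;                                    /* round-robin eviction */

/* Stand-in for a DMA transfer from main RAM into the eDRAM texture area. */
static void upload_texture_to_edram(int tex_id, int slot)
{
    printf("DMA texture %d into eDRAM slot %d\n", tex_id, slot);
}

/* Return the eDRAM slot holding tex_id, uploading (and evicting) if needed. */
int bind_texture(int tex_id)
{
    for (int s = 0; s < TEX_POOL_SLOTS; s++)
        if (slot_owner[s] == tex_id)
            return s;                              /* already resident in eDRAM */

    int s = next_slot;                             /* evict whatever is oldest  */
    next_slot = (next_slot + 1) % TEX_POOL_SLOTS;
    slot_owner[s] = tex_id;
    upload_texture_to_edram(tex_id, s);
    return s;
}
```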
 
DoctorFouad said:
1- If eDRAM was perceived by PS2 developers as a successful choice, then why didn't Sony use eDRAM for its PS3 architecture?
I'm pretty sure that before RSX happened, they did.

pointing to the insufficient 4 mo of VRAM as the major bottleneck of the hardware
Having more is always better, but if you want to talk critical bottlenecks I'd start with the 32 MB of main memory. And a 300 MHz in-order CPU with no L2 cache.

GraphicsCodeMonkey said:
The PS2 GPU was a fillrate monster; it had far more fill rate than you could effectively utilise for mesh rendering, which meant that you bottlenecked on vertex processing.
I felt that mantra originated from Sony's over-focus on polygon pushing in early PS2 days, from marketing to their communication to developers.
Ultimately the fillrate was never too much when it came to making pixels look prettier - I felt things were pretty well balanced in the end. It was the CPU core that came out as the weakest link most of the time.
 