How did eDRAM not work as intended for PS2?!

On the eDRAM issue, I think neither Microsoft nor Sony has any incentive to go that way again; it didn't work as intended for the PS2 or for the Xbox 360, for several reasons:
- 4 mo was insufficient
- it contributed heavily to making game development for the ps2 a nightmare for developers

instead of edram, at the same cost, if the PS2 had had, say, 16 mo or 32 mo of video RAM, it would have worked a lot better for the ps2: easier development, more and better-quality textures, bigger environments, etc.
No it wouldn't. The choice of EE and GS made development hard. The eDRAM provided the bandwidth that allowed PS2 to render what it could. Replacing that eDRAM with more, slower RAM would have completely crippled the machine. They'd have needed a completely different GPU architecture.
sorry but in interviews most ps2 developers disagree with you, pointing to the insufficient 4mo vram of the ps2 as the major bottleneck of the hardware.
As Gubbi says, that was a limitation, but the whole PS2 architecture needed eDRAM. Sure, more eDRAM would have been better, but replacing it with slow RAM would have completely crippled the machine. eDRAM was essential for PS2 and was very successful AFAICS. PS2 wasn't easy to develop for, but it did an okay job and excelled in some areas. By comparison, XBox was bandwidth starved.
That's a quantity issue. Developers always want more. Would they rather do without it? Absolutely not.
When the PS2 launched in early 2000, the state-of-the-art GeForce 2 had 5.3 GB/s of bandwidth. The PS2's eDRAM offered an order of magnitude more.
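That "order of magnitude" claim is easy to sanity-check. A quick sketch, assuming the commonly cited 48 GB/s figure for the GS eDRAM (the 5.3 GB/s GeForce 2 figure is from the post above):

```python
# Rough bandwidth comparison between year-2000 parts.
# 48 GB/s is the commonly cited PS2 GS eDRAM bandwidth (an assumption here);
# 5.3 GB/s is the GeForce 2's external memory bandwidth, as quoted above.
gs_edram_gb_s = 48.0
geforce2_gb_s = 5.3

ratio = gs_edram_gb_s / geforce2_gb_s
print(f"PS2 eDRAM has ~{ratio:.1f}x the bandwidth of a GeForce 2")  # ~9.1x
```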
if that was the case, and edram was a better choice for the ps2 than a larger amount of slower vram, then I have two questions:
1- if edram was perceived by ps2 developers as a successful choice, then why didn't sony use edram in its ps3 architecture?
2- why don't PC GPU manufacturers use edram in their GPUs?
Please don't use "MO" in international fora; people being excessively French isn't tolerated outside the borders of France.
as for the xbox360:
- 10 mo was insufficient
- it made development more difficult, especially when targeting 720p with anti-aliasing: it obliged developers to tile the image, which not only made development harder but also hurt the performance of the xenos GPU. In other words, a lot of the supposed benefits of edram were simply undermined by tiling.

instead of edram, at the same cost, microsoft could have used more RAM, say 768 mo instead of 512 mo. This, coupled with the superiority of xenos's unified shaders over RSX, would have made the 360 clearly superior to the ps3 in a lot of respects, and would have made multiplatform games far better on xbox360 than on ps3.

I disagree. EDRAM is one of the reasons why Xbox 360 multiplatform games look better than, or at least equal to, PS3 games. Without EDRAM the whole system would be very much bandwidth starved. Xbox 360 has a unified memory system after all (shared between CPU & GPU). The PS3 in comparison has dedicated graphics memory for the GPU (it doesn't have to share that bandwidth with the CPU).
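The tiling constraint mentioned above can be sanity-checked with back-of-the-envelope numbers. A minimal sketch, assuming 4 bytes per color sample and 4 bytes per depth/stencil sample (the usual rough figures, not numbers from the posts above):

```python
# Does a 720p 4xMSAA framebuffer fit in the Xbox 360's 10 MB of eDRAM?
# Assumes 32-bit color + 32-bit depth/stencil per sample.
import math

width, height = 1280, 720
samples = 4                   # 4x MSAA
bytes_per_sample = 4 + 4      # color + depth/stencil

framebuffer_bytes = width * height * samples * bytes_per_sample
edram_bytes = 10 * 1024 * 1024

tiles = math.ceil(framebuffer_bytes / edram_bytes)
print(f"framebuffer: {framebuffer_bytes / 2**20:.1f} MB -> {tiles} tiles")
# framebuffer: 28.1 MB -> 3 tiles
```

So under these assumptions the render target is roughly three times the eDRAM size, which is why the scene geometry had to be resubmitted per tile.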
Let me explain why a huge amount of low-bandwidth memory is not a good idea. Slow memory is pretty much unusable, simply because we can't access it fast enough.
The GDDR3 memory subsystem in current-generation consoles gives a theoretical maximum of 10.8 GB/s read/write bandwidth (both directions). For a 60 fps game this is 0.18 GB per frame, or 184 MB, assuming of course that you are fully memory-bandwidth bound at all times and there's no cache thrashing, etc. happening. In practice some of that bandwidth gets wasted, so you might be able to access, for example, 100 MB per frame (if you try to access more, the frame rate will drop).
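The per-frame budget above follows directly from the quoted peak bandwidth:

```python
# Per-frame memory-traffic budget, using the numbers from the post above.
bandwidth_gb_s = 10.8   # theoretical GDDR3 read+write bandwidth
fps = 60

gb_per_frame = bandwidth_gb_s / fps      # 0.18 GB per frame
mb_per_frame = gb_per_frame * 1024       # GB -> MB as x1024, matching the post
print(f"{mb_per_frame:.0f} MB of traffic per frame at {fps} fps, at best")
# 184 MB of traffic per frame at 60 fps, at best
```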
So with 10.8 GB/s of theoretical bandwidth, you cannot access much more than 100 MB of memory per frame, and memory accesses do not change that much from frame to frame, as camera and object movement has to be relatively slow for animation to look smooth (especially true at 60 fps). How much more memory than 100 MB do you need, then? It depends on how fast you can stream data from the hard drive, and how well you can predict the data you will need in the future (latency is the most important thing here). 512 MB has proven to be enough for our technology, as we use virtual texturing. The only reason we couldn't use 4k*4k textures on every single object was the downloadable package size (we do digitally distributed games); the 512 MB of memory was never a bottleneck for us.
Of course there are games that have more random memory-access patterns and have to keep bigger portions of the game world in memory at once. However, no matter what, these games cannot access more than ~100 MB of memory per frame. If you can predict correctly and hide latency well, you can keep most of your data on your HDD and stream it on demand. Needless to say, I am a fan of EDRAM and other fast-memory techniques. I would always opt for small fast memory instead of large slow memory, assuming of course we can stream from HDD or from flash memory (disc streaming is very awkward because of its high latency).
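The stream-on-demand argument can be sketched as a simple feasibility check. All numbers below are illustrative assumptions (the device rate, the utilisation factor, the demand figures), not data from the posts above:

```python
# Toy streaming-feasibility check: can the storage device deliver new data
# fast enough to keep a small resident working set up to date?
def streaming_ok(new_data_mb_per_s: float, device_mb_per_s: float,
                 utilisation: float = 0.5) -> bool:
    """True if the device can sustain the demand.

    `utilisation` discounts seeks and latency: a device rarely delivers
    its sequential rate when servicing many small on-demand reads.
    """
    return new_data_mb_per_s <= device_mb_per_s * utilisation

# e.g. a slow camera pan invalidating ~10 MB/s of textures vs. a ~50 MB/s HDD
print(streaming_ok(10.0, 50.0))   # True: streaming keeps up
print(streaming_ok(40.0, 50.0))   # False: demand exceeds the usable rate
```

This is why prediction matters: the lower and smoother the per-second demand, the smaller the resident set you can get away with.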
DoctorFouad said:
1- if edram was perceived by ps2 developers as a successful choice, then why didn't sony use edram in its ps3 architecture?

I'm pretty sure that before RSX happened, they did.
pointing to the insufficient 4mo vram of the ps2 as the major bottleneck of the hardware

Having more is always better - but if you want to talk critical bottlenecks, I'd start with 32 MB of main memory. And a 300 MHz in-order CPU with no L2 cache.
GraphicsCodeMonkey said:
The PS2 GPU was a fillrate monster; it had far more fill rate than you could effectively utilise for mesh rendering, which meant that you bottlenecked on vertex processing.

I felt that mantra originated from Sony's over-focus on polygon pushing in the early PS2 days, from marketing to their communication to developers.