Why's XDRAM so slow?

Shifty Geezer

As I understand it, there's 256 MB of XDR DRAM at 3.2 GHz and 256 MB of GDDR3 at 700 MHz on PS3. With XDR being 4x the clock speed, why isn't it 4x the bandwidth? :?
 
The XDR controllers are 32 bits wide each, and CELL has two of them. Each 512 Mbit DRAM is x16 (16 bits wide), so there's a total of four chips for 256 MB.

64 bits * 3.2 GT/s = 25.6 GB/s. It also suggests that the DRAM core clock for these chips is 400 MHz. Perfectly normal for soldered-in DRAMs.
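For anyone who wants to check the arithmetic, a minimal sketch using the figures quoted in this thread (not official specs):

```python
# Peak bandwidth = bus width (in bytes) * transfer rate (GT/s).
# Figures are the ones quoted in this thread, not official specs.

def peak_bandwidth_gbs(bus_bits: int, transfers_gts: float) -> float:
    """Peak bandwidth in GB/s for a bus of bus_bits at transfers_gts GT/s."""
    return (bus_bits / 8) * transfers_gts

# CELL's two 32-bit XDR channels = 64 bits total at 3.2 GT/s
print(peak_bandwidth_gbs(64, 3.2))   # 25.6 GB/s
```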
 
Why did Sony go with XDR then if it's no faster than GDDR3? Forward thinking? Hoped for a higher clock speed GDDR3 couldn't match?
 
You know, I was wondering the same thing. They have 2 pools, one XDR and one GDDR, and they're pretty much equal in speed. So why go for a more expensive XDR setup when they could have just put in another pool of GDDR?

I'm sure I'm missing something.
 
Shifty Geezer said:
Why did Sony go with XDR then if it's no faster than GDDR3? Forward thinking? Hoped for a higher clock speed GDDR3 couldn't match?

No. Sony went with this ram because it sounded cooler. I mean, XDR, FlexIO....ooooohh :LOL:
 
Shifty Geezer said:
Why did Sony go with XDR then if it's no faster than GDDR3? Forward thinking? Hoped for a higher clock speed GDDR3 couldn't match?

I think they were planning a UMA architecture from the beginning; when they had to use dedicated memory for the GPU, they cut two of the XDR channels.
 
The latency is lower because of the higher clock rate. This is required for the SPEs.
 
XDR was supposed to help make the system cheaper since it requires way fewer pins/chips/traces than other memory technologies for a given throughput. The startup costs are high, but when you expect to sell 100 million units, the amortized cost would probably come out lower. But much of those system savings are gone now that Sony had to include GDDR3 for the Nvidia GPU. Mating Nvidia to XDR, a memory technology they have no experience with, on such a tight schedule was infeasible.

XDR can be much faster than GDDR3 for the same pin count--too bad Sony didn't go with more XDR channels per cell. The only way to increase the number of channels is by increasing the number of cells. Placing two 4-SPE cells in PS3 would give the system 51GB/s of XDR bandwidth! I think that would make for a pretty killer system, even though the GFLOPS would only be 30-ish higher.
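To put rough numbers on the pin-count point, here is a sketch using the per-pin data rates quoted in this thread; it counts only data pins and ignores command/address/power pins:

```python
# Rough throughput per data pin, using rates quoted in this thread.
# Only data pins are counted, so this is just a sketch.
def gb_per_s_per_pin(transfers_gts: float) -> float:
    """GB/s delivered by a single data pin at the given transfer rate."""
    return transfers_gts / 8

print(gb_per_s_per_pin(3.2))   # XDR:   0.4   GB/s per pin
print(gb_per_s_per_pin(1.4))   # GDDR3: 0.175 GB/s per pin

# Doubling CELL's 64-bit XDR interface to 128 bits:
print(128 / 8 * 3.2)           # 51.2 GB/s, the figure mentioned above
```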
 
1. I do not expect XDR to get cheaper in the long run. It is a superior technology and Rambus is licensing it as such. Not to mention GDDR3 is making its way down the line quickly.

2. I think the memory layout in the PS3 is the result of the "last minute" (i.e. not original plan) inclusion of the RSX.

CELL was designed around XDR
G70 was designed around GDDR3

There just was not enough time to implement a new design for the GPU to use XDR, in my opinion. Therefore the ~50GB/s UMA pool of XDR became 2 pools, one of XDR and one of GDDR3.

While I have questions about the design efficiency, there is no denying there is a TON of bandwidth in the PS3 design overall. Think of it this way:

The PS3 has almost 2x the bandwidth of the Xbox 360. And the XDR is part of it... so 50 GB/s in one pool or 25 GB/s in each of two separate pools is the same ballpark. I would have preferred 50 GB/s of XDR in a UMA, but the PS3 has the next best thing.

Well, next best thing not called eDRAM ;)
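For what it's worth, a sketch of the totals behind that comparison. The Xbox 360 figure here (a 128-bit GDDR3 bus at 1.4 GT/s) is my assumption, and the 360's eDRAM is deliberately ignored:

```python
# Aggregate peak bandwidth, using figures quoted in this thread.
# The Xbox 360 figure (128-bit GDDR3 at 1.4 GT/s) is an assumption,
# and the 360's eDRAM is deliberately ignored.
xdr_pool   = 64  / 8 * 3.2   # 25.6 GB/s (CELL's XDR)
gddr3_pool = 128 / 8 * 1.4   # 22.4 GB/s (RSX's GDDR3)

ps3_total  = xdr_pool + gddr3_pool   # 48.0 GB/s across two pools
x360_total = 128 / 8 * 1.4           # 22.4 GB/s unified pool

print(ps3_total / x360_total)        # ~2.1x
```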
 
Isn't XDR octal data rate? So that would mean a 3.2 GHz clock speed translates into an actual clock of 400 MHz. Likewise with GDDR3, it has 4 internal buses to the RAM chips, meaning 1.4 GHz is actually 350 MHz.
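If those multipliers are right (octal for XDR, 4n for GDDR3, as stated above), the core clocks fall straight out:

```python
# Core clock = effective transfer rate / data-rate multiplier.
# Multipliers as stated in the post above, not independently verified.
def core_clock_mhz(effective_mhz: float, multiplier: int) -> float:
    """Core clock in MHz given the effective data rate and multiplier."""
    return effective_mhz / multiplier

print(core_clock_mhz(3200, 8))  # XDR:   400.0 MHz
print(core_clock_mhz(1400, 4))  # GDDR3: 350.0 MHz
```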
 
Acert93 said:
1. I do not expect XDR to get cheaper in the long run. It is a superior technology and Rambus is licensing it as such. Not to mention GDDR3 is making its way down the line quickly.

2. I think the memory layout in the PS3 is the result of the "last minute" (i.e. not original plan) inclusion of the RSX.

CELL was designed around XDR
G70 was designed around GDDR3

There just was not enough time to implement a new design for the GPU to use XDR, in my opinion. Therefore the ~50GB/s UMA pool of XDR became 2 pools, one of XDR and one of GDDR3.


They have about a year left... is that not long enough to adapt the GPU for XDR use? :?
 
The non-uniform memory system seems to imply that the switch to Nvidia happened quite late. This will probably have implications for their ability to reduce manufacturing costs in further revisions.
 
The CPU needs low latency to memory, but the GPU doesn't care; its only real wish is high bandwidth. Considering XDR is currently more expensive on a per-GB/s basis, this split should give higher performance for the money than a "full XDR" solution.
CELL was designed entirely with XDR in mind. You can't just put GDDR3 in there and hope for the best, even if you revamped all the memory controllers. This is a design choice, not a "lack of time" thing.
 
Alstrong said:
They have about a year left... is that not long enough to adapt the GPU for XDR use? :?

If RSX ("G70") is an off the shelf GPU as it appears (and many of us thought based on the vague statements, "The technology has been in development for 18mo" back this winter... they never specifically said the RSX was specifically being developed for PS3 for 18mo) it would appear nVidia had limited time.

Since the chip is planned to tape out this fall, that does not give them a ton of time. Even if we are to expect it this summer on the PC, as rumoured, that does not give them much time to adapt the interface from GDDR3 to XDR. They have much less than a year to get this silicon into developers' hands... everything points to a fairly last-minute deal, it seems.

Cost could be an issue also, as Uttar pointed out. PS3 is going to be the most expensive machine next gen; if they can get similar performance with cheaper parts, then it probably is a good move.
 
So the consensus is Sony missed the boat this time around? They were aiming for something they couldn't manage, so went for the best compromise, taking Nvidia's expertise at the cost of needing a different memory type, and going with XDR despite its cost for the low latency?

Ideally, they would have liked a unified 512 MB of XDR?
 
They have about a year left... is that not long enough to adapt the GPU for XDR use?
A year? If they want to ship the system in spring, they are going to need a few million systems, which means they need to build up stock. Figure 4-5 months for that. And don't forget that while developers don't need final clock speeds, they do need final specs and to know how things will react. They can't just finish the RSX with XDR memory controllers 2 months before launch and expect to be fine.

I wouldn't be surprised if MS starts making systems in volume by July, and they may already be stockpiling some of the parts needed: RAM, hard drives, building the shells, and whatnot.
 