PS3 XDR bandwidth?

patroclus02

Newcomer
Why is XDR so well considered by PS3 enthusiasts?
It is clocked at 3.2GHz, but I suppose it is 16 bits wide, so real bandwidth per chip would be 6.4GB/s. As Sony claims a system RAM bandwidth of 25.6GB/s, I suppose 4 chips will be used?

OK, GDDR3 is clocked at a much lower 700MHz, but is 64 bits wide, delivering 11.2 GB/s of bandwidth per channel. Two channels will give us 22.4GB/s.

Both memories are very close in bandwidth, especially considering that the maximum theoretical rate will not be reached in most situations.
 
Why is XDR so well considered by PS3 enthusiasts?
It is clocked at 3.2GHz, but I suppose it is 16 bits wide, so real bandwidth per chip would be 6.4GB/s. As Sony claims a system RAM bandwidth of 25.6GB/s, I suppose 4 chips will be used?

It transfers 8 bits per DRAM clock on each pin, which is where the 3.2GHz effective rate comes from. Across the 64-bit (8-byte) bus, that is 8 bytes x 3.2GHz = 25.6GB/s.
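If it helps, here's a quick sanity check of those numbers in Python (just a sketch; the chip count and bus widths are the ones quoted in this thread, not official figures):

# Sketch: verify the XDR and GDDR3 peak-bandwidth figures quoted above.
# Assumes 4 XDR chips with 16-bit interfaces at 3.2 GT/s effective, and
# GDDR3 at 700 MHz DDR on two 64-bit channels, per the posts in this thread.

def bandwidth_gb_s(transfers_per_sec, bus_width_bits):
    """Peak bandwidth in GB/s = transfer rate x bus width in bytes."""
    return transfers_per_sec * (bus_width_bits / 8) / 1e9

xdr = bandwidth_gb_s(3.2e9, 4 * 16)        # 4 chips x 16 bits = 64-bit bus
gddr3 = bandwidth_gb_s(0.7e9 * 2, 2 * 64)  # 700 MHz DDR, two 64-bit channels

print(f"XDR:   {xdr:.1f} GB/s")    # -> 25.6 GB/s
print(f"GDDR3: {gddr3:.1f} GB/s")  # -> 22.4 GB/s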

OK, GDDR3 is clocked at a much lower 700MHz, but is 64 bits wide, delivering 11.2 GB/s of bandwidth per channel. Two channels will give us 22.4GB/s.

Both memories are very close in bandwidth, especially considering that the maximum theoretical rate will not be reached in most situations.

I think the most important advantage of XDR in the PS3 is the FlexIO system it is a part of, but I'm not an expert on this. I would guess, though, that FlexIO and XDR together allow for very efficient data transfers between the SPEs, PPE and other components, and that this is the main reason for XDR being in the system.
 
FlexIO is only for external communications, i.e. between Cell and RSX, and between Cell and the southbridge chip. Internally, Cell uses its proprietary (and vastly faster) EIB to transfer data between the XDR and FlexIO controllers, the PPE core and the SPUs.

I would think Sony went with XDR for the same reason it picked DRDRAM last time for the PS2: it offers the best performance with the lowest pin count, and has a generally high 'wow' factor. Laying out XDR and DRDRAM memory traces on a PCB is a cinch compared to the convoluted spirals needed for standard DRAM types... I'd think Sony is just making things easy for itself, and since the PS2 must have been by far the biggest source of income for Rambus, I assume they got a good deal on the XDR license too.
 
I've consistently heard latency is much improved on XDR, but I don't remember any figures being provided to show the difference.
 
It was chosen for its low latency. Roughly 1/3 the latency of DDR and 1/5 that of DDR2.

Picture two internet connections that can both download files at roughly the same speed, but one lets you play online games with a 40ms ping, while the other gives you a ping of about 120-200ms.

PS3 will be the LPB.
 
Makes you wonder why neither ATI nor Nvidia chose XDR instead of GDDR4 for their next-generation products... :???:
 
GDDR4 has lower latency than XDR, I believe.

But GDDR4 has much higher latency than GDDR3 too.
And GPUs crave bandwidth; latency is not as much of a concern as it is with CPUs.

To me, the most likely reason to hold off on XDR in graphics cards is the royalty revenue to be paid to Rambus.
 
Makes you wonder why neither ATI nor Nvidia chose XDR instead of GDDR4 for their next-generation products... :???:

I'd say it might have something to do with the fact that GDDR3/4 are probably cheaper to get hold of than XDR1/2, and ATI/NV already have memory controllers designed with GDDR in mind. Even with the benefits of XDR, I'm not sure they are compelling enough to switch things around at this point -- XDR2 may have a chance, but it's looking pretty slim even if it could provide improvements in the end.

@Darkon, how is that possible? GDDR4 has seemingly atrocious latency -- if XDR has lower latency than DDR/DDR2 (and GDDR4 is quite a bit worse than those, from what I've gathered), then something doesn't add up. Who is wrong?
 
I'd say it might have something to do with the fact that GDDR3/4 are probably cheaper to get hold of than XDR1/2, and ATI/NV already have memory controllers designed with GDDR in mind. Even with the benefits of XDR, I'm not sure they are compelling enough to switch things around at this point -- XDR2 may have a chance, but it's looking pretty slim even if it could provide improvements in the end.

@Darkon, how is that possible? GDDR4 has seemingly atrocious latency -- if XDR has lower latency than DDR/DDR2 (and GDDR4 is quite a bit worse than those, from what I've gathered), then something doesn't add up. Who is wrong?

Yeah, I'm wrong.

The lowest latency of GDDR4 is 1.6 ns from what I gather,

and XDR DRAM at 1.25 ns.
 
Yeah, I'm wrong.

The lowest latency of GDDR4 is 1.6 ns from what I gather,

and XDR DRAM at 1.25 ns.

Outside of that (which is more inconsequential and more related to MHz speeds), there are cycle latencies on reads/writes, refreshes and all that jazz -- just looking at the speed rating on the chips won't tell you much... different architectures will have vastly different latencies (a 1.6ns GDDR3 chip and a 1.6ns GDDR4 chip are going to be quite different, as far as I know).

I'm wondering how XDR actually compares to other stuff out there; I've heard some people say it has crappy latency and others say it has really low latency... I dunno.
 
The lowest latency of GDDR4 is 1.6 ns from what I gather,

and XDR DRAM at 1.25 ns.
That's not latency, I'm willing to wager; that looks like a cycle period. There isn't even a commodity SRAM memory device to be found with a latency of 1.25ns...
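For what it's worth, here's what those figures imply if you treat them as cycle periods (a little Python sketch; reading them as clock periods rather than access latencies is my assumption):

# Sketch: treat the quoted ns figures as cycle periods, not access
# latencies, and see what clock frequency each one implies.

def period_to_mhz(period_ns):
    """Clock frequency in MHz implied by a cycle period in nanoseconds."""
    return 1e3 / period_ns

for name, ns in [("GDDR4", 1.6), ("XDR", 1.25)]:
    print(f"{name} @ {ns} ns -> {period_to_mhz(ns):.0f} MHz clock")

# GDDR4 @ 1.6 ns  -> 625 MHz
# XDR   @ 1.25 ns -> 800 MHz
# A real full access (row activate, CAS, etc.) takes tens of ns on any DRAM.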
 
I didn't think so, because RIMM Rambus memory had higher latency than DDR RAM.
That was largely due to the RIMM memory layout, it might be added. Since the memory bus was serial, with as many as 32 memory devices sitting one after the other like a string of pearls on the bus, the bus was very long (two double-sided RIMMs), and signals took a relatively long time to make it from the memory controller to the memory device and then back again.

PS2 never used RIMM-style DRDRAM, it had two memory channels with just one memory device per channel.
 
Why is XDR so well considered by PS3 enthusiasts?
It is clocked at 3.2GHz, but I suppose it is 16 bits wide, so real bandwidth per chip would be 6.4GB/s. As Sony claims a system RAM bandwidth of 25.6GB/s, I suppose 4 chips will be used?

OK, GDDR3 is clocked at a much lower 700MHz, but is 64 bits wide, delivering 11.2 GB/s of bandwidth per channel. Two channels will give us 22.4GB/s.

Both memories are very close in bandwidth, especially considering that the maximum theoretical rate will not be reached in most situations.

I think you're right about a 4 x 16-bit interface, which makes the bus 64 bits total. But to reach comparable speed GDDR3 needs 128 bits; that looks like a big advantage for XDR in this regard.
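Just to put numbers on that (a rough Python sketch using the effective transfer rates quoted in this thread):

# Sketch: bus width each memory type would need for ~25.6 GB/s peak,
# given the effective transfer rates quoted in this thread.

TARGET_GB_S = 25.6

def bits_needed(target_gb_s, gtransfers_per_s):
    """Bus width in bits needed to hit a target bandwidth at a given rate."""
    return target_gb_s * 8 / gtransfers_per_s

print(f"XDR   @ 3.2 GT/s: {bits_needed(TARGET_GB_S, 3.2):.0f} bits")  # 64
print(f"GDDR3 @ 1.4 GT/s: {bits_needed(TARGET_GB_S, 1.4):.0f} bits")  # ~146
# A 128-bit GDDR3 bus at 1.4 GT/s gets you 22.4 GB/s, as posted above.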
 
That was largely due to the RIMM memory layout, it might be added. Since the memory bus was serial, with as many as 32 memory devices sitting one after the other like a string of pearls on the bus, the bus was very long (two double-sided RIMMs), and signals took a relatively long time to make it from the memory controller to the memory device and then back again.

PS2 never used RIMM-style DRDRAM, it had two memory channels with just one memory device per channel.

You know the funny thing? That set-up (RDRAM) was criticized a lot, but in some ways FB-DIMM technology is not that MUCH different layout-wise. I know that when I first saw early articles talking about FB-DIMMs and the diagrams they presented, Direct Rambus DRAM and the related RIMMs came to mind almost instantly. There are key differences, but while XDR moved away from the completely serialized data bus approach of RDRAM and Direct RDRAM, FB-DIMMs seem to be a more advanced take (an AMB chip on each DIMM) on the serial shared data bus idea.
 
Well, there are similarities between RIMM and FB-DIMM, sure. But while a RIMM is sort of like that string of pearls, like I said, an FB-DIMM is more like a row of trees, with the buffering chip sitting serially on the memory bus and the discrete memory chips on each DIMM being the 'branches', so to speak. Or something like that anyway. :D

The topology isn't really all that similar, but they do share traits, as already mentioned...

FB-DIMM, by the way, probably isn't much of a performance contender from what I understand, and I read somewhere that power consumption is TERRIBLE. :p Seems a full bank of these DIMMs needs active cooling to not burn up (literally!)... Not that RIMMs were all that cool to the touch either, I might add. Yow! :D

And to think how far we've come from the C64's bunch of DRAM chips running at 1MHz, chained more or less directly to the CPU's data and address buses from what I understand (no separate memory controller in that old breadbox!), golly. It's only been around 22 years; my, how time flies. It doesn't FEEL that long ago! When I look at computer hardware from that era though, my skin simply crawls... How could we LIVE with primitive shit like that? :)
 
I think it should be noted that the GDDR-3 interface is 1.4 GHz *effective* clock in the same way that XDR is 3.2 GHz *effective* clock. GDDR-3 transfers 4 bits per clock relative to the DRAM, by using a 4-bit fetch width and a DDR (2 bits per clock) bus that is twice as fast as the DRAM clock speed. Meaning a 1.4 GTransfers/sec rate = 700 MHz bus clock (DDR) = 350 MHz DRAM clock. 1.4 GTransfers per second * 128-bit-wide bus = 22.4 GB/sec.

In XDR's case, it transfers 8 bits per cycle relative to the DRAM. This is done in a similar way, except the otherwise-DDR bus is 4x as fast as the DRAM clock instead of just twice as fast (and of course, the fetch width is 1 byte). So 3.2 GTransfers per second = 1.6 GHz bus clock = 400 MHz DRAM clock. 3.2 GTransfers/sec * 64-bit-wide bus = 25.6 GB/sec.
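Putting that arithmetic into a small Python sketch (the core clocks and prefetch factors are the ones from this post):

# Sketch of the effective-clock arithmetic described above: the effective
# transfer rate is the DRAM core clock times the bits moved per core clock.

def peak_bandwidth(core_clock_mhz, transfers_per_core_clock, bus_bits):
    """Returns (effective rate in GT/s, peak bandwidth in GB/s)."""
    rate_gt_s = core_clock_mhz * transfers_per_core_clock / 1e3
    return rate_gt_s, rate_gt_s * bus_bits / 8

# GDDR3: 350 MHz core, 4 bits per core clock per pin (2x bus clock, DDR)
gddr3_rate, gddr3_bw = peak_bandwidth(350, 4, 128)
# XDR: 400 MHz core, 8 bits per core clock per pin (4x bus clock, DDR)
xdr_rate, xdr_bw = peak_bandwidth(400, 8, 64)

print(f"GDDR3: {gddr3_rate:.1f} GT/s -> {gddr3_bw:.1f} GB/s")  # 1.4 -> 22.4
print(f"XDR:   {xdr_rate:.1f} GT/s -> {xdr_bw:.1f} GB/s")      # 3.2 -> 25.6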

That's not latency, I'm willing to wager; that looks like a cycle period. There isn't even a commodity SRAM memory device to be found with a latency of 1.25ns...
Basically. I think they list it on the site as "request latency", which more or less means that you only wait one DRAM cycle to make a new request (implying that it's fully pipelined).
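To illustrate the difference between that request interval and actual access latency, a toy model in Python (the 35 ns access latency is a made-up number for illustration, not an XDR spec):

# Toy model: a fully pipelined DRAM accepts a new request every cycle even
# though each individual access takes many cycles to complete.

REQUEST_INTERVAL_NS = 1.25  # one new request per DRAM cycle, as quoted above
ACCESS_LATENCY_NS = 35.0    # assumed end-to-end access latency (illustrative)

def finish_times(n_requests):
    """Completion times of n back-to-back pipelined requests, in ns."""
    return [i * REQUEST_INTERVAL_NS + ACCESS_LATENCY_NS for i in range(n_requests)]

# 8 requests complete at 35.0, 36.25, ..., 43.75 ns: throughput is set by
# the 1.25 ns request interval, while every request still waits ~35 ns.
print(finish_times(8))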
 