RAM latency in PS3, Xbox 360

Glonk

Regular
We all know the bandwidth differences by now.

But what I'm interested in is the latencies. From what's being said, XDR has really low latencies, while GDDR3 (as it's based on DDR2) has much higher latencies.

How does this play into Xenon's performance, given the rather small 1MB L2 cache shared by all 3 cores?

Also, does anyone have any actual latency figures for XDR vs GDDR3? From what I've read, it's ~3.5ns for XDR and ~10ns for GDDR3... can anyone confirm?
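If those figures are in the right ballpark, a quick conversion puts them in CPU-cycle terms. A minimal sketch in C, taking the rumoured 3.5ns/10ns numbers at face value and assuming a 3.2 GHz CPU clock:

    /* Back-of-envelope: convert the quoted device latencies (ns) into
       CPU cycles. 3.2 GHz is an assumed CPU clock; the 3.5 ns and
       10 ns figures are the unconfirmed numbers above. */
    #include <stdio.h>

    int main(void)
    {
        const double cpu_hz   = 3.2e9;  /* assumed CPU clock */
        const double xdr_ns   = 3.5;    /* rumoured XDR latency */
        const double gddr3_ns = 10.0;   /* rumoured GDDR3 latency */

        printf("XDR:   %.1f ns = %.0f CPU cycles\n", xdr_ns,   xdr_ns   * 1e-9 * cpu_hz);
        printf("GDDR3: %.1f ns = %.0f CPU cycles\n", gddr3_ns, gddr3_ns * 1e-9 * cpu_hz);
        return 0;
    }

That would be roughly 11 cycles vs 32 cycles at the chip itself; the full round trip through the caches, bus, and memory controller would obviously add a lot on top.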

Also, what exactly is the difference between DDR2 and GDDR3?
 
I've asked this question before. It seems no one has any answers regarding DDR and XDR latencies, other than 'XDR's faster'.
 
Well, I couldn't really find any comparable information... but here's some info straight from the creators. Not much use, but interesting regardless.

XDR

GDDR3
 
The Xbox 360 CPU is ~500 cycles away from main memory. I would assume Xenos is about the same or less.

For the PS3 I have no idea. Since we typically give bandwidth credit to RSX via FlexI/O to the XDR memory pool, I wonder what its latency to reach that pool is compared to the much closer GDDR3, and whether that would limit its usefulness in any way.
 
Rockster said:
For the PS3 I have no idea. Since we typically give bandwidth credit to RSX via FlexI/O to the XDR memory pool, I wonder what its latency to reach that pool is compared to the much closer GDDR3, and whether that would limit its usefulness in any way.

No doubt there is a latency increase going from RSX to the XDR. Perhaps lower latency on the XDR end helps offset that a little.

Not sure how that would impact things... I've heard it mentioned here recently that GPUs don't care as much about latency wrt memory access as they do about sheer bandwidth, but I'm not sure how accurate that is. Presumably some things are more latency-sensitive than others, but I'm sure it'll be possible to manage things intelligently so as to mask the presumably higher latency of XDR accesses from RSX (?)
 
They're comparable; GDDR is faster, but both are in the 500+ cycle range for a cache miss.
 
Why's Sony going with more expensive XDR then? Unless it's that XDR's clocked faster, so 500 cycles is less real-world time.

 
Wicked_Vengence said:
Sony's using XDR because the Cell uses an XDR controller for the processor.

The controller could easily be swapped for something else with only minimal reworking of the CPU -- they used XDR for a reason (be it a deal from Rambus or performance -- probably a bit of both).
 
I remember reading back in 2002 or 2003 that even Ken was concerned that Yellowstone might not be fast enough for their purposes. But if XDR's usable bandwidth is closer to its peak than GDDR3's is, as that article suggests, it bodes reasonably well for the main memory.
 
ninelven said:
Link

I don't know how valid the article is or how it pertains to the topic at hand, so please don't shoot the messenger.

EDIT: Victor Echevarria is RDRAM Product Manager for the Memory Interface Division at Rambus Inc. He joined Rambus in 2002 as a Systems Engineer. Prior to joining Rambus, Victor interned with Agilent Technologies, where he developed software for their high-speed digital sampling oscilloscopes. (at the end of the article)

The article certainly portrays XDR in a very nice light. Anyone care to comment? It seems as if it could have a very significant impact on the overall system performance.
 
As a follow-up question, does anyone know if it's possible to do simultaneous reads from the same main-memory location and simultaneous writes to different locations in main memory on the PS3?

What's the cost of copying memory from main memory to an SPE's local memory? I'm just worried about the situation where all seven SPEs need to fetch from or write to main memory. If simultaneous reads/writes are not possible, then I've got to wait 500*7 = 3500 cycles for the last SPE to get its memory... Or am I thinking about this wrong?
 
As I understand it, multi-channel XDR RAM is still being developed, so it looks like you might be right about that.
 
MechanizedDeath said:
colinisation said:
I believe Sony went with XDR because it has higher bandwidth per pin than GDDR.
Yeah, and fewer pins means less cost. I think this is why KK said they went with RDRAM for the PS2 as well. PEACE.
Only if the RAM doesn't cost more in the first place, though. I'm pretty sure there'll be more RAM chips in each box than Cell chips, so the savings on Cell will be eaten up by XDR's cost.

When we say latency is 500 cycles, is that CPU speed or RAM speed? In the PS3, XDR runs at an effective 3.2 GHz and GDDR3 at 700 MHz, or thereabouts, right? So if the 500-cycle latency is in RAM cycles, 500 GDDR3 cycles is nearly 2300 CPU cycles at 3.2 GHz, whereas XDR is clocked at CPU speed, so it stays 500 CPU cycles. Seeing as the transfer rates of high-clocked XDR and low-clocked GDDR3 are so similar, that high frequency for XDR must be doing something!
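Here's that conversion worked through, as a sketch under the same assumptions (3.2 GHz CPU, 700 MHz GDDR3 clock, XDR at an effective CPU-speed rate; none of these are confirmed specs):

    /* What "500 cycles" means depending on whose clock it's quoted in.
       Clock values are the thread's assumptions, not confirmed specs. */
    #include <stdio.h>

    int main(void)
    {
        const double cpu_hz = 3.2e9;   /* assumed CPU clock */
        const double ram_hz = 0.7e9;   /* assumed GDDR3 clock */
        const double cycles = 500.0;

        /* If quoted in RAM cycles, scale up to CPU cycles. */
        printf("500 GDDR3 cycles = %.0f CPU cycles (%.0f ns)\n",
               cycles * cpu_hz / ram_hz, cycles / ram_hz * 1e9);

        /* If the RAM runs at an effective CPU-speed rate, 500 cycles
           is just 500 cycles. */
        printf("500 CPU-speed cycles = %.0f ns\n", cycles / cpu_hz * 1e9);
        return 0;
    }

So the same '500 cycles' is ~714 ns if it's RAM cycles at 700 MHz, but only ~156 ns if it's CPU cycles, which is why it matters whose clock the devkit figure is quoted in.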
 
AlgebraicRing said:
What's the cost of copying memory from main memory to an SPE's local memory?
Please define 'cost'.
I'm just worried about the situation where all seven SPEs need to fetch from or write to main memory. If simultaneous reads/writes are not possible, then I've got to wait 500*7 = 3500 cycles for the last SPE to get its memory... Or am I thinking about this wrong?
Cell's memory controller can handle up to 128 simultaneous memory transactions in order to better hide/reduce memory latency, as more memory pages can be open at the same time.
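To illustrate how that hiding works from the SPE side, here's a minimal double-buffering sketch. It assumes the Cell SDK's MFC intrinsics from spu_mfcio.h (mfc_get, mfc_write_tag_mask, mfc_read_tag_status_all) and a hypothetical process() work function; each SPE's MFC queues its own transfers, so the seven SPEs don't line up behind one another, and the ~500-cycle fetch overlaps with work on the previous chunk:

    /* Double-buffered streaming on one SPE: fetch chunk i+1 while
       processing chunk i, so DMA latency is paid once up front rather
       than once per chunk. process() is a hypothetical work function. */
    #include <spu_mfcio.h>

    #define CHUNK 16384   /* 16 KB per DMA transfer */

    static char buf[2][CHUNK] __attribute__((aligned(128)));

    extern void process(char *data, unsigned size);

    void stream(unsigned long long ea, unsigned nchunks)
    {
        unsigned cur = 0, next, i;

        /* Kick off the first transfer on tag 0. */
        mfc_get(buf[0], ea, CHUNK, 0, 0, 0);

        for (i = 0; i < nchunks; ++i) {
            next = cur ^ 1;

            /* Start fetching the next chunk into the other buffer
               (tag = next) while we still hold the current one. */
            if (i + 1 < nchunks)
                mfc_get(buf[next],
                        ea + (i + 1) * (unsigned long long)CHUNK,
                        CHUNK, next, 0, 0);

            /* Block only on the current buffer's tag, then work on it
               while the next transfer is still in flight. */
            mfc_write_tag_mask(1 << cur);
            mfc_read_tag_status_all();
            process(buf[cur], CHUNK);

            cur = next;
        }
    }

With 128 transactions in flight at the controller, the worst case described above (seven SPEs serialising at 500 cycles each) shouldn't happen unless every SPE blocks synchronously on every transfer.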
 
Shifty Geezer said:
When we say latency is 500 cycles, is that CPU speed or RAM speed?

Memory latency in devkit specs is usually quoted in CPU cycles, i.e. how many processor cycles the CPU will be stalled.

The real benefit of XDR is that, given a certain bandwidth target, you can have fewer memory devices and pins because of the higher bandwidth per pin, and therefore lower hardware cost. (Which is counterbalanced by the increased cost of the memory devices, but I assume the economics work out; otherwise Sony made a mistake in their design.)
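To put rough numbers on the pin argument, here's a sketch using the commonly quoted per-pin data rates (3.2 Gbit/s for PS3's XDR, 1.4 Gbit/s for 700 MHz GDDR3) against a 25.6 GB/s main-memory target; both rates and the target are assumptions from figures floating around this thread:

    /* Rough data-pin count needed to hit a bandwidth target at a given
       per-pin rate. Rates are the commonly quoted figures; real buses
       round up to standard widths. */
    #include <stdio.h>

    int main(void)
    {
        const double target_gb_s   = 25.6;  /* bandwidth target, GB/s */
        const double xdr_gbit_pin  = 3.2;   /* XDR, Gbit/s per pin */
        const double gddr_gbit_pin = 1.4;   /* GDDR3 @ 700 MHz, Gbit/s per pin */

        printf("XDR:   %.0f data pins\n", target_gb_s * 8.0 / xdr_gbit_pin);
        printf("GDDR3: %.0f data pins\n", target_gb_s * 8.0 / gddr_gbit_pin);
        return 0;
    }

That's a 64-bit XDR bus versus something like 146 GDDR3 data pins (in practice a 160-bit or wider bus) for the same bandwidth, which is where the saving on the CPU package and board comes from.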
 