I have a few questions about PS3/Xbox 360 RAM

They both have 512 megs.

The 360 shares the 512 between the CPU and the GPU, but the PS3 apparently splits its RAM: 256 for the Cell and 256 for the RSX... but I've also heard that the RSX can use the Cell's RAM and the Cell can use the RSX's RAM if needed. Are there penalties for this?

What are the pros and cons for each approach?
 
Why did the developer from Bethesda (I think) say the 360 had "nicer" RAM?

Anyone know what he was talking about? I thought the PS3 had the faster RAM?

I'm also interested in the benefits of unified memory vs. non-unified.

Non-unified gives you great overall bandwidth, but why not just make 4 banks of 128MB and have 100GB/s of bandwidth (25GB/s x 4)? There must be inherent drawbacks.
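(Just spelling out the arithmetic in that hypothetical, alongside the commonly quoted PS3 split figures of 25.6GB/s XDR + 22.4GB/s GDDR3:)

/* Back-of-envelope check: the 25GB/s-per-bank figure is the
 * hypothetical from the question above; 25.6 + 22.4 GB/s is the
 * commonly quoted PS3 split config. */
#include <stdio.h>

int main(void) {
    double hypothetical = 4 * 25.0;    /* 4 banks x 25GB/s = 100GB/s */
    double ps3_split    = 25.6 + 22.4; /* XDR + GDDR3 = 48GB/s aggregate */
    printf("hypothetical 4-bank: %.1f GB/s\n", hypothetical);
    printf("PS3 split pools:     %.1f GB/s\n", ps3_split);
    return 0;
}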
 
If I had my cynical hat on I'd say that the RAM on PS3 was split because the RSX was just hung off the end at the last minute.

There will be a penalty (mostly latency) for requesting memory from the other pool. The advantage is more aggregate bandwidth, although it remains to be seen if RSX can successfully hide the additional latency for, say, texture fetches.
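To make the penalty concrete, here's a minimal sketch of how placement might look on a split-pool machine; alloc_in_pool() and the pool names are made up for illustration, not any real SDK API:

/* Hypothetical sketch of data placement on a split-pool (NUMA-style)
 * console. alloc_in_pool() and the pool names are invented for
 * illustration; they are not a real SDK API. */
#include <stdio.h>
#include <stdlib.h>

typedef enum { POOL_XDR, POOL_GDDR3 } pool_t;

/* Stub that just tags the allocation; a real runtime would place the
 * memory physically in the requested pool. */
static void *alloc_in_pool(pool_t pool, size_t bytes) {
    printf("alloc %zu bytes in %s\n", bytes,
           pool == POOL_XDR ? "XDR (CPU-local)" : "GDDR3 (GPU-local)");
    return malloc(bytes);
}

int main(void) {
    /* Keep CPU-hot data in the CPU's pool and GPU-hot data in the
     * GPU's pool, so neither chip pays the cross-pool latency on
     * its hot path. */
    void *game_state = alloc_in_pool(POOL_XDR,   (size_t)8  * 1024 * 1024);
    void *textures   = alloc_in_pool(POOL_GDDR3, (size_t)64 * 1024 * 1024);
    free(game_state);
    free(textures);
    return 0;
}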

As for the Bethesda comment, remember bandwidth isn't everything.
 
ERP said:
If I had my cynical hat on I'd say that the RAM on PS3 was split because the RSX was just hung off the end at the last minute.
This is not being cynical, this is being truthful ;)

although it remains to be seen if RSX can successfully hide the additional latency for, say, texture fetches.
or framebuffer reads/writes.
Let's hope Nvidia customized the G70 to handle much higher latencies...
 
scooby_dooby said:
I'm also interested in the benefits of unified memory vs. non-unified.
It's NUMA vs. UMA (.pdf), to be exact.

scooby_dooby said:
Why did the developer from Bethesda (I think) say the 360 had "nicer" RAM?
He probably had a crush on the X360 RAM, or something.

More seriously, UMA is easier to work with than NUMA from a developer standpoint, since you only have one memory pool.
 
The reason for the split in the PS3 is the Cell's FlexIO controller. It is based on the Rambus XDR controller and can only support 4x512Mb (note: that's bits, or 64MB per device) for a total of 256MB of XDR RAM.
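(Spelling out the megabit/megabyte arithmetic in that claim; whether the four-device limit is real is disputed just below:)

/* The Mb (megabit) vs MB (megabyte) arithmetic in the claim above;
 * whether the 4-device limit actually exists is disputed below. */
#include <stdio.h>

int main(void) {
    int devices        = 4;
    int mbit_per_chip  = 512;               /* 512 Mbit per XDR device */
    int mbyte_per_chip = mbit_per_chip / 8; /* 512 Mb / 8 bits = 64 MB */
    printf("%d x %d MB = %d MB total\n",
           devices, mbyte_per_chip, devices * mbyte_per_chip); /* 256 MB */
    return 0;
}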
 
Wicked_Vengence said:
The reason for the split in the PS3 is the Cell's FlexIO controller. It is based on the Rambus XDR controller and can only support 4x512Mb (note: that's bits, or 64MB per device) for a total of 256MB of XDR RAM.
That's an urban legend (thanks to one article writer...).
Cell can address 4GB of XDR.
 
Wicked_Vengence said:
and can only support 4x512Mb (note: that's bits, or 64MB per device) for a total of 256MB of XDR RAM.
If you believe that, you must also believe the dual Cell workstation mobo shots IBM released a while back are complete fakes. :rolleyes:
 
Cons: more latency for each chip when accessing the "other's" half; a little less dev-friendly than UMA

Pros: more bandwidth; less bus contention (?); faster access for the CPU to its own half vs. going through a GPU memory controller (presumably?)
 
ERP said:
If I had my cynical hat on I'd say that the RAM on PS3 was split because the RSX was just hung off the end at the last minute.
Makes me wonder, though, how much bandwidth the PS3 would have if MS went with 256 MB of RAM and Sony didn't go with nVidia.
 
Makes me wonder, though, how much bandwidth the PS3 would have if MS went with 256 MB of RAM and Sony didn't go with nVidia.
Last I heard it was 64MB of eDRAM on GPU.
There's also some horror rumours related to that GPU part though, so who knows what could have happened...

Personally I think the current memory config reflects the fact that the GPU is a redesign of an existing part. Maybe more time would have afforded all XDR instead of two RAM types, but it would still be in basically the same config.
I gather that for something more radically different they'd need a GPU design that was built for it, not a modification of an existing part.

nAo has some cool ideas for how they should spend the Video decoder transistors though :p
 
Fafalada said:
Last I heard it was 64MB of eDRAM on GPU.

:oops:

edit: oh hey... that's enough for 1080p with 4x MSAA.... (using Dave's example calculations/table from his Xenos article)
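(A rough version of that back-of-envelope, assuming 32-bit colour + 32-bit Z per sample and no compression; that's my assumption, not necessarily the exact figures in Dave's table:)

/* 1080p with 4x MSAA, assuming 32-bit colour + 32-bit Z per sample
 * and no compression (my assumption, not necessarily the exact
 * per-sample cost used in Dave's Xenos table). */
#include <stdio.h>

int main(void) {
    long pixels  = 1920L * 1080L;     /* 2,073,600 pixels       */
    long samples = pixels * 4;        /* 4x MSAA                 */
    long bytes   = samples * (4 + 4); /* 4B colour + 4B Z/sample */
    printf("%.1f MiB\n", bytes / (1024.0 * 1024.0)); /* ~63.3 MiB < 64 MiB */
    return 0;
}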

There's also some horror rumours related to that GPU part though, so who knows what could have happened...

Mind sharing or pointing me in the direction of those rumours if they're on the internet? :D
 
Fafalada said:
Last I heard it was 64MB of eDRAM on GPU.
That would be quite funny given Sony's rhetoric about eDRAM since the RSX was announced. :p

Actually, 64 MB of eDRAM sounds pretty expensive. Really expensive.
 
I assumed they used the XDR for the Cell because of the lower latency, and the Cell needed that more than bandwidth (though it provides a lot of bandwidth too).

The GPUs don't really care about latency, however, and they just stuck GDDR RAM on it since the G70 already has a bus for that and GDDR RAM is cheaper.

What will be best in the end, I dunno.
 
Wasn't Cell built around that XDR interface? I assume it would have had to have its own XDR pool regardless. Isn't the lower latency of XDR supposed to be beneficial for feeding the numerous SPE memory fetches?

There's no reason RSX couldn't have a 256-bit bus to bump up its bandwidth. The part it's based on already has one. But Dave suggested the narrower bus might be to aid future die shrinks. I still don't buy that RSX was a last-minute addition, though nAo and others might have more concrete insider info on this. But it's not shared beyond the past discussions that surfaced with the NVidia announcement late last year.
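(The rough math behind the 256-bit remark, using the commonly quoted 700MHz / 1.4GT/s effective GDDR3 figure for RSX:)

/* Bus-width bandwidth math behind the 256-bit remark. The 1.4GT/s
 * effective GDDR3 rate is the commonly quoted RSX spec, not an
 * official confirmed figure. */
#include <stdio.h>

int main(void) {
    double gtps  = 1.4;                /* GDDR3 transfers/sec (x10^9)    */
    double bw128 = gtps * (128 / 8);   /* 16 bytes/transfer = 22.4 GB/s  */
    double bw256 = gtps * (256 / 8);   /* 32 bytes/transfer = 44.8 GB/s  */
    printf("128-bit: %.1f GB/s, 256-bit: %.1f GB/s\n", bw128, bw256);
    return 0;
}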

My question is, weren't Sony and NVidia already working on the dev software side of things prior to the RSX announcement? If NVidia really was doing work for Sony at that time (Cg shaders?), they could conceivably have proposed their design as a superior alternative to whatever Sony/Toshiba may have been cooking up at the time. PEACE.
 
I still don't buy that RSX was a last-minute addition, though nAo and others might have more concrete insider info on this. But it's not shared beyond the past discussions that surfaced with the NVidia announcement late last year.

I've always read that, back in December, Nvidia said it had been working on the RSX for about 18 months. So when the PS3 comes out, the RSX will have been in R&D for more than 2 1/2 years. Do you guys think Nvidia is lying about this?
 
On the RSX: it may be that the "work" was really just design specifications to which the RSX was built. Other competing designs (Toshiba's) may have been built to those specifications. The RSX part may not have been the "final" choice until recently, however.
 
blakjedi said:
On the RSX: it may be that the "work" was really just design specifications to which the RSX was built. Other competing designs (Toshiba's) may have been built to those specifications. The RSX part may not have been the "final" choice until recently, however.

Yeah, but wouldn't we need evidence to show that this is true before we just assume it? Aren't you just assuming that the choice was made recently? I view "recently" as early this year. Is that what you mean when you say recently?
 