An alternative PS3 for similar cost... a better PS3?

MBDF

OK, so I'm curious... why didn't Sony go with 512MB of XDR, scrap the GDDR3, and allow both Cell and RSX access to all 512MB of RAM?

Here are the benefits to this scenario:

RSX with a 128-bit connection to XDR (the same width as its GDDR3 bus) would have had 50 GB/s of available bandwidth, and to all 512MB of RAM at once.

How is this a bad thing? In fact, it would even free up bandwidth for Cell<>RSX FlexIO communication.
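For reference, a quick back-of-the-envelope on where the 50 GB/s figure comes from (assuming XDR's 3.2 Gbit/s per data pin, the rate the PS3's XDR actually runs at):

```python
# Rough bandwidth arithmetic for the proposed 128-bit XDR bus.
# Assumes XDR signalling at 3.2 Gbit/s per data pin (PS3-era XDR).

xdr_rate_gbps_per_pin = 3.2   # effective data rate per pin, Gbit/s
bus_width_bits = 128          # proposed width, double the PS3's 64-bit XDR bus

bandwidth_gbs = bus_width_bits * xdr_rate_gbps_per_pin / 8
print(f"128-bit XDR: {bandwidth_gbs:.1f} GB/s")  # -> 51.2 GB/s, i.e. the ~50 GB/s quoted
```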

How is it that I can think of this, and Sony can't, honestly?

So here's how it would break down:

[Attached diagram: ps39nq.gif, showing both Cell and RSX wired directly to a single 512MB XDR pool]


Let me know what you think... oh, and by the way, the cost increase according to Merrill Lynch would be negligible.
 
MBDF said:
How is it that I can think of this, and Sony can't, honestly?


LOL. Sony already get slated for releasing an expensive machine; that would have made it even more expensive! It's not like they haven't thought about it. They have, and in 0.02 seconds they realised it would cost too much.

Sometimes the easiest answer is the best.
 
Having RSX access memory through the EIB interface of Cell would:

A - permanently place a large load on the internal bus, stealing bandwidth from the SPEs.

B - add a lot of extra latency for memory accesses, screwing with many RSX operations and causing reduced performance, particularly when Cell itself uses the EIB heavily (rough numbers below the list).

C - as others have already pointed out, XDR memory is a lot more expensive than GDDR. ;)
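To put a rough number on point B: a rule of thumb (Little's law) says that sustaining a given bandwidth needs bandwidth × latency bytes in flight, so every extra hop of latency multiplies the buffering and outstanding requests RSX would need. The latency figures here are made up purely for illustration, not measured Cell/RSX numbers:

```python
# Little's law sketch: bytes that must be in flight to sustain a given
# bandwidth at a given round-trip latency. Latencies below are invented
# for illustration only.

def bytes_in_flight(bandwidth_gbs: float, latency_ns: float) -> float:
    return bandwidth_gbs * latency_ns  # GB/s * ns cancels to bytes

direct = bytes_in_flight(22.4, 100)   # hypothetical direct-attached latency
via_eib = bytes_in_flight(22.4, 300)  # hypothetical extra hops through Cell's EIB

print(f"direct: {direct:.0f} B in flight, via EIB: {via_eib:.0f} B in flight")
```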
 
Guden Oden said:
Having RSX access memory through the EIB interface of Cell would:
MBDF didn't say access through Cell, but direct access.

That'd need nVidia to add a FlexIO or other XDR bus to RSX, and to double up the access. As Panajev points out, you'd need 512 MB of RAM with 75 GB/s of access. You can't really get that at the pound/99c shop!

Sony could have gone with 2 pools of XDR at 25 GB/s each, giving them 3 GB/s more than the GDDR3 in the same configuration. Or a single 512MB pool at 50 GB/s as was suggested a long time ago, but it amounts to the same total BW either way, and as I understand it, it would have cost a lot more.
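Checking that arithmetic (assuming the shipping signal rates: XDR at 3.2 Gbit/s per pin, GDDR3 at 1.4 Gbit/s per pin):

```python
# Sanity check on the "3 GB/s more" claim, using PS3 shipping clocks.

xdr_pool_gbs = 64 * 3.2 / 8   # one 64-bit XDR pool  -> 25.6 GB/s
gddr3_gbs    = 128 * 1.4 / 8  # 128-bit GDDR3        -> 22.4 GB/s

print(f"XDR pool: {xdr_pool_gbs} GB/s, GDDR3: {gddr3_gbs} GB/s, "
      f"difference: {xdr_pool_gbs - gddr3_gbs:.1f} GB/s")
```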
 
MBDF, that diagram doesn't make sense. How can two chips be directly connected to the same memory?

I suppose something like this may be possible (but difficult and slow) if you had some sort of synchronization going on over FlexIO to prevent any simultaneous access, but I've never seen anyone try this with any hardware in any application. There is always a single arbiter controlling access to any pool of memory, whether you're talking about consoles, PCs, MP3 players, cell phones, etc.
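To illustrate the point in software terms, here's a toy round-robin arbiter. This is purely conceptual (real arbitration lives in the memory controller silicon), but it shows why two clients sharing one pool always funnel through a single grant point:

```python
# Toy round-robin arbiter: two clients (think Cell and RSX) both want the
# same memory pool, but only one request is granted per cycle. Purely
# conceptual -- real arbitration happens in memory controller hardware.

from collections import deque

class Arbiter:
    def __init__(self, clients):
        self.queues = {name: deque() for name in clients}
        self.order = list(clients)
        self.turn = 0

    def request(self, client, addr):
        self.queues[client].append(addr)

    def grant(self):
        """Grant at most one request per cycle, rotating between clients."""
        for _ in range(len(self.order)):
            client = self.order[self.turn]
            self.turn = (self.turn + 1) % len(self.order)
            if self.queues[client]:
                return client, self.queues[client].popleft()
        return None  # no pending requests this cycle

arb = Arbiter(["cell", "rsx"])
arb.request("cell", 0x1000)
arb.request("rsx", 0x2000)
print(arb.grant())  # ('cell', 4096)
print(arb.grant())  # ('rsx', 8192)
```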
 
There is a simpler answer: NV was only going to leverage existing GPU technology (one of the reasons MS went with ATI), and asking them to create a completely new memory interface for XDR and whatnot was not in the game plan.
 
Shifty Geezer said:
MBDF didn't say access through Cell, but direct access.
It can't have direct access any other way than through Cell, because XDR doesn't allow hooking up the same memory ICs to two different memory controllers (the image in the OP shows RSX having direct access to all memory; not direct access to one separate XDR pool). And making RSX the memory controller and slaving Cell off of it would totally screw up the CPU/SPUs latency-wise.

Besides, it would require re-engineering RSX to a substantial degree, and might cause a big drop in performance. NV's engineers know the performance characteristics of GDDR DRAM very well by now, while XDR is a blank sheet in comparison. The GF7800 hardware might not work optimally with the way XDR functions on a fundamental level.
 
Guden Oden said:
It can't have direct access any other way than through Cell, because XDR doesn't allow hooking up the same memory ICs to two different memory controllers (the image in the OP shows RSX having direct access to all memory; not direct access to one separate XDR pool).
Well yeah, MBDF doesn't appear to appreciate that. Or (s)he was talking about some tech I'm not aware of. For the sake of the topic, the diagram is 'broken' and won't work as it is. Hence it's not a design Sony would consider!

Regarding GPUs' memory characteristics, how sensitive are they? Are they really timed down to the clock? I just assumed they 'banged on' a memory interface to a GPU and ran them at synchronised clocks. The idea of designing the GPU around a memory source is one I hadn't thought would need consideration.
 
How about replacing the GDDR3 with XDR at 128-bit then? This would give you a massive 50 GB/s of bandwidth. More than double that of GDDR3, and at what? 5-10 bucks more?

PS2 had exclusively RDRAM, so I don't see the cost issue with all XDR.

The only hurdle I could see would be the aforementioned complications with G71 hardware, but hey... that can be tuned, can't it?

It would seem that XDR is superior to GDDR3 in every way.
 
MBDF said:
How about replacing the GDDR3 with XDR at 128-bit then? This would give you a massive 50 GB/s of bandwidth. More than double that of GDDR3, and at what? 5-10 bucks more?

PS2 had exclusively RDRAM, so I don't see the cost issue with all XDR.


The PS2 had only 32MB of RDRAM, which was on a 32-bit bus at 800MHz.

The PS3 uses 128MB of XDR on a 64-bit bus running at 3.2GHz.

You're saying the cost of moving to 128-bit XDR would only be an increase in the range of 5-10 bucks based on what?
 
inefficient said:
The PS3 uses 128MB of XDR on a 64-bit bus running at 3.2GHz.
256MB, and a 64-bit differential bus (so 128 data pins in all). A doubling of the bus would need 256 data pins, a significant increase...
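To put the numbers side by side (a quick sketch using the figures quoted above; bandwidth is just bus width × data rate ÷ 8):

```python
# Bus arithmetic behind the PS2-vs-PS3 comparison above.

ps2_rdram_gbs = 32 * 0.8 / 8   # 32-bit bus at 800 MHz -> 3.2 GB/s
ps3_xdr_gbs   = 64 * 3.2 / 8   # 64-bit bus at 3.2 GHz -> 25.6 GB/s

# XDR's data lines are differential: two pins per data bit.
pins_64bit  = 64 * 2           # 128 data pins, as shipped
pins_128bit = 128 * 2          # 256 data pins for the proposed doubling

print(f"PS2 RDRAM: {ps2_rdram_gbs} GB/s, PS3 XDR: {ps3_xdr_gbs} GB/s")
print(f"64-bit XDR: {pins_64bit} pins, 128-bit XDR: {pins_128bit} pins")
```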
 
Thread revival!!

What if Sony went with unified 512MB of GDDR3 memory like the 360? Would that have been feasible and a better solution? :)
 
Thread revival!!

What if Sony went with unified 512MB of GDDR3 memory like the 360? Would that have been feasible and a better solution? :)

Cell's FlexIO was designed around XDR iirc. Thus it would have had to be a design decision made much earlier in the process.

In theory there was nothing preventing a UMA design, but all the tradeoffs and design considerations would have needed to be weighed.
 
Cell's FlexIO was designed around XDR iirc. Thus it would have had to be a design decision made much earlier in the process.

In theory there was nothing preventing a UMA design, but all the tradeoffs and design considerations would have needed to be weighed.

I have heard that the RSX does have some access to the XDR as it is, right? It hasn't been discussed much on the board AFAIR, but I recall it was assumed that there is additional latency when the RSX accesses main memory. But then again, XDR is faster memory than GDDR3, so can it compensate for the expected latency?

I am kind of surprised that despite the lack of memory flexibility and the bigger OS footprint in the PS3, multiplatform games are extremely close, and some games like Uncharted 2 and 3 don't appear to be very limited by memory constraints. edit: So maybe their architectural decisions paid off in the end, and a unified memory wouldn't have made much difference?
 
Wouldn't higher-latency memory make it more difficult to balance the SPUs and provide them with new data to process? The PPU (if I got my TLAs correct) would have to be more accurate in predicting when additional data for the SPUs is needed. This burden, however, would be offset by the fact that it'd be easier to offload some work from RSX (post-processing, for example). But if you save money by not using XDR, you could go with a better GPU, so this would become a moot point.
 
Wouldn't higher-latency memory make it more difficult to balance the SPUs and provide them with new data to process? The PPU (if I got my TLAs correct) would have to be more accurate in predicting when additional data for the SPUs is needed.
Latency shouldn't affect SPUs as they have to prefetch data anyhow. Perhaps they'd need a little more cached data to hold a little more ahead of time - an extra 4 ms latency would need an extra 4 ms of work data to prevent the SPU idling. And the PPU isn't involved. The SPUs have memory autonomy via their memory flow controllers. The SPUs are complete processors in their own right.
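The standard SPU pattern being described here is double buffering: kick off the fetch of the next chunk before working on the current one, so the transfer hides behind the compute. A minimal conceptual sketch (plain Python standing in for the MFC's asynchronous DMA; none of these names are the actual Cell SDK API):

```python
# Conceptual double-buffering loop: prefetch chunk N+1 while "processing"
# chunk N. Stands in for an SPU's DMA-then-tag-wait pattern; dma_fetch and
# process are illustrative stand-ins, not real MFC calls.

def process(chunk):
    return sum(chunk)  # stand-in for real SPU work

def dma_fetch(data, i, chunk_size):
    """Stand-in for an async MFC DMA; returns chunk i (or None past the end)."""
    start = i * chunk_size
    chunk = data[start:start + chunk_size]
    return chunk or None

data = list(range(1024))
CHUNK = 128

buffers = [dma_fetch(data, 0, CHUNK), None]  # prime buffer 0 with chunk 0
i, current, total = 0, 0, 0
while buffers[current] is not None:
    # Start fetching chunk i+1 into the other buffer *before* processing
    # chunk i -- this overlap is what hides the memory latency.
    buffers[current ^ 1] = dma_fetch(data, i + 1, CHUNK)
    total += process(buffers[current])
    current ^= 1
    i += 1

print(total)  # 523776 == sum(range(1024)), all chunks processed
```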
 
That's pretty much what I was referring to: you've got to anticipate what the next data is and fetch it earlier if you've got higher-latency memory. I know that SPUs can read from shared memory by themselves, but I assumed that the PPU can enqueue DMA commands on their behalf and that advanced schedulers could use that to mask latency even further. Is that assumption incorrect?
 
A better design would have been more memory rather than unifying it. It sounds obvious, and yes it would cost more money, but many games could have looked better. It'd be interesting to know how much it would have cost to give it 512MB of VRAM.
 
Considering RSX and its GDDR3 were on an MCM, it would probably have cost a lot more than you realize. There simply was no room to put more chips in there, as 1Gbit GDDR3 didn't exist. You would have had to redesign the entire motherboard layout. The PS3 was late enough as it was, so "simply" adding more RAM would have delayed the device even further.
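A rough chip-count illustration (assuming 512 Mbit was the largest GDDR3 density available at the time, which is what "1Gbit GDDR3 didn't exist" implies):

```python
# Chip-count arithmetic: why 512 MB of VRAM was awkward without 1 Gbit parts.

chip_density_mbit = 512  # assumed largest GDDR3 part of the day

def chips_needed(vram_mb):
    return vram_mb * 8 // chip_density_mbit

print(chips_needed(256))  # 4 chips, roughly what shipped around RSX
print(chips_needed(512))  # 8 chips: double the packages to place and route
```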
 