PS3's CPU, GPU, RAM and eDRAM configuration?

So which option would be good for PS3, A, B, C or D?

  • B
  • C
  • D
  • Other

  Total voters: 71
version said:
What do you mean, this?

[attached image: sp.JPG]

Basically that would be.... D.

If the two 256MB pools were one single 512MB pool, it would be E. And much better, if you ask me, because then the developer would have a choice over how much memory is used for graphics and how much for game data.
 
london-boy said:
Jaws said:
Then what your describing is 'D.' and not 'C.' then...this is what I don't get about 'E.'...it's like a hybrid 'C.' and 'D.' but I can't see how?

:D You said it was C!
It's not even D. It could be D if the developers decided to reserve 256MB of the main pool for the CPU and the rest for the GPU.
E is more flexible: you're not bound by D's 256MB limit for either processor. Want to use more than 256MB for graphics data? Just cut down on the game data.

See the difference?

In the end, it's very unlikely we'll get 512MB, so it will probably just be E but with the single pool at 256MB instead of 512. ;)

This is why I asked the question of whether 'E.' is even possible... remember, two-way comms between CPU<==>GPU for all the cases above; sorry if this wasn't clear, but that's what the arrows mean... :D ...In that case, what you're describing is 'D.', no?
 
Jaws said:
This is why I asked the question of whether 'E.' is even possible... remember, two-way comms between CPU<==>GPU for all the cases above; sorry if this wasn't clear, but that's what the arrows mean... :D ...In that case, what you're describing is 'D.', no?

See above. It's a slight difference.
 
london-boy said:
Jaws said:
This is why I asked the question of whether 'E.' is even possible... remember, two-way comms between CPU<==>GPU for all the cases above; sorry if this wasn't clear, but that's what the arrows mean... :D ...In that case, what you're describing is 'D.', no?

See above. It's a slight difference.

Okay, above is D, right? ...I can't see how 'E.' is possible unless there's only one 'XIO' with memory hanging off it, right?
 
Jaws said:
london-boy said:
Jaws said:
This is why I asked the question of whether 'E.' is even possible... remember, two-way comms between CPU<==>GPU for all the cases above; sorry if this wasn't clear, but that's what the arrows mean... :D ...In that case, what you're describing is 'D.', no?

See above. It's a slight difference.

Okay, above is D, right? ...I can't see how 'E.' is possible unless there's only one 'XIO' with memory hanging off it, right?


... It's quite simple really.
On D, the maximum graphics data you can have is 256MB. If you need more, you'll have to dip into the CPU pool, potentially slowing things down (since the data would have to be DMAed through the CPU, which is the only link between the GPU and the CPU's 256MB pool).

On E, both the CPU and the GPU can access the whole 512MB of RAM arbitrarily. So if there is more than 256MB of graphics data, accessing it won't slow anything down.
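As a rough way to see the difference, here's a toy sketch; the 256MB/512MB figures are this thread's speculation, not specs, and the function names are made up for illustration:

```python
# Hypothetical sketch (not real PS3 code): how a graphics allocation request
# is served under layout D (two fixed 256MB pools) vs E (one 512MB pool).

def graphics_headroom(layout: str, gfx_request_mb: int) -> dict:
    """Return how much of a graphics request fits locally and how much spills."""
    if layout == "D":
        local = min(gfx_request_mb, 256)           # GPU's own fixed pool
        spill = gfx_request_mb - local             # must be DMAed via the CPU
        return {"local_mb": local, "spill_mb": spill}
    if layout == "E":
        local = min(gfx_request_mb, 512)           # one pool, no fixed split
        return {"local_mb": local, "spill_mb": 0}  # game data just shrinks
    raise ValueError(f"unknown layout {layout!r}")

print(graphics_headroom("D", 320))  # 256 local, 64 spilled through the CPU
print(graphics_headroom("E", 320))  # all 320 served from the shared pool
```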
 
london-boy said:
Jaws said:
london-boy said:
Jaws said:
This is why I asked the question of whether 'E.' is even possible... remember, two-way comms between CPU<==>GPU for all the cases above; sorry if this wasn't clear, but that's what the arrows mean... :D ...In that case, what you're describing is 'D.', no?

See above. It's a slight difference.

Okay, above is D, right? ...I can't see how 'E.' is possible unless there's only one 'XIO' with memory hanging off it, right?

...

On E, both the CPU and the GPU can access the whole 512MB of RAM arbitrarily. So if there is more than 256MB of graphics data, accessing it won't slow anything down.

Okay, where is the 'XIO' for the memory located?
 
london-boy said:
Jaws said:
Okay, where is the 'XIO' for the memory located?

Asking me? We're just speculating on the different configurations PS3 might have, not on how they are actually manufactured ;)

This is not wild speculation! :D ...These are high-level possibilities and all are feasible designs. This is why I don't understand how E is possible, and why I asked the question.

The XIO is the memory interface designed by Rambus, so if the CPU or GPU is going to have RDRAM, it will hang off an XIO or similar interface.

I could see an E type in the form of a CELL die, where CELL = CPU + GPU. The XIO would be on the CELL and the shared memory hanging off it would describe E, no?

Otherwise E is either a C or a D depending on how you define it, no? :D
 
Jaws said:
This is not wild speculation! :D ...These are high-level possibilities and all are feasible designs. This is why I don't understand how E is possible, and why I asked the question.

The XIO is the memory interface designed by Rambus, so if the CPU or GPU is going to have RDRAM, it will hang off an XIO or similar interface.

I could see an E type in the form of a CELL die, where CELL = CPU + GPU. The XIO would be on the CELL and the shared memory hanging off it would describe E, no?

Otherwise E is either a C or a D depending on how you define it, no? :D

Well, I'm not too knowledgeable on this XIO business, but having one bus to the RAM from the CPU and one bus to the same pool from the GPU is feasible, unless I'm missing something.
D has one pool with one bus to the CPU, and a separate pool with its own bus to the GPU.
C has one pool with one bus to the CPU, and texture data needs to go through the CPU.
E has one pool and two buses, one for the CPU and one for the GPU. If it's feasible, this should be the best option.
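The three layouts being compared can be written out as pool/bus tables; this is a toy sketch where the pool sizes and names are illustrative assumptions, not confirmed specs:

```python
# Hypothetical sketch: configurations C, D and E as pool/bus tables.
# Sizes in MB are this thread's speculation, not real PS3 numbers.

CONFIGS = {
    # C: one pool, one bus to the CPU; texture data must route through the CPU.
    "C": {"pools_mb": {"main": 256},
          "buses": [("CPU", "main")]},
    # D: two private pools, one bus each.
    "D": {"pools_mb": {"cpu": 256, "gpu": 256},
          "buses": [("CPU", "cpu"), ("GPU", "gpu")]},
    # E: one shared pool, two buses.
    "E": {"pools_mb": {"shared": 512},
          "buses": [("CPU", "shared"), ("GPU", "shared")]},
}

def gpu_has_direct_bus(cfg: dict) -> bool:
    """True when the GPU reaches some pool without routing through the CPU."""
    return any(client == "GPU" for client, _pool in cfg["buses"])

for name, cfg in CONFIGS.items():
    print(name, "-> GPU direct bus:", gpu_has_direct_bus(cfg))
```

In this framing, C is the only layout where every texture fetch crosses the CPU, which is exactly the objection raised above.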
 
london-boy said:
Jaws said:
This is not wild speculation! :D ...These are high-level possibilities and all are feasible designs. This is why I don't understand how E is possible, and why I asked the question.

The XIO is the memory interface designed by Rambus, so if the CPU or GPU is going to have RDRAM, it will hang off an XIO or similar interface.

I could see an E type in the form of a CELL die, where CELL = CPU + GPU. The XIO would be on the CELL and the shared memory hanging off it would describe E, no?

Otherwise E is either a C or a D depending on how you define it, no? :D

Well, I'm not too knowledgeable on this XIO business, but having one bus to the RAM from the CPU and one bus to the same pool from the GPU is feasible, unless I'm missing something.
D has one pool with one bus to the CPU, and a separate pool with its own bus to the GPU.
C has one pool with one bus to the CPU, and texture data needs to go through the CPU.
E has one pool and two buses, one for the CPU and one for the GPU. If it's feasible, this should be the best option.

Okay, it's this aspect I don't think is feasible, unless it's a CELL = CPU+GPU type device as described above. Hence my original question on E... But if anyone else can elaborate on how E might be possible, feel free...

Sorry, It's Friday and I was bored! :D
 
Jaws said:
Okay, it's this aspect I don't think is feasible, unless it's a CELL = CPU+GPU type device as described above. Hence my original question on E... But if anyone else can elaborate on how E might be possible, feel free...

Sorry, It's Friday and I was bored! :D

You're bored? You don't know what boredom is. :D

Anyway, I think if E can be done, it should be the best option. If it can't, then D is the best. The GPU needs to be able to access *a* pool of memory, whatever pool it is (either a big shared one or its own separate VRAM).
 
Jaws said:
But if anyone else could elaborate on how E might be possible then feel free...

You'd need an arbiter for the main memory (DRAM devices are dumb), i.e. a northbridge.

And it would be slower.

The NUMA configs are the fastest: there's no bandwidth penalty going to the other IC's DRAM, because the bisection bandwidth of the system (the Flex I/O) is high enough to sustain inter-IC communication at full (main-memory bandwidth) throttle, and there's only a slight latency penalty when addressing the other IC's DRAM.
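That trade-off can be put in numbers with a toy model; every figure below (latencies, bandwidths) is an invented placeholder purely to illustrate the shape of the argument, not a measured or published spec:

```python
# Hypothetical sketch of the NUMA point above: a remote access over a fast
# inter-IC link pays a small latency hop but keeps full bandwidth, as long
# as the link's bisection bandwidth matches main-memory bandwidth.
# All numbers are illustrative assumptions.

LOCAL_LATENCY_NS = 50    # assumed DRAM access latency
LINK_HOP_NS = 20         # assumed extra hop over the inter-IC link
MEM_BW_GBPS = 25.6       # assumed per-pool memory bandwidth
LINK_BW_GBPS = 35.0      # assumed inter-IC bisection bandwidth (> memory BW)

def access_cost(remote: bool) -> tuple:
    """Return (latency_ns, sustained_bw_GBps) for a local or remote access."""
    latency = LOCAL_LATENCY_NS + (LINK_HOP_NS if remote else 0)
    # On the remote path, throughput is capped by the slower of memory and link.
    bw = min(MEM_BW_GBPS, LINK_BW_GBPS) if remote else MEM_BW_GBPS
    return latency, bw

print(access_cost(False))  # local:  (50, 25.6)
print(access_cost(True))   # remote: (70, 25.6) - slower to start, same throughput
```

The conclusion only holds while the link out-runs memory; if `LINK_BW_GBPS` dropped below `MEM_BW_GBPS`, remote accesses would lose bandwidth too.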

Cheers
Gubbi
 
london-boy said:
Jaws said:
Okay, it's this aspect I don't think is feasible, unless it's a CELL = CPU+GPU type device as described above. Hence my original question on E... But if anyone else can elaborate on how E might be possible, feel free...

Sorry, It's Friday and I was bored! :D

You're bored? You don't know what boredom is. :D

Anyway, I think if E can be done, it should be the best option. If it can't, then D is the best. The GPU needs to be able to access *a* pool of memory, whatever pool it is (either a big shared one or its own separate VRAM).

Bored my arse! ...Your boredom is relieved by your post count here! :D

I agree E is the best option, IMO... with a CELL chip (CELL = CPU+GPU) as described above with memory hanging off the XIO. And better still, scaling with multiple chips for the required performance. 8)

However, in my poll I've assumed two distinct ICs,

i.e. a CELL CPU IC and a NV-SONY GPU IC.

Strangely, in my Ars thread, Hannibal posted this,

Hannibal said:
...
Also, I've seen Moab here and elsewhere talking up the idea that the SPEs aren't for use in rendering. In this he is most assuredly wrong. As Scott Wasson at TR has pointed out more than once, the SPEs are essentially pixel shaders and they will be used for the rendering pipeline. Furthermore, IBM themselves stated in the presentation that they consider the CELL to be a combination of a CPU and GPU. The IBM rep also answered a question about using these for rendering and discussed the fact that SPE peer-to-peer communication over the EIB, in combination with local storage, means that you can flexibly assign different SPEs to different parts of the rendering pipeline.
...

http://www.beyond3d.com/forum/viewtopic.php?p=460025#460025

Now, I'm still waiting for Hannibal to clarify this... :? ...But I doubt this will be the form of the PS3...
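The "flexibly assign different SPEs to different parts of the rendering pipeline" idea from Hannibal's quote can be sketched as a simple mapping; the stage names and the eight-SPE count here are illustrative assumptions, not anything from the presentation:

```python
# Hypothetical sketch: SPE-like workers flexibly assigned to rendering
# pipeline stages, as described in the quote above. Stage names and the
# 8-worker count are illustrative assumptions.

from collections import Counter

def assign_spes(stages: list) -> dict:
    """Map SPE index -> pipeline stage; could be rebalanced per frame or scene."""
    return {spe: stage for spe, stage in enumerate(stages)}

# A shader-heavy split: 4 SPEs shading, 2 on geometry, 1 each for the rest.
plan = assign_spes(["geometry", "geometry", "rasterise",
                    "shade", "shade", "shade", "shade", "compose"])
print(Counter(plan.values()))
```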
 
No, I'm pretty sure there will be a CPU and a separate GPU.
Cell is already big as it is; making a single chip with one or more Cells AND whatever NVIDIA comes out with is a bit much...
 