Observations, thoughts and questions about X360 and PS3

nAo said:
Unfortunately there are too many unknown quantities/features at this time...
I don't think we have the full picture yet.

Sounds to me like even the developers are being left in the dark on a lot until they get their final devkits in December.
 
ERP said:
Not saying this is or isn't possible but.....

You really have to look beyond bandwidth for these things. It's a requirement obviously, but for it to be practical to put the framebuffer in XDRam the destination blender in RSX would have to be able to absorb the additional latency. And that's simply not a known quantity.

As Deano said earlier, the speculation in these threads is interesting, but you're speculating with far from complete knowledge, and there is a tendency to get fixated on one or two technical numbers, coupled with quotes taken out of context.

True, I was tending to ignore latency. Although it did cross my mind that if XDR's behaviour in this regard is a little better than GDDR3's, it might help balance things out a little.

You could flip it over and have the framebuffer in GDDR3, but I was putting it in XDR because of the greater amount of bandwidth there. With my same figures above, there'd still be some GDDR3 bandwidth left over for other things, but it seems a little odd to me to have so little bandwidth going to such a relatively large pool of memory. That also assumes texture/vertex reads etc. are less bandwidth-sensitive and could sustain any greater penalty from going over to XDR.

But yeah, it is just playing with numbers for now. There are a number of factors that could render all this moot (hopefully in a good way ;)).
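For what it's worth, here's a rough back-of-the-envelope sketch of the kind of numbers being played with above. Every figure in it (720p, 32-bit colour, 32-bit Z, 60fps, the overdraw factor, blending on every fragment) is an assumption for illustration, not a known PS3 spec:

```c
#include <stdio.h>

int main(void)
{
    /* All figures below are illustrative assumptions, not confirmed specs. */
    const double width       = 1280.0;  /* 720p framebuffer                 */
    const double height      = 720.0;
    const double fps         = 60.0;
    const double overdraw    = 3.0;     /* assumed average depth complexity */
    const double color_bytes = 4.0;     /* 32-bit colour                    */
    const double z_bytes     = 4.0;     /* 32-bit depth                     */

    double frags_per_sec = width * height * fps * overdraw;

    /* Each fragment: read Z, write Z; with alpha blending assumed
       everywhere, colour is both read back and written too.        */
    double z_traffic     = frags_per_sec * z_bytes * 2.0;
    double color_traffic = frags_per_sec * color_bytes * 2.0;

    printf("Approx framebuffer traffic: %.1f GB/s\n",
           (z_traffic + color_traffic) / 1e9);
    /* ~2.7 GB/s at these modest figures; multisampling, fatter colour
       formats or heavier overdraw multiply this quickly, which is why
       where the framebuffer lives matters so much. */
    return 0;
}
```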
 
What's more latency sensitive, vertex/texture fetches or framebuffer read/writes? I mean, is it even necessary to read the framebuffer data that often unless you're doing lots of RTT or other environment-mapping ops? There was some prior speculation on how the memory should be partitioned in PS3, and I thought I remembered reading that GDDR3 would be better for the frame buffer, with XDR for texture/vertex data. PEACE.
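One way to frame the latency question: a memory interface only sustains its bandwidth if enough requests are kept in flight to cover the round-trip latency (Little's law). A quick illustrative calculation, with made-up bandwidth and latency figures since neither RSX's blender depth nor XDR's real-world latency is public:

```c
#include <stdio.h>

int main(void)
{
    /* Illustrative assumptions only, not measured figures. */
    const double bandwidth_gbs = 25.6;  /* assumed sustained bandwidth, GB/s */
    const double latency_ns    = 60.0;  /* assumed round-trip latency        */

    /* Little's law: bytes in flight = bandwidth x latency.
       GB/s * ns conveniently cancels to plain bytes.         */
    double in_flight = bandwidth_gbs * latency_ns;

    printf("Bytes that must be in flight: %.0f\n", in_flight);
    /* ~1536 bytes here, i.e. a couple dozen outstanding 64-byte
       requests. Texture fetches tolerate this well because many
       independent fetches can be queued ahead; a destination blender
       doing read-modify-write needs buffering on that order before a
       higher-latency (remote) framebuffer becomes practical. */
    return 0;
}
```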
 
MechanizedDeath said:
I mean, is it even necessary to read the framebuffer data that often unless you're doing lots of RTT or other environment-mapping ops?

The most consistent offender in terms of readbacks that crossed my mind was the z-buffer: every time you make a depth comparison, the current value in the z-buffer has to be read. Which is why I was saying earlier that reducing depth complexity could be very worthwhile, via the CPU and/or GPU (not too familiar with what the GPU offers here?).

There are probably other ops that require a decent amount of reading back, depending on what you're doing.
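To make the z-buffer readback point concrete, here's a software-rasterizer-style sketch of what happens per fragment (hypothetical code, just to show where the reads come from, not how RSX actually works):

```c
#include <stdint.h>
#include <stddef.h>

/* Per-fragment depth test, software-rasterizer style. The point is the
   zbuf[i] read on every covered fragment: depth complexity of N means
   roughly N z-reads per pixel, whether or not the fragment survives. */
static void depth_tested_write(uint32_t *color, float *zbuf, size_t i,
                               uint32_t src_color, float src_z)
{
    float dst_z = zbuf[i];        /* readback: happens for every fragment */
    if (src_z < dst_z) {          /* fragment is closer: it survives      */
        zbuf[i]  = src_z;         /* z write                              */
        color[i] = src_color;     /* colour write                         */
    }
    /* Rejected fragments still cost the read, which is why cutting depth
       complexity (rough front-to-back sorting, early-Z, a CPU-side
       occlusion pass) reduces framebuffer traffic directly. */
}
```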
 
Titanio said:
True, I was tending to ignore latency. Although it did cross my mind that if XDR's behaviour in this regard is a little better than GDDR3's, it might help balance things out a little.

You could flip it over and have the framebuffer in GDDR3, but I was putting it in XDR because of the greater amount of bandwidth there. With my same figures above, there'd still be some GDDR3 bandwidth left over for other things, but it seems a little odd to me to have so little bandwidth going to such a relatively large pool of memory. That also assumes texture/vertex reads etc. are less bandwidth-sensitive and could sustain any greater penalty from going over to XDR.

But yeah, it is just playing with numbers for now. There are a number of factors that could render all this moot (hopefully in a good way ;)).

I was thinking more along the lines of having the framebuffer in the VRAM and using a "locked" chunk of the XDR, say 64MB for example, for normal maps and other texture data. There could be a possibility of sending parts of the framebuffer over to Cell for post-processing; that's what I see as the advantage of NUMA, though UMA is easier to work with. How much of that is hype and what can actually be done is of course up in the air, but it's interesting to guess and speculate nonetheless.
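As a sketch of that NUMA idea, something along these lines: framebuffer in the VRAM pool, a reserved XDR region for texture data, and post-processing done by pulling framebuffer tiles across to the CPU side a piece at a time. The sizes, the tile loop and the memcpy "DMA" are all hypothetical stand-ins, not real PS3 APIs:

```c
#include <stdint.h>
#include <string.h>

#define FB_W  1280
#define FB_H  720
#define TILE  16   /* hypothetical tile size; divides both dimensions */

/* Hypothetical layout for the split described above: the framebuffer
   lives in the GDDR3/VRAM pool, while a "locked" XDR region (say 64MB)
   is reserved for normal maps and other texture data. */
static uint32_t vram_framebuffer[FB_W * FB_H];

/* Pull one framebuffer tile into local memory, post-process it there
   (a stand-in for an SPU working out of its local store), push it
   back. memcpy stands in for whatever DMA mechanism the hardware has. */
static void postprocess_tile(int tx, int ty)
{
    uint32_t local[TILE * TILE];

    for (int y = 0; y < TILE; ++y)           /* "DMA in", row by row  */
        memcpy(&local[y * TILE],
               &vram_framebuffer[(ty * TILE + y) * FB_W + tx * TILE],
               TILE * sizeof(uint32_t));

    for (int i = 0; i < TILE * TILE; ++i)    /* trivial stand-in effect */
        local[i] |= 0xFF000000u;

    for (int y = 0; y < TILE; ++y)           /* "DMA out", row by row */
        memcpy(&vram_framebuffer[(ty * TILE + y) * FB_W + tx * TILE],
               &local[y * TILE],
               TILE * sizeof(uint32_t));
}

int main(void)
{
    for (int ty = 0; ty < FB_H / TILE; ++ty)
        for (int tx = 0; tx < FB_W / TILE; ++tx)
            postprocess_tile(tx, ty);
    return 0;
}
```

Whether Cell could actually read the VRAM pool at a useful speed, or whether the buffer would have to be rendered into XDR in the first place, is exactly the kind of unknown being discussed above.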
 