RSX architecture and cost...

Well, my arguments don't have enough substance in them to sustain a long-lasting discussion,
but I'm looking for opinions so:

1. Would it have been viable for Sony to have Nvidia design a 256-bit bus to the GDDR3 while keeping that 128-bit to the XDR? How expensive would it have been?
2. In your opinion, would it have been wiser to have a 512 MB pool of GDDR3 RAM for both the CELL and RSX, with the inclusion of a 256-bit bus?

Thanks in advance, ;)
 
1. Yes, but too expensive.
2. Maybe, but RSX would have needed some eDRAM for framebuffer operations.
It could have been painful to do all the texturing from main RAM through the Cell => more latencies to deal with.
And probably worse CPU performance, as latencies are higher with GDDR3 than with XDR.
 
1. Would it have been viable for Sony to have Nvidia design a 256-bit bus to the GDDR3 while keeping that 128-bit to the XDR? How expensive would it have been?
2. In your opinion, would it have been wiser to have a 512 MB pool of GDDR3 RAM for both the CELL and RSX, with the inclusion of a 256-bit bus?
1 -- RSX's link to XDR isn't 128-bit and never has been; every case where someone said it was 128-bit is someone assuming that its bandwidth implies it must be 128-bit, because other bus links in the same general bandwidth range are that wide. CELL's XDR bus is 64-bit, and the FlexIO link is also 64-bit (that's 8 lanes, 8 bits each, at about 5 GB/sec each, or so it would seem)... RSX gets 7 of those 8 lanes, making for a 56-bit bus (and that's both directions put together).
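Just to put the arithmetic from those figures in one place (these are the numbers quoted above, not independent specs), a quick sketch:

#include <stdio.h>

int main(void) {
    /* Figures from the post above -- the poster's numbers, not official specs. */
    int lanes_total      = 8;    /* FlexIO lanes                               */
    int bits_per_lane    = 8;    /* bits per lane                              */
    int lanes_to_rsx     = 7;    /* lanes routed to RSX                        */
    double gb_s_per_lane = 5.0;  /* ~5 GB/s per lane, both directions combined */

    printf("Full FlexIO width: %d bits\n", lanes_total * bits_per_lane);  /* 64  */
    printf("RSX link width:    %d bits\n", lanes_to_rsx * bits_per_lane); /* 56  */
    printf("RSX link rate:     ~%.0f GB/s combined\n",
           lanes_to_rsx * gb_s_per_lane);                                 /* ~35 */
    return 0;
}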

As for whether it would have been viable: possible, yes; cheap, no. Especially if you assume that RSX was meant to have that GDDR-3 on the same package as the GPU... because of the way the components are manufactured, a wider GDDR-3 bus usually means more DRAM devices, which is invariably far more expensive than RSX's core, even if each DRAM is lower capacity.

2 -- No, never. Latency is the bane of all performance; GDDR-3 is high latency, and putting it behind the wall known as RSX makes it a thousand times worse. That's to say nothing of the fact that a GPU is a very bandwidth-hungry device, so it's going to be monopolizing that bus pretty much all the time, which makes matters worse for the CPU, which makes it a trillion times worse... CPUs are far more sensitive to latency than GPUs, since there's no guarantee of work filling in those latencies, and even if there were, there are always dependencies between contexts, which, again, isn't the case with GPUs. Similarly, because all those SPEs are doing work that affects other work, Cell ends up all the more dependent on getting loads of small chunks of data "right now." So while there are disadvantages, split memory pools work out as the lesser of a thousand evils.
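To put a rough number on the "filling in those latencies" point: the data a chip needs to keep in flight to hide memory latency scales with bandwidth x latency. A sketch with assumed, illustrative figures:

#include <stdio.h>

int main(void) {
    /* Assumed figures for illustration only -- not PS3/360 specs. */
    double bandwidth_gb_s = 22.4;   /* GDDR3 bandwidth; 1 GB/s == 1 byte/ns    */
    double latency_ns     = 400.0;  /* assumed round-trip latency for a CPU
                                       read that goes through the GPU          */

    double bytes_in_flight = bandwidth_gb_s * latency_ns;

    printf("Data that must be in flight to hide the latency: ~%.0f bytes\n",
           bytes_in_flight);                                   /* ~8960        */
    printf("That's ~%.0f 128-byte cache lines outstanding.\n",
           bytes_in_flight / 128.0);                           /* ~70          */
    /* A GPU with thousands of pixels in flight can keep that many requests
       going; a CPU stalled on a dependent load mostly cannot.                 */
    return 0;
}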

That said, I wouldn't have minded having 512 MB just in the RSX pool, and even more so I would have liked it on the CPU side. Software can never have too much RAM. There's no such thing.
 
2 -- No, never. Latency is the bane of all performance; GDDR-3 is high latency, and putting it behind the wall known as RSX makes it a thousand times worse. That's to say nothing of the fact that a GPU is a very bandwidth-hungry device, so it's going to be monopolizing that bus pretty much all the time, which makes matters worse for the CPU, which makes it a trillion times worse... CPUs are far more sensitive to latency than GPUs, since there's no guarantee of work filling in those latencies, and even if there were, there are always dependencies between contexts, which, again, isn't the case with GPUs. Similarly, because all those SPEs are doing work that affects other work, Cell ends up all the more dependent on getting loads of small chunks of data "right now." So while there are disadvantages, split memory pools work out as the lesser of a thousand evils.

This is not flame bait:

Isn't that how the X360 has it though? A GDDR3 RAM pool shared between CPU and GPU? I wasn't aware that there were such negative implications...
 
Because pixel data is the most bandwidth-intensive part of a GPU's workload (texturing comes second), and virtually all pixel operations are pushed off to the eDRAM on the 360. It's basically the eDRAM that makes it feasible to use GDDR3 as system RAM.
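A back-of-the-envelope sketch of why pixel traffic dominates (all figures assumed for illustration):

#include <stdio.h>

int main(void) {
    /* All figures below are assumptions for illustration, not measurements. */
    double pixels    = 1280.0 * 720.0;
    double samples   = 4.0;          /* 4xAA                                  */
    double bytes     = 4.0 + 4.0;    /* 32-bit colour + 32-bit Z per sample   */
    double overdraw  = 3.0;          /* assumed average overdraw              */
    double rw_factor = 2.0;          /* read (Z test / blend) + write         */
    double fps       = 60.0;

    double gb_s = pixels * samples * bytes * overdraw * rw_factor * fps / 1e9;
    printf("Rough pixel traffic: ~%.1f GB/s\n", gb_s);   /* ~10.6 GB/s */
    /* That's a large slice of a GDDR3 bus in the ~22 GB/s range -- the
       traffic the 360 keeps on its eDRAM instead of the shared GDDR3 pool. */
    return 0;
}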
 
Because pixel data is the most bandwidth-intensive part of a GPU's workload (texturing comes second), and virtually all pixel operations are pushed off to the eDRAM on the 360. It's basically the eDRAM that makes it feasible to use GDDR3 as system RAM.

Point taken about the texturing vs. pixel data bandwidth. However, is it not still true that GDDR3 is high latency (latency 1), and the Xbox 360's CPU needs to go through the GPU to access it (latency 2), which itself still uses GDDR3 for at least texturing (latency 3)?

EDIT: Furthering the point, would not rendering at 1080p (theoretically) demand more texture bandwidth, thus more GPU<->GDDR3 time, and incur more latency again compared to 720p? What implications would there be, if any? I would think this at least makes 1080p more impractical on the Xbox 360 than on the PS3 in at least some, if not most, situations, because CPU processing would be hindered (this is of course assuming that the latency is as big a problem as I'm thinking)...
 
Point taken about the texturing vs. pixel data bandwidth. However, is it not still true that GDDR3 is high latency (latency 1), and the Xbox 360's CPU needs to go through the GPU to access it (latency 2), which itself still uses GDDR3 for at least texturing (latency 3)?
Latency is relative - GDDR3 is higher latency than, say, standard DDR, but then GDDR3 is also running many more clock cycles. So, yes, GDDR3 can waste more clocks accessing data, but it also has more clocks to burn in the first place. The setup for the 360 is little different from an Intel CPU setup, other than that the "integrated graphics" is a very high-performance graphics processor and it's not using the central pool for pixel data.
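A quick sketch of the "more clocks to burn" point, using made-up but representative numbers:

#include <stdio.h>

int main(void) {
    /* Made-up but representative CAS latencies and clocks -- not vendor specs. */
    double ddr_clock_mhz   = 200.0, ddr_cas_cycles   = 3.0;
    double gddr3_clock_mhz = 700.0, gddr3_cas_cycles = 10.0;

    printf("DDR:   %4.1f cycles -> %4.1f ns\n",
           ddr_cas_cycles,   ddr_cas_cycles   / ddr_clock_mhz   * 1000.0);
    printf("GDDR3: %4.1f cycles -> %4.1f ns\n",
           gddr3_cas_cycles, gddr3_cas_cycles / gddr3_clock_mhz * 1000.0);
    /* More cycles of latency, but each cycle is shorter, so the gap in
       wall-clock time is smaller than the cycle counts alone suggest.        */
    return 0;
}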

EDIT: Furthering the point, would not rendering at 1080p (theoretically) demand more texture bandwidth, thus more GPU<->GDDR3 time, and incur more latency again compared to 720p? What implications would there be, if any? I would think this at least makes 1080p more impractical on the Xbox 360 than on the PS3 in at least some, if not most, situations, because CPU processing would be hindered (this is of course assuming that the latency is as big a problem as I'm thinking)...
Texturing isn't really that relevant - texture data is fairly invariant to render resolution (unless the artist includes higher-res textures for higher resolutions), as the source data is the same. All that changes is that more pixels need to be covered, but your texture cache coherency increases at higher resolutions, so you are just reusing that source data from the texture cache more, rather than going back to main memory to get it.
 
Point taken about the texturing vs. pixel data bandwidth. However, is it not still true that GDDR3 is high latency (latency 1), and the Xbox 360's CPU needs to go through the GPU to access it (latency 2), which itself still uses GDDR3 for at least texturing (latency 3)?
Sure, and sure enough, the CPU isn't super-happy about it either, but XeCPU is not necessarily as sensitive as Cell for various reasons, not the least of which have to do with the fact that there are fewer cores and they all have access to cache (whereas SPEs can only access their local store pools and main memory). Yes, you can have theoretically many many thread contexts, but chances are good that no more than 3 will be all too demanding all the time, and the rest will just be doing small jobs or waiting for other threads.

EDIT: Furthering the point, would not rendering at 1080p (theoretically) demand more texture bandwidth, thus more GPU<->GDDR3 time, and incur more latency again compared to 720p? What implications would there be, if any? I would think this at least makes 1080p more impractical on the Xbox 360 than on the PS3 in at least some, if not most, situations, because CPU processing would be hindered (this is of course assuming that the latency is as big a problem as I'm thinking)...
Maybe, maybe not. In general, the number of textures isn't really an issue as you go up in resolution, it'll be the resolution of those textures. And it's often NOT the case that resolution of textures is right on the threshold of "what looks good at screen resolution x." It's usually either much too low-res or comfortably more than enough. Little will change image-quality-wise at higher screen res.

As far as having more pixels to fill, and therefore more texture reads, the thing is that the geometry and the textures are still covering the same area of screen space for the same frame at any resolution (difference is how many pixels are in that area). So if 1 texture block of 2x2 texels is necessarily read to fill, say, 5 pixels at 720p, then that same block will be read to fill 10 or 11 pixels at 1080p. If that block is still in the texture cache when you get around to various pixels among that 10 or 11 pixels, you haven't issued more reads from RAM than you would have otherwise. Now if you've increased the texture resolution for 1080p, that's a different story.
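A quick sketch of that scaling, reusing the example figures from above (illustrative only):

#include <stdio.h>

int main(void) {
    /* Uses the example figures from the post above (5 pixels per 2x2 texel
       block at 720p); the numbers are illustrative, not measured.           */
    double scale        = (1920.0 * 1080.0) / (1280.0 * 720.0);   /* 2.25    */
    double pixels_720p  = 5.0;
    double pixels_1080p = pixels_720p * scale;                    /* ~11.25  */

    printf("Pixel count scales by %.2fx from 720p to 1080p\n", scale);
    printf("The same texel block now covers ~%.0f pixels\n", pixels_1080p);
    /* The block is fetched from RAM once either way; the extra pixels are
       served from the texture cache, so texture memory traffic is ~flat.    */
    return 0;
}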
 
Thanks for the replies :)

My 1080p extrapolation was indeed based on my amateur assumption that texture resolution scales with render resolution. I see now that it was wrong :)
 
Well, my arguments don't have enough substance in them to sustain a long-lasting discussion,
but I'm looking for opinions so:

1. Would it have been viable for Sony to have Nvidia design a 256-bit bus to the GDDR3 while keeping that 128-bit to the XDR? How expensive would it have been?
2. In your opinion, would it have been wiser to have a 512 MB pool of GDDR3 RAM for both the CELL and RSX, with the inclusion of a 256-bit bus?

Thanks in advance, ;)
1.) Would have been feasible... it wouldn't have added much to what they're selling the PS3 for now, but it would've hindered them when it comes to future price reductions from simplifying the components/design.

2.) AFAIK, XDR was used because of its low latency, which the Cell needs to operate more efficiently.
 
1.) Would have been feasible... it wouldn't have added much to what they're selling the PS3 for now, but it would've hindered them when it comes to future price reductions from simplifying the components/design.
One possibility would have been to use a 256-bit bus on 1GHz GDDR3 now, and a 128-bit bus on 2GHz+ GDDR4 tomorrow. This would most likely have been a significant extra cost, and would have required NVIDIA to redesign the RSX memory controller for GDDR4 eventually... It would also have made Sony reliant on GDDR4 ramping speed and prices for future cost reductions.
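A rough check of the bandwidth math behind that trade-off (reading the GHz figures as effective per-pin data rates, which is an assumption about what's meant):

#include <stdio.h>

/* Peak bandwidth = bus width (bits) * per-pin data rate (Gbit/s) / 8.
   Assumes the "1GHz"/"2GHz" figures above mean effective per-pin data
   rates, which is an interpretation, not a given.                      */
static double peak_gb_s(int bus_bits, double gbit_per_pin) {
    return bus_bits * gbit_per_pin / 8.0;
}

int main(void) {
    printf("256-bit GDDR3 @ 1 Gbit/s per pin: %.0f GB/s\n", peak_gb_s(256, 1.0));
    printf("128-bit GDDR4 @ 2 Gbit/s per pin: %.0f GB/s\n", peak_gb_s(128, 2.0));
    /* Same peak number either way: the trade-off is more pins and DRAM
       devices now vs. faster (and initially pricier) DRAM later.        */
    return 0;
}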


Uttar
 
Lacking both a 256-bit bus and eDRAM was a huge, huge mistake IMO. At least one of those two options should've been used.

Oh well, at least there's going to be a PS4. Developers are already talking about it (the Evolution guys).
 
Lacking both a 256-bit bus and eDRAM was a huge, huge mistake IMO. At least one of those two options should've been used.

Oh well, at least there's going to be a PS4. Developers are already talking about it (the Evolution guys).
Except bandwidth hasn't yet proved to be a problem on PS3; I haven't heard a single developer finding it a serious bottleneck.
In the rare cases where you want to consume more bandwidth than GDDR can supply, you put some textures in XDR and the problem generally goes away. There are a few genuine cases where more would have been nice (double-pumped Z), but that's about it, I think.
 
Is that because devs are avoiding BW hogs though? e.g. AA at 1080p. Are devs finding there's enough BW to manage AA at higher resolutions, or has that been sacrificed? Or even, is the rest of the system as much of a bottleneck for rendering as AA, so it's not BW that's the limiting factor?

For some BW consumers, the problem may well be alleviated with Cell managing those tasks on XDR, which was always on the cards. I'd have thought there'd still be sacrifices made though. Looking at PS3, the weakest link seemed to be the BW. Do you feel that actually the system is very well balanced in hardware with no particular shortfalls? Hmmm, that may not be a politic question for you to answer ;)
 
Cool to hear!!

Except bandwidth hasn't yet proved to be a problem on PS3; I haven't heard a single developer finding it a serious bottleneck.
In the rare cases where you want to consume more bandwidth than GDDR can supply, you put some textures in XDR and the problem generally goes away. There are a few genuine cases where more would have been nice (double-pumped Z), but that's about it, I think.

100% agree!!


The only things that I've ever heard to be problematic about the PS3 are that the dev tools are still improving, that the Cell will take at least 2-3 years to fully utilize its strengths, and that the online service may not be as great compared to the Xbox 360's.

Other than that… the PS3 seems far more balanced compared to the PS2 at this stage.
 
Nerve-Damage said:
The only things that I've ever heard to be problematic about the PS3 are that the dev tools are still improving, that the Cell will take at least 2-3 years to fully utilize its strengths, and that the online service may not be as great compared to the Xbox 360's.

Other than that…

Excuse me, are you being sarcastic?

You remind me of

Reg said:
All right, but apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, a fresh water system, and public health, what have the Romans ever done for us?
 
Excuse me, are you being sarcastic?

No....

I agree 100% with DeanoC. :cool:

The only bad press that I've ever heard from PS3 devs concerned the length of time it takes to fully utilize the Cell, the dev tools still maturing, and the online service not being quite up to par with the Xbox 360's.

The things listed above aren't that bad compared to the supposed and unsupported claims that the PS3 has bandwidth issues or lacks memory.
 
I don't see how you managed to interpret DeanoC's statement as "there is no need for more bandwidth".

Developers aren't complaining about the bandwidth because they know the limits of the system, and therefore they constrain their games within those limits. No developer is going to aim for 8xAA and FP16 HDR @ 1080p on a PS3, because he knows how much bandwidth the system would need, and therefore he will try to keep himself within those limits.

Developers will naturally avoid bandwidth hogs; that doesn't mean the system wouldn't benefit from more.

No developer will complain about only being given 512 MB of RAM, because there is no point in complaining; Sony will not magically make 1 GB appear in every PS3 out there. There is no point for a developer to complain about memory size or bandwidth, because it will not change anything.

Would the system benefit from more bandwidth/memory or anything else that's better? Of course; the PS3 (like any other console) isn't perfect, and there is always room for improvement.
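For a sense of scale on that 8xAA + FP16 HDR @ 1080p example, a rough sketch of just the render target footprint (assuming a plain multisampled FP16 RGBA target with 32-bit Z and no compression):

#include <stdio.h>

int main(void) {
    /* Hypothetical 1080p, 8xAA, FP16 HDR render target -- simple arithmetic
       on the example above, assuming no compression.                        */
    double pixels           = 1920.0 * 1080.0;
    double samples          = 8.0;
    double bytes_per_sample = 8.0 + 4.0;   /* FP16 RGBA colour + 32-bit Z     */

    double mbytes = pixels * samples * bytes_per_sample / (1024.0 * 1024.0);
    printf("Render target alone: ~%.0f MB\n", mbytes);   /* ~190 MB */
    /* That's most of a 256 MB GDDR3 pool before any textures or geometry,
       and every byte of it is bandwidth that has to be paid for each frame. */
    return 0;
}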
 
I don't see how you managed to interpret DeanoC's statement as "there is no need for more bandwidth".

Developers aren't complaining about the bandwidth because they know the limits of the system, and therefore they constrain their games within those limits. No developer is going to aim for 8xAA and FP16 HDR @ 1080p on a PS3, because he knows how much bandwidth the system would need, and therefore he will try to keep himself within those limits.

Developers will naturally avoid bandwidth hogs; that doesn't mean the system wouldn't benefit from more.

No developer will complain about only being given 512 MB of RAM, because there is no point in complaining; Sony will not magically make 1 GB appear in every PS3 out there. There is no point for a developer to complain about memory size or bandwidth, because it will not change anything.

Would the system benefit from more bandwidth/memory or anything else that's better? Of course; the PS3 (like any other console) isn't perfect, and there is always room for improvement.

Actually they did.
 