RSX evolution

Being a piece of crap how? And why would the clock frequency need to be raised?

I have the old E3 05 press conference where it states the 550MHz speed; however, according to the website linked in previous posts, the clock speed is 500MHz with the pixel shaders clocked at 550MHz. So it seems to hardly make any difference, especially since the chip is already customized for its job as a console GPU that is not supposed to overheat, destroy the motherboard, or consume too many watts.

I was implying that with a die-shrink the RSX could still use a relatively high amount of power. Also, about the clock speed: as you said, the core was originally meant to run at 550MHz but was reduced to 500MHz for whatever reason (temps?). If it was bumped up to 550MHz you never know if you'd see a few more FPS here and there. Never know without trying.
 
If it was bumped up to 550MHz you never know if you'd see a few more FPS here and there. Never know without trying.

Sounds like a dumb idea, especially if a game ended up needing the extra frames you'd hypothetically get from the extra 50MHz. What about all those with only a 500MHz-clocked GPU?
 
Honest truth is, that either happens sometimes or not at all. Sony does not enforce it either way; take the PSP, for example. Some games utilize the extra RAM in the newer editions, and those games then run more or less noticeably "worse" on the older models. Not to say they run unacceptably on any one model, but there will be a difference.

Doesn't sound "dumb" to me - it's not like anyone is losing anything, only newer models/purchasers gain something.
 
I was implying that with a die-shrink the RSX could still use a relatively high amount of power. Also, about the clock speed: as you said, the core was originally meant to run at 550MHz but was reduced to 500MHz for whatever reason (temps?). If it was bumped up to 550MHz you never know if you'd see a few more FPS here and there. Never know without trying.

But the PS3 is not a PC where such overclocking matters; on consoles, game devs just write the code for the standard hardware.

Temps are also not really the indicator of why it may have been changed; it was most likely yields and power consumption, as well as not being able to provide a higher stepping/core revision, probably due to not being able to simply throw out undesirable GPU wafers.

The other thing to consider about the stepping/revision subject is that they may have run into leakage issues, currently the number one reason why CPUs and GPUs on PC have been so slow to increase performance, aside from lack of competition.

Whatever the case, the current limitation is the game developers themselves coming to grips with the Cell BE + RSX way of programming and managing SPEs, so any framerate slowdowns are really just things that game devs will learn how to manage. I mean, look at the Xbox 360: it's nearly in its third year and there are still new games that have frame rate dips here and there, and most X360 games end up ported to PC anyway.
 
If the core speed were ever increased, it would be as a side-effect of using a much smaller process node. Maintaining the original clock speed gets tricky as you continue reducing the process node for a fixed design due to various transistor parameters, especially if you want to also reduce power consumption. There's a balance to be made between clock cycle, operating voltage, and threshold voltage ultimately figuring into static and dynamic power.
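As a rough sketch of that balance (all numbers below are made up for illustration, not actual RSX figures), the usual first-order model splits total power into a switching term and a leakage term:

```python
# First-order CMOS power model (illustrative values only, not RSX data):
#   dynamic power  P_dyn  ~= alpha * C * Vdd^2 * f
#   static power   P_stat ~= Vdd * I_leak
# Dropping Vdd at a shrink cuts switching power quadratically, but what
# happens to the leakage term depends on how the threshold voltage is handled.

def dynamic_power(alpha, c_switched, vdd, f_clk):
    """Switching power: activity factor * switched capacitance * Vdd^2 * clock."""
    return alpha * c_switched * vdd ** 2 * f_clk

def static_power(vdd, i_leak):
    """Leakage power: supply voltage * total leakage current."""
    return vdd * i_leak

# Hypothetical 90nm-class part vs. a shrink run at lower voltage,
# same 500MHz clock; capacitance and leakage values are assumptions.
p_old    = dynamic_power(0.15, 500e-9, 1.2, 500e6) + static_power(1.2, 15.0)
p_shrink = dynamic_power(0.15, 350e-9, 1.0, 500e6) + static_power(1.0, 25.0)
print(f"old process: {p_old:.0f} W   shrink: {p_shrink:.0f} W")
```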

The reduction in operating voltage and thus active power consumption is a necessity born out of how transistors are made and function; Vthreshold is the culprit. It would not be good if a particular circuit of transistors were always turned on or off because the threshold was always exceeded. That's not to say you couldn't intentionally design for low power, but the threshold voltage is usually reduced with a smaller transistor, leading to an exponential rise in static/idle power consumption; this is rather counterproductive for thermal density. Some solutions to the Vth issue may be different gate materials or different dopant concentrations or even different transistor design altogether, all of which open even more cans of worms.
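To put the exponential part in concrete terms, here is a small sketch of the textbook subthreshold-leakage relation (the swing value is an assumption, not a measured RSX/G71 number): leakage grows roughly a decade for every subthreshold swing's worth of Vth you give up.

```python
# Subthreshold leakage scales roughly as 10^(-Vth / S), where S is the
# subthreshold swing (assumed ~90 mV/decade here). Illustrative only.

def leakage_scale(vth_volts, swing_mv_per_decade=90.0):
    """Relative leakage current (arbitrary units) at a given threshold voltage."""
    return 10 ** (-(vth_volts * 1000.0) / swing_mv_per_decade)

baseline = leakage_scale(0.40)
for vth in (0.40, 0.35, 0.30, 0.25):
    print(f"Vth = {vth:.2f} V -> ~{leakage_scale(vth) / baseline:.1f}x the idle leakage")
```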

If the situation with leakage is dire (i.e. can't do anything about it), 1) don't bother or 2) increase clock frequency because the leakage current and parasitic capacitances cause an increased discharge rate at the output of the transistor, resulting in errors; as the voltage supply is decreased, the system will be even more sensitive to fluctuations. The higher frequency will help keep the output at what it should be.
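A crude way to picture the frequency argument (all component values below are assumed purely for illustration): treat the degraded output as a node discharging through an effective leakage path, and compare how much it sags within one clock period at 500MHz versus 550MHz.

```python
# A dynamic node charged to Vdd leaks away through leakage/parasitic paths
# roughly as V(t) = Vdd * exp(-t / (R*C)); a shorter clock period means less
# droop before the node is re-evaluated. R and C here are made-up values.

import math

VDD = 1.0          # normalized supply voltage
R_LEAK = 4e6       # effective leakage resistance (ohms), assumed
C_NODE = 5e-15     # node capacitance (farads), assumed

def droop_after(period_s):
    """Fraction of Vdd lost on the node over one clock period."""
    return VDD * (1.0 - math.exp(-period_s / (R_LEAK * C_NODE)))

for f in (500e6, 550e6):
    print(f"{f/1e6:.0f} MHz: droop = {droop_after(1.0 / f):.3f} V per cycle")
```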


Temps are also not really the indicator of why it may have been changed; it was most likely yields and power consumption, as well as not being able to provide a higher stepping/core revision, probably due to not being able to simply throw out undesirable GPU wafers.

Temperature/heat dissipation is related to yields. Recall, not all chips are equal, hence the binning often seen on the PC side. But yes, considering the launch of G71 in March 2006, it's not too surprising; the highest grade @ stock 650MHz required a hell of a heatsink and fan combination in a much more open chassis. Even the GT/GS @ 450 had rather hot single-slot solutions.
 
So, basically you are saying that it could be better, or even become necessary, to increase the clock frequency of the RSX when shrinking it further?
 
So, basically you are saying that it could be better, or even become necessary, to increase the clock frequency of the RSX when shrinking it further?

Hardly. The reason for doing so is to get past the problems caused by output degradation (be it leakage or parasitic capacitances). It's better to mitigate that instead due to the exponential rise in thermal density. I'm only attempting to explain the reasoning behind such a decision if it were ever made. It's the last resort if leakage cannot be further reduced or if the design teams determine it's not worth the effort. There are other things to consider such as increased supply requirements (e.g. overclocking + increasing voltage, increased power consumption), which go against one of the reasons for even going to a smaller process. Higher frequency does also have some noise implications, although the relatively low frequency of these console GPUs shouldn't be an issue.
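For the supply-requirement point, the same first-order relation (dynamic power roughly proportional to f times V squared) gives a quick feel for why overvolting an overclock is costly; the percentages below are generic examples, not RSX measurements.

```python
# Dynamic power scales roughly with clock * Vdd^2, so a clock bump that also
# needs extra voltage costs disproportionately more power (generic example).

def relative_dynamic_power(f_scale, v_scale):
    """Dynamic power relative to stock for scaled clock and supply voltage."""
    return f_scale * v_scale ** 2

print(f"+10% clock, stock voltage: x{relative_dynamic_power(1.10, 1.00):.2f}")
print(f"+10% clock, +5% voltage:   x{relative_dynamic_power(1.10, 1.05):.2f}")
```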

Of course, doing both would be nice, but deviating from the base spec is simply a no-no in the home console arena.

Recall that even the later PS2's CPU gained a rather minor bump in frequency; the chips are already so bloody small that the gains of working out shrink issues versus a 5MHz bump should be an easy decision to make.
 
According to this post/thread by Forum Member one:

SCE confirm 80GB PS3 in Japan (Oct. 30) has 65nm RSX

All 80GB PS3 that are going to be out in Japan on Oct 30th and after have 65nm RSX.

Do you think the new European 160GB Bundle, which will be available at pretty much the same time (31-Oct-2008), will have the new 65nm Nvidia RSX GPU, too?

And what do you guys think about the chances for the new 65nm Nvidia RSX GPU for the European Little Big Planet 80GB Bundle, which will be available just a few days later?

Furthermore: this news should pretty much mean that there haven't been any PS3 consoles with the new 65nm Nvidia RSX GPU on sale yet, shouldn't it?
 