GTX512/RSX analysis

You cannot compare power figures across architectures; they are architecture specific, just as clock speed is architecture specific.
 
Xenus said:
You cannot compare power figures across architectures; they are architecture specific, just as clock speed is architecture specific.

Right, no one disagrees with you. But it would be wrong to say that extrapolation in such an instance is useless when one of the architectures is a known derivative of the other.
 
I was speaking more to the comments about power figures on the A64 vs. P4, and to jvd's insistence that since the X1800 XT was power hungry at 90nm, the RSX would be power hungry at 90nm as well.
 
I agree with you, Xenus, but I guess my point is that voltage is rather misleading. It also tied in with jb's insinuation that because one design is power hungry, all others on that process must be as well.
 
For those arguing against Sony doing a spec boost with 'what do they get?', didn't they do exactly that with PS2? Why'd they boost it from the initial specs? Because they could, at minimal extra cost.
 
Each process has its own design rules, and the design rules depend not only on the process but on the foundry as well. Going from 110nm to 90nm is not a simple process shrink. On top of that, TSMC is not the one who will manufacture the RSX. So it is not meaningful to take the power measurements of G70 (at 130/110nm) and R520 (manufactured at TSMC) and extrapolate for RSX.

That being said, the 90nm process has a lot more problems and design issues than 130nm. The low-k dielectric becomes a must. The second major issue is the leakage current*, which is becoming worse with each process shrink. Most IC companies have actually come up with different IC design strategies (so-called design rules) to combat this issue. And yet, almost every 90nm design (except relatively slower low-power designs) still has an awful lot of heat problems.

I think the speed of RSX will more or less stay around 550MHz. There are more reliable tools to estimate things like power consumption nowadays than we had 5 years ago. Sony might simply have underestimated the power consumption 5 years ago, and then realized that they could bump up the speed of the chips without exceeding their power/heat budgets. However, I think they may have had an even clearer picture this time, and that is why they set it to 550MHz back at E3.

*Leakage current: the current passing between two nodes of a transistor should be zero when it is in the OFF state. However, although it is small, it is never zero, hence the name leakage current. Shrinking the transistor size actually makes this current increase (by an order of magnitude in some cases). Therefore, the overall power requirement of a chip goes up.
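
To put rough numbers on that footnote - every figure below is made up for illustration, nothing measured from RSX or G70 - the usual back-of-the-envelope model is total power = dynamic switching power + leakage power:

# Crude CMOS power model; all numbers are purely illustrative assumptions.
def chip_power(c_switched_f, v_dd, f_hz, i_leak_a):
    p_dynamic = c_switched_f * v_dd ** 2 * f_hz   # ~ switched capacitance * V^2 * clock
    p_leakage = v_dd * i_leak_a                   # off-state (leakage) current * supply voltage
    return p_dynamic, p_leakage

# Hypothetical older-node chip vs. a shrink where the leakage current has doubled:
print(chip_power(40e-9, 1.4, 430e6, 2.0))   # -> (~33.7 W dynamic, ~2.8 W leakage)
print(chip_power(28e-9, 1.2, 550e6, 4.0))   # -> (~22.2 W dynamic, ~4.8 W leakage)

The point being that the dynamic part shrinks nicely with capacitance and voltage, while the leakage part keeps creeping up - which is exactly why 90nm designs run into heat trouble even when the classical scaling looks fine.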
 
silhouette said:
Each process has its own design rules, and the design rules depend not only on the process but on the foundry as well. Going from 110nm to 90nm is not a simple process shrink. On top of that, TSMC is not the one who will manufacture the RSX. So it is not meaningful to take the power measurements of G70 (at 130/110nm) and R520 (manufactured at TSMC) and extrapolate for RSX.

That being said, the 90nm process has a lot more problems and design issues than 130nm. The low-k dielectric becomes a must. The second major issue is the leakage current*, which is becoming worse with each process shrink. Most IC companies have actually come up with different IC design strategies (so-called design rules) to combat this issue. And yet, almost every 90nm design (except relatively slower low-power designs) still has an awful lot of heat problems.

Leakage current is much better contained on SOI designs, however - which I guess we don't have confirmation of one way or the other here - but I recall related information regarding one of Sony's fab investments (Nagasaki, I believe) indicating both Cell and RSX would utilize the same low-k and SOI process. I don't think TSMC --> Sony should be too big of a deal; Sony's been on 90nm for well over a year now and their emphasis has always been on advanced techniques with a focus on lowering power demands. And the leakage current aside, it should still generally be the case that, everything else equal, power demands will drop on a lower node. Prescott stands out as a spectacular failure in this regard, but the architectural changes implemented, more than anything else, are to blame. The move to 65nm has shown some power/heat savings, so even with leakage taken into account, we're not in the power abyss just yet.

It's true that TSMC 110nm to Sony 90nm will not be some sort of exact guesstimation exercise, but I think it's a safe bet that the architecture on that process will place a higher emphasis on heat and power savings than would the chips coming out of TSMC, providing us at least with a ballpark range and the possibility to err on the side of caution. If indeed they are SOI, so much the more so. Otherwise, well, it is an assumption to say that Sony and TSMC will be able to produce chips of equivalent power consumption at 90nm with low-k, but one that is probably not *too* much of a stretch to make.
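
For what it's worth, here's the 'everything else equal' scaling argument in numbers - the scale factors and the 80 W starting point are my own assumptions, not anything Sony or NVIDIA have published:

# Idealized node-shrink scaling of dynamic power (P ~ C * V^2 * f);
# cap_scale, v_scale and the 80 W baseline are hypothetical.
def scaled_dynamic_power(p_old_w, cap_scale=0.8, v_scale=0.9, f_scale=1.0):
    return p_old_w * cap_scale * (v_scale ** 2) * f_scale

p_110nm = 80.0                                      # hypothetical 110nm chip power in watts
print(scaled_dynamic_power(p_110nm))                # same clock after the shrink: ~51.8 W
print(scaled_dynamic_power(p_110nm, f_scale=1.28))  # with a ~28% clock bump: ~66 W, still under 80 W

In other words, if the classical scaling holds even approximately (and leakage doesn't eat the whole gain), a modest clock bump on the smaller node can still land under the old power budget.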
 
Shifty Geezer said:
We can all see PS3 isn't a minimum spec, cheapest possible solution. I'm sure they could remove and downrate features from the current spec to save 20 bucks a unit, and make $2 billion extra over the console's life (assuming 100 million sales), but Sony haven't. The whole thing is a costs-to-benefits consideration, like any product, and the cheapest solution isn't always the one wanted. If Sony decide the improved performance is worth the extra cost per unit, for whatever reason, perhaps also factoring in long-term strategies, it might be something they consider doing.

That all sounds a little too altruistic to me.

Most of the things they've gone with have been to stay competitive. They include 512MB of RAM because to do otherwise would cost them market share - why not more? They went with the G70 derivative because to adopt a more non-conventional approach would've put them at a competitive disadvantage.

The only things Sony went "above and beyond" with also happen to be things that Sony has a vested interest in. CELL and BluRay are not in there to avoid the 'cheapest solution'; they're in there because Sony has a vested interest in these two technologies succeeding.

To me it's the equivalent of MS including a free copy of Office on the X360: did they really go above and beyond, or are they just trying to make more money for themselves down the road?

I'd be much more willing to accept this notion if Sony did something with non-proprietary hardware, like throw in 1GB of RAM, or beef up the RSX to a 256-bit bus or 512MB of memory. But to me it seems they've done the minimum they could get away with, while still pushing their BR and CELL technologies.
 
Scooby, I agree... Also, do not forget that Sony's main focus at E3 was not graphics at all. They knew that in terms of gfx capability and power, both RSX and Xenos are in the same ballpark. Therefore, whenever they give an interview, they always bring CELL and BR to the front. They showed more real-time demos for Cell (physics simulation, CELL-only gfx rendering, etc.), and they talk more about CELL. That's PS3's strength, so that's what they focus on. And CELL is one of the two components (the other one being the BR drive) that does not need any improvement at all at this point.
 
scooby_dooby said:
That all sounds a little too altruistic to me.

Most of the things they've gone with have been to stay competitive. They include 512MB of RAM because to do otherwise would cost them market share - why not more? They went with the G70 derivative because to adopt a more non-conventional approach would've put them at a competitive disadvantage.

Was the 512MB of RAM to stay competitive, though? I remember when the expectations for both Sony and MS were to have 256MB of RAM, and then the talk started about MS perhaps including up to 512MB. We get to E3, and lo and behold, in fact *both* systems have 512MB of RAM. With the artificial max on Cell of 256MB, people felt that 512 would be a dagger to Sony - yet there they were with a non-UMA architecture, totally in contradiction of expectations, and with 512MB of total system RAM. Now, no way was Sony's PS3 architecture dreamed up on the fly in response to the 360's 'sudden' spec increase. (Granted, that assumes the initial talk of 256MB of RAM in the 360 was true.)

To me this indicates not two companies that were trying to keep up with one another, but in fact two companies that each tried to double the other's estimated RAM and came out in the same place.

The only things Sony went "above and beyond" with also happen to be things that Sony has a vested interest in. CELL and BluRay are not in there to avoid the 'cheapest solution'; they're in there because Sony has a vested interest in these two technologies succeeding.

To me it's the equivalent of MS including a free copy of Office on the X360: did they really go above and beyond, or are they just trying to make more money for themselves down the road?

Well I don't disagree with you, but for reasons already enumerated, there really are some specs that might end up being fairly painless for Sony to increase in the end. And if that turns out to be the case, why wouldn't they do it? I would argue also that though Cell and blu-ray are both perhaps 'above and beyond' console-wise, they aren't necessarily above and beyond implementation-wise.

I'd be much more willing to accept this notion if Sony did something with non-proprietary hardware, like throw in 1GB of RAM, or beef up the RSX to a 256-bit bus or 512MB of memory. But to me it seems they've done the minimum they could get away with, while still pushing their BR and CELL technologies.

Now that would be real altruism! ;)
 
xbdestroya said:
I thought all the indications were that RSX would be both low-k and SOI, built on the same process as Cell itself. Isn't that what Sony's new Nagasaki lines are set up to handle? If you have any definitive evidence of this though, do present it, because then I have to make a change or two ASAP.

No. FWIW, Nagasaki's Fab2 and OTSS already contain CMOS4 lines; the SOI development is directed at Cell.
 
I love all these people who told me graphics don't matter on B3D...

But when it comes to their baby PS3, IT MUST HAVE TEH MAXIMUM POWAH!!!

That should tell anyone who doubts where their real hearts lie on the importance of graphics...

One thing: if the bandwidth concerns of PS3 are real, increasing the power of RSX may very well be like spinning their wheels. Another reason they may not do it.

I would still like a detailed analysis of whether the graphics bandwidth can effectively be split between the PS3's XDR and GDDR. I'm not convinced it can. But that's another thread. If it can, then there's not much of a bandwidth concern. If it can't, then there's a big bandwidth concern.

Also, be aware the 512MB GTX is extremely rare and expensive ($699?), and doesn't seem very overclockable. In other words, it's really pushing it; it's not nearly the mass-producible chip that RSX needs to be at this point. It has a cooler that covers the entire card and looks like a giant chunk of lead, complete with heatpipes for good measure. So much of the benefit of 90nm will just go into making it mass-producible.
 
Bill said:
I would still like a detailed analysis of whether the graphics bandwidth can effectively be split between the PS3's XDR and GDDR. I'm not convinced it can. But that's another thread. If it can, then there's not much of a bandwidth concern. If it can't, then there's a big bandwidth concern.
What are you talking about? PS3 is not a UMA like N64/Xbox/Xbox 360, and PC games have used split memory for ages.
 
I'm talking about... split internal bandwidth.

All the bandwidth to the GPU is like 22 GB/s or whatever.

Can it go into XDR for more? If so, don't you have to split the rendering engine or something? You have two physically separate pools of 256MB RAM. Can the latency difference be overcome seamlessly? Is that even possible?

I'm not talking about CPU->GPU bandwidth. That's like what would go over PCI Express or AGP in a PC. It would be the 22 GB/s from Cell. But can the GPU use that like internal bandwidth?

I guess to my mind, PS3 is like a PC with 256MB of system RAM and a graphics card with 256MB of video RAM.

Here is a crude diagram:

PC 7800 card >> 38 GB/s

RSX >> 22 GB/s >> 256MB GDDR
  + >> 22 GB/s >> 256MB XDR

Total RSX has 44 GB/s, but split into TWO smaller pools of RAM and introducing TWO differing latency levels, not one.
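
For what it's worth, the ballpark figures in that diagram fall straight out of bus width times data rate. A quick sketch - the clocks and widths here are the commonly quoted ones for G70 boards and the E3 PS3 spec sheet, so treat them as assumptions rather than confirmed specs:

# Peak bandwidth = (bus width in bytes) * (effective data rate in transfers/s).
def peak_bw_gb_s(bus_width_bits, data_rate_mt_s):
    return bus_width_bits / 8 * data_rate_mt_s * 1e6 / 1e9

print(peak_bw_gb_s(256, 1200))  # 7800 GTX: 256-bit GDDR3 @ 600MHz (1.2 GT/s) -> 38.4 GB/s
print(peak_bw_gb_s(128, 1400))  # RSX GDDR3 pool: 128-bit @ 700MHz (1.4 GT/s) -> 22.4 GB/s
print(peak_bw_gb_s(64, 3200))   # XDR pool: 64-bit @ 3.2 GT/s                 -> 25.6 GB/s

The catch, as Bill says, is that RSX only reaches the XDR pool through Cell's FlexIO link, which is narrower still, so the two ~22 GB/s figures don't simply add up into one 44 GB/s pool the way a single wide bus would.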
 
Vince said:
No. FWIW, Nagasaki's Fab2 and OTSS already contain CMOS4 lines; the SOI development is directed at Cell.

Vince, you have a certain manner about you - did you know that? ;)

Anyway I'm not arguing for/against SOI for Cell - that's widely known - I was simply stating that I recalled a statement made somewhere that both RSX and Cell would share the same process. Clearly you should know I'm well aware of Nagasaki and OTSS' existing CMOS4 lines.

But I guess you're right and I remembered this factoid incorrectly. With the Nagasaki SOI-line upgrades and RSX's slated production at Nagasaki I guess I assumed it would share the process (in conjunction with seeming article synthesis in my mind), but if it's to be fabbed at OTSS also, then there's really no way it could be designed as an SOI part.

That's too bad because there go some of the power/heat savings RSX was enjoying in my mind (and now I'm going to have to edit), but the basic concept of max consistent MHz on a certain voltage remains the same.
 
Fafalada said:
Nah, I bet they increased to 512MB after the OS writers told them they needed to reserve 384MB for kernel, so 256MB couldn't even run.

LOL, now you're just trying to scare us! :p
 
Phil said:
We don't know the heat and power RSX generates - we also don't know what constraints the RSX will be under inside the PS3 case. If the case and the airflow/cooling measures allow for a higher heat/power consumption than the initial 550MHz RSX would have required, I see no reason why this would cost Sony more. As a matter of fact, it makes sense to get the most out of your design, and if that means higher clock rates without changing anything significant, I see every reason they would.


:!:
Great thread btw, Xbd. A shame people like jvd are well on the way to getting it closed and reducing the chance of civil/constructive discussion. :???:

So I guess your personal attack on me is allowed to stay.

Great modding on this site.
 
Bill said:
One thing, if the bandwidth concerns of PS3 are real, increasing the power of RSX may very well be like spinning their wheels. Another reason they may not do it.
Higher RSX clock with the same bandwidth would mean one can run more complex shaders (more arithmetic instructions) on every given pixel; this can't be bad :)
Obviously having more mem bandwidth would be even better!
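
A quick way to see the point - the pixel throughput number below is a made-up, bandwidth-limited figure, just to show the ratio:

# If memory bandwidth caps how many pixels you can write per second, a higher
# core clock only buys more ALU cycles to spend on each of those pixels.
def alu_cycles_per_pixel(core_clock_hz, pixels_per_s):
    return core_clock_hz / pixels_per_s

pixels_per_s = 250e6                              # hypothetical bandwidth-limited pixel rate
print(alu_cycles_per_pixel(550e6, pixels_per_s))  # 550MHz -> 2.2 clock cycles available per pixel
print(alu_cycles_per_pixel(650e6, pixels_per_s))  # 650MHz -> 2.6, i.e. ~18% more shader math per pixel

The absolute numbers are illustrative; the ratio is what matters - clock goes up, bandwidth stays put, and the extra cycles all land on arithmetic per pixel.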
 