RSX Secrets

But you can't expect that adding more cache will increase performance linearly. I think there was some talk about the extra cache here at B3D some time ago; maybe the search function will add some more depth to it. Try "96kb" without the quotes, so the query meets the four-character minimum.

And I highly doubt that the performance increase G71 shows over G70 is due to more cache alone; if anything, that would be a small part of it.
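
To see why a bigger texture cache tends to buy diminishing rather than linear returns, here is a minimal C sketch using the standard average-access-time model. The hit/miss latencies, the 10% baseline miss rate and the square-root rule of thumb for miss-rate scaling are illustrative assumptions, not measured G70/G71 numbers; only the 48KB and 64KB per-quad figures come from the discussion below.

#include <math.h>
#include <stdio.h>

/* Average access time = hit_time + miss_rate * miss_penalty.
 * A common rule of thumb is that miss rate scales roughly with
 * 1/sqrt(cache size), so doubling the cache does not halve misses. */
static double avg_fetch_cycles(double cache_kb, double base_kb,
                               double base_miss_rate,
                               double hit_cycles, double miss_penalty)
{
    double miss_rate = base_miss_rate * sqrt(base_kb / cache_kb);
    return hit_cycles + miss_rate * miss_penalty;
}

int main(void)
{
    /* Illustrative numbers only: 10% miss rate at 48 KB, 1-cycle hit,
     * 200-cycle penalty for going out to video memory. */
    double t48 = avg_fetch_cycles(48.0, 48.0, 0.10, 1.0, 200.0);
    double t64 = avg_fetch_cycles(64.0, 48.0, 0.10, 1.0, 200.0);
    printf("48 KB: %.1f cycles/fetch, 64 KB: %.1f cycles/fetch (%.0f%% fewer cycles)\n",
           t48, t64, 100.0 * (1.0 - t64 / t48));
    return 0;
}

With these assumptions, a third more cache shaves only a little over a tenth off the average fetch cost, which is why a large performance jump is hard to pin on cache size alone.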
 
Maybe yes, maybe not. If we talk about overall pixel shader and texel rate performance, we see some further performance increase, as in these benchmarks:

more here: http://www.digit-life.com/articles2/video/g71-part2.html#p5

http://www.hexus.net/content/item.php?item=4872&page=9


Is the extra cache used for this? Or does RSX have some other cache exclusively to hide the extra latency of accessing XDR RAM (for textures etc.)?

When I talk about 10% more, I'm comparing based on the G70 vs. G71 performance gain in the links above, and maybe RSX had a similar goal, even though a console is a closed box.

All those tests are G71 with a 100MHz clock speed advantage though. Take that into account when looking at the game benchmarks and the difference is negligible. I'll admit the synthetic tests do show up some advantages of G71 over G70 that go beyond the clock speed difference, but as Nebula said, it's unlikely to be down to cache alone, and if it were, and adding yet more cache would improve performance another 10%, why wouldn't NV have done so? Certainly the transistor hit would have been easily achievable.
 
All those tests are G71 with a 100MHz clock speed advantage though.

No, those links compare at the same clock. And look at this benchmark comparing the GeForce 7800 GTX 512 (550MHz core clock) and the GeForce 7900 GT (450MHz core):

bench-texefficiency.png


Take that into account when looking at the game benchmarks and the difference is negligible. I'll admit the synthetic tests do show up some advantages of G71 over G70 that go beyond the clock speed difference, but as Nebula said, it's unlikely to be down to cache alone, and if it were, and adding yet more cache would improve performance another 10%, why wouldn't NV have done so? Certainly the transistor hit would have been easily achievable.

Yes, agreed. I see some benchmarks with G70 ahead of G71, but in sustained pixel shader and texel rate it is never ahead. Look again: http://www.hexus.net/content/item.php?item=4872&page=9

And here:

http://www.hexus.net/content/item.php?item=4872&page=13

" Performance Summary
Compared to G70, G71 offers improvements in performance in a number of areas at the same clock rates. FP16 blend performance is up usefully, as is texture rate, and texture latency is down. Compared to NV43, G73 is an even bigger step forward. Bundled in with the improvements G71 sports, G73 gets beefed up FP ALUs, tweaked VS hardware, and the chip is usefully wider from VS to ROP."

" On the 7900 side, 7900 GT(core 450MHz/42.24GB/sec bandwidth) has the potential to be the equal of the hilariously expensive and hard to obtain GeForce 7800 GTX 512(core 550MHz/54.4GB/sec bandwidth), and it largely will match X1800 XT, while 7900 GTX almost has the measure of ATI's Radeon X1900 XT and XTX."

Maybe G71 has some performance increase over G70 in the ROPs too, but how do you explain the difference in texel rate and pixel shading between G71 and G70?
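
As a back-of-the-envelope check on that question, the theoretical peak texel rates of the two cards in the benchmark above are easy to work out; here is a tiny C sketch that just multiplies texture units by core clock. The 24-TMU count for both G70 and G71 is their published configuration; the outputs are theoretical peaks, not measured throughput.

#include <stdio.h>

/* Theoretical peak texel rate = texture units x core clock.
 * Both G70 (7800 GTX 512) and G71 (7900 GT) carry 24 TMUs. */
static double peak_gtexels(int tmus, double core_mhz)
{
    return tmus * core_mhz / 1000.0; /* Gtexels/s */
}

int main(void)
{
    printf("7800 GTX 512 (550MHz): %.1f Gtexels/s peak\n", peak_gtexels(24, 550.0));
    printf("7900 GT      (450MHz): %.1f Gtexels/s peak\n", peak_gtexels(24, 450.0));
    /* If the 7900 GT matches the GTX 512 in sustained texturing despite
     * a ~18% lower theoretical peak, the gap has to be closed by
     * per-clock efficiency (cache, latency hiding), not by clocks. */
    return 0;
}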
 
Maybe G71 has some performance increase over G70 in the ROPs too, but how do you explain the difference in texel rate and pixel shading between G71 and G70?

The superior FP16 blending is clearly down to improvements in the ROPs; the article even says so. The pixel shader boost from the link in your previous post is likely for the most part attributable to the clock speed increase. There do indeed appear to be some small texturing improvements as well, but those can't necessarily be attributed purely to the size of the texture cache.
 
The superior FP16 blending is clearly down to improvements in the ROPs; the article even says so. The pixel shader boost from the link in your previous post is likely for the most part attributable to the clock speed increase. There do indeed appear to be some small texturing improvements as well, but those can't necessarily be attributed purely to the size of the texture cache.


Yeah, agreed, and in my opinion some benchmarks fluctuate too much. Maybe this extra per-quad cache in G71 (64KB) vs. G70 (48KB) shows up in circumstances or workloads like this:

bench-texfetch256.png


" As texture size increases, R520 and R580 hang on to the cached performance better, NVIDIA's chips falling back. G71 is faster than G70 at the same clocks, regardless of memory bandwidth available."

So why did Nvidia launch G71 if it doesn't increase performance substantially? Just for higher clock rates (thanks to going from 302 million transistors in G70 to 278 million in G71)?

Maybe some tweaks in G71, like FP16 blending, extra cache per quad, lower texture fetch latencies etc., together give a performance gain in games that isn't settled by benchmarks and wasn't visible in games at the time (2005/2006), but we could see some parallel in RSX today, since we have games like Heavenly Sword, Ratchet & Clank and Uncharted: Drake's Fortune that aren't rivalled by their "PC relative counterparts" at similar specs/clocks.

(Thanks a lot for your patience with my bad English.)
 
So why did Nvidia launch G71 if it doesn't increase performance substantially? Just for higher clock rates (thanks to going from 302 million transistors in G70 to 278 million in G71)?

The main reason for G71 was the smaller process, meaning a cheaper-to-produce, lower-power chip that could run at higher clock speeds.

In other words it allowed higher availability at a lower cost than the GTX512 while maintaining similar performance.

Maybe some tweaks in G71, like FP16 blending, extra cache per quad, lower texture fetch latencies etc., together give a performance gain in games that isn't settled by benchmarks and wasn't visible in games at the time (2005/2006), but we could see some parallel in RSX today, since we have games like Heavenly Sword, Ratchet & Clank and Uncharted: Drake's Fortune that aren't rivalled by their "PC relative counterparts" at similar specs/clocks.

You have a point there. I expect many PS3 games (and modern PC games) use FP16 blending. G71 should show a marked improvement over G70 in those games. That would indeed give RSX an advantage over, say, the GTX 512. Whether that's enough to overcome the clockspeed/ROP/memory bandwidth deficit is another question. And of course that advantage wouldn't translate over when comparing to G71 in the PC.
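
For anyone wondering what "FP16 blending" actually is in code terms: it's framebuffer blending into a half-float render target, the usual basis for HDR light accumulation. Below is a minimal OpenGL sketch of setting one up; it assumes an existing GL 3.x context and a loader such as GLEW, and is only meant to show the operation the ROPs have to support, not how any particular PS3 or PC engine uses it.

#include <GL/glew.h>
#include <stdio.h>

/* Create an RGBA16F (FP16) colour buffer and enable additive blending
 * into it. Blending into half-float targets is what "FP16 blending"
 * refers to; on hardware without it, HDR accumulation needs shader
 * workarounds. Assumes a current GL context and glewInit() done. */
GLuint create_fp16_target(int width, int height)
{
    GLuint fbo, tex;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0,
                 GL_RGBA, GL_HALF_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        fprintf(stderr, "FP16 framebuffer incomplete\n");

    /* Additive blend: each translucent/light pass accumulates into the
     * half-float buffer, exercising the ROPs' FP16 blend path. */
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    return fbo;
}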
 
...we could see some parallel in RSX today, since we have games like Heavenly Sword, Ratchet & Clank and Uncharted: Drake's Fortune that aren't rivalled by their "PC relative counterparts" at similar specs/clocks.

That could be discussed, since one would have to take into account what is done technically (and from the games I've seen and tested, I would say it has, without doubt). And having access to a 7900 GT (500/700MHz, 512MB), I have tested several multiplatform games where it has provided better IQ and better frame rates than the other versions. Though that is off topic.

So far at least... we'll see how the future turns out, keeping things equal.
 
Why would a devkit be any different from a PS3, GPU/RAM/CPU clock-wise, at this point, now that the final devkits have been released?

Easily. A devkit may lower the CPU/GPU clock speed by around 5-15% depending on the platform, or it can have the same spec as the real console. The idea is that if your code runs on a clock that's 5-15% lower, it will perform better on the real console. Besides, if the code runs badly you'll need to debug it first anyway, and debugging tools running at a clock 5-15% below the real console's don't really affect your debugging performance.
 
You are saying the devkit lowers the MHz of different components without the developers' knowledge? :???:

Or that they know about it... but what would make you think the devs don't want the full processing power available to them, to get an exact measurement of how the game will run in final form on the client's machine?

Well, this is taking some strange turns; better to PM some devs here for more info, because I feel we'll soon be talking about the moon landing conspiracy theory! :smile:
 
Or that they know about it... but what would make you think the devs don't want the full processing power available to them, to get an exact measurement of how the game will run in final form on the client's machine?

I think every dev needs to maximize console power through code tricks, patterns, memory footprint design, etc. Console makers, however, always want all games to perform their best on their platform. Decreasing the CPU/GPU clock rate of the devkit by 5-15% isn't a bad idea to achieve that target; at least it would ensure that any game that works on their devkits always performs better on the real console.
 
Easily. A devkit may lower the CPU/GPU clock speed by around 5-15% depending on the platform, or it can have the same spec as the real console. The idea is that if your code runs on a clock that's 5-15% lower, it will perform better on the real console.

There is absolutely no advantage to lowering the clock speed of the devkit.
Most devkits are actually more powerful than the real thing; PS3 devkits had 1GB of RAM in 2005.
 
I think every dev needs to maximize console power through code tricks, patterns, memory footprint design, etc. Console makers, however, always want all games to perform their best on their platform. Decreasing the CPU/GPU clock rate of the devkit by 5-15% isn't a bad idea to achieve that target; at least it would ensure that any game that works on their devkits always performs better on the real console.

Coming from someone with a devkit sat not too far away, I think it's safe to tell you that quite a few of your assumptions regarding PS3 specifics & devkit hardware are clearly incorrect..

Unless you have hardline knowledge of the facts, it really isn't worth coming to a board frequented by devs & spouting off about things you clearly are not the authority on..

akumajou said:
Just because some people who post on online forum boards happen to be game developers, it does not make them the official word on any game console's hardware specs, especially considering that they are game developers, not hardware engineers.
I find it extremely hard to believe someone like you akuma would come out with a statement like that..

Do you really believe someone can develop & optimise software to a fixed hardware specification without knowing in truth exactly what that specification is..?

:???:
 
PS3 devkits had 1GB of RAM in 2005

The 2005 devkit consisted of a PowerMac G5 @ 2.4GHz + 1GB DDR RAM + GeForce 6800 Ultra SLI, man. It wasn't a Cell BE @ 3.2GHz and an RSX @ 550MHz back then.
 
I have to ask of some people here, why are you in this forum? The board exists to discuss ideas and learn things, right? But some people are just ignoring the wealth of knowledge and experience to hang onto their own single-minded interpretation. "It doesn't make sense to me, ergo everyone else in the world is wrong and I won't listen to them." The RSX clockspeed discussion was had. It's over. Like I said.
Anyone who's actually interested, I tried to find the relevant threads. Start reading. I think the discussion began and ended here. Read the thread and stop wasting our time.
 
I find it extremely hard to believe someone like you, akuma, would come out with a statement like that..

Do you really believe someone can develop & optimise software to a fixed hardware specification without knowing in truth exactly what that specification is..?

:???:

So you find it extremely hard to believe that I would make a statement like that, why is that?

Sony released the OFFICIAL word on PlayStation 3 specs at E3 2005, no surprise.

"game developers" in online forums boards are saying that the specs changed yet no one comes forward and takes responsibility for their word.

Therefore the OFFICIAL word remains with Sony, because a "game developer" would have to take responsibility, and the media would publish this as news commentary on the allegation.

Sony would then have to make a response to the allegation, so the final word remains with Sony regardless of what you or I believe.

The fact that you have to search an online forum to look up and believe that the spec was changed is what makes believing in this "rumor" disturbing.

Then again, I am not surprised. I meant no disrespect to game developers on this forum, but what I am saying is correct. If someone feels better deleting my post to protect an irresponsible comment or rumor, then more power to you.
 
Sony doesn't have to do anything because they don't print the specs on the box and the people who need to know the specs sign NDAs.
 

It's as simple as this: developers are under NDA preventing the confirmation of a clockspeed shift downwards in the RSX. But they are not under NDA to re-affirm the 'official' specs, as these are quite public. Yet not one developer here or anywhere else on the web over the course of several years has done so - doesn't this clue the "believers" off in any way? I haven't followed a lot of the recent posts in console tech related to the PS3, PS4, or RSX because they are simply too much to stomach. Obviously myself and others vouching as proxy for the fact that RSX has been downclocked isn't enough for some folk; I can understand that, I think skepticism is healthy. I'll simply point to the lack of any word to the contrary - official or otherwise - as all the answer any reasonable person would need, two years into the launch. No dev or Sony employee would be castigated for re-affirming the 550MHz clockspeed.

And beyond even the discussion itself, who can care so much about those 50MHz anyway that we keep these (multiple) threads on the topic alive?
 
The 2005 devkit consisted of a PowerMac G5 @ 2.4GHz + 1GB DDR RAM + GeForce 6800 Ultra SLI, man. It wasn't a Cell BE @ 3.2GHz and an RSX @ 550MHz back then.

You've just conflated an early version of the 360 devkit (PowerMac G5) and an early version of the PS3 devkit (6800 SLI).
 
Console makers, however, always want all games to perform their best on their platform. Decreasing the CPU/GPU clock rate of the devkit by 5-15% isn't a bad idea to achieve that target; at least it would ensure that any game that works on their devkits always performs better on the real console.

Problem here is that developers run lots of extra debug code (asserts, memory leak detectors, buffer overflow checkers, various data structure sanity checks, etc) during the development cycle. On top of that the code is often compiled with a reduced (size and performance) optimization level to help the debugging (the optimizer removes lots of variables). All these debug support features make the game in development use considerably more CPU and memory than the retail game would. It's only logical to make the devkits a bit more powerful and especially give them more memory than the retail products. Otherwise software debugging would be impossible, and it's the main reason to have a devkit in the first place. Of course you can run most devkits in a hardware compatibility mode to make them as close to the real thing as possible (for final memory and performance testing).
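
To make the debug-overhead point concrete, here is a small C sketch of the kind of checks a development build carries and a retail build compiles away. The guard-byte scheme and macro names are purely illustrative, not any particular SDK's tooling.

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Development builds wrap allocations with guard bytes and sanity
 * checks; defining NDEBUG (retail) compiles the extra work away,
 * which is one reason dev builds need more CPU time and memory. */
#ifndef NDEBUG
#define GUARD_BYTES 16
#define GUARD_VALUE 0xAB
static void *dbg_malloc(size_t n)
{
    unsigned char *p = malloc(n + 2 * GUARD_BYTES);
    assert(p != NULL);
    memset(p, GUARD_VALUE, GUARD_BYTES);                   /* front guard */
    memset(p + GUARD_BYTES + n, GUARD_VALUE, GUARD_BYTES); /* back guard  */
    return p + GUARD_BYTES;
}
static void dbg_check(void *user, size_t n)
{
    unsigned char *p = (unsigned char *)user - GUARD_BYTES;
    for (size_t i = 0; i < GUARD_BYTES; ++i) {
        assert(p[i] == GUARD_VALUE);                   /* buffer underflow? */
        assert(p[GUARD_BYTES + n + i] == GUARD_VALUE); /* buffer overflow?  */
    }
}
#else
#define dbg_malloc(n) malloc(n)
#define dbg_check(p, n) ((void)0)
#endif

int main(void)
{
    char *buf = dbg_malloc(64);
    strcpy(buf, "devkit builds pay for checks the retail game never runs");
    dbg_check(buf, 64); /* would trip an assert on a buffer overflow */
    printf("%s\n", buf);
    return 0;
}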
 