RSX - Best guess at what 300 million trannies are for

Status
Not open for further replies.

Shifty Geezer

uber-Troll!
Moderator
Legend
Okay, on a 300 million transistor RSX chip, are all those transistors going to be standard Vector/Pixel pipe fare, or is there room for something surprising? Any chance of some dramatic and clever addition to graphics rendering, or is it just more of the same?
 
Shifty Geezer said:
Okay, on a 300 million transistor RSX chip, are all those transistors going to be standard Vector/Pixel pipe fare, or is there room for something surprising? Any chance of some dramatic and clever addition to graphics rendering, or is it just more of the same?

I wonder how many of those transistors go into the cache SRAM and logic that the GPU must have. Probably 16-32 million for 0.5-1MB of SRAM and a smidgen for the cache logic. That still leaves a lot of transistors...
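The cache estimate above can be sanity-checked with simple arithmetic. A sketch, assuming the standard 6T SRAM cell (6 transistors per bit) and ignoring decoders and sense amps; note the 16-32M figure quoted above works out closer to 4 transistors per bit:

```python
# Back-of-envelope transistor count for on-chip SRAM.
# Assumes a 6T cell (6 transistors per bit) and counts only the
# array itself, not decoders, sense amps, or tag/control logic.

def sram_transistors(size_bytes: int, transistors_per_bit: int = 6) -> int:
    """Transistor count for an SRAM array of the given size."""
    return size_bytes * 8 * transistors_per_bit

for kb in (512, 1024):
    t = sram_transistors(kb * 1024)
    print(f"{kb} KB @ 6T/bit: ~{t / 1e6:.1f}M transistors")
```

So 0.5-1MB of 6T SRAM is roughly 25-50M transistors; the post's lower 16-32M range only holds if you assume about 4 transistors per bit.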
 
Well, to start with, their old NV45 design already had 222 million transistors. Add to that 128-bit color precision and more shader execution logic, and it's easy to see how they wound up with 300 million transistors.
 
Tacitblue said:
Well, for sure the 128-bit pixel precision for lighting adds something. That demo impressed me.

Watching the Sony conference, Nvidia seemed to make a pretty big deal out of this. I thought previous cards already had 128-bit internal precision? Or is this something else?
 
Titanio said:
Tacitblue said:
Well, for sure the 128-bit pixel precision for lighting adds something. That demo impressed me.

Watching the Sony conference, Nvidia seemed to make a pretty big deal out of this. I thought previous cards already had 128-bit internal precision? Or is this something else?

True, current DirectX 9.x already supports 32-bit floating point (FP32) per component, so that's 128 bits (including alpha) per pixel...

Welcome to the hype (TM)...
 
In the conference they showed the difference the precision in this chip made in HDR, though. Basically you had the uber HDR on one hand and the nasty bleeding HDR we are seeing in the current gen on the other. Whatever they did, it looked HOT HOT HOT.

As for the 78M transisters, if the rumor that it is a 24/10 chip is true, that is 8 more PS units and 4 more VS units over NV40's 16/6. Throw in the extra 128-bit pixel precision and I think it is pretty easy to see those 78M transisters go bye bye.

That is the shocking part of the R500 150M transister rumor. Since RSX looks like a PC part, MAYBE the video encoding/decoding was left in (what a waste if so, with CELL), but who knows. But an R500 with LESS transisters than the R420 is surprising, especially with NV making a 35% jump from their HIGH END GPU. But maybe part of that is the difference between a custom chip and taking an off-the-shelf PC part.
 
Acert93 said:
In the conference they showed the difference the precision in this chip made in HDR, though. Basically you had the uber HDR on one hand and the nasty bleeding HDR we are seeing in the current gen on the other. Whatever they did, it looked HOT HOT HOT.

You mean the picture comparisons? That was just a non-HDR 32-bit rendering compared to an HDR 128-bit rendering.

I think the best way to see the difference is in the videos. HDR really makes things look so much more realistic and non-videogame-like, it's unbelievable. I never thought much of it before seeing these presentations; then it hit me.
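The gap being described here can be sketched numerically. A minimal illustration (with made-up luminance values) of why a classic 8-bit-per-channel buffer blows out bright highlights while a floating-point buffer keeps them:

```python
import numpy as np

# Illustrative only: scene luminances well above 1.0 (a sun glint, a
# bright window) survive in a float32 buffer but clamp in a classic
# 8-bit-per-channel buffer. That clamping is where the blown-out,
# "bleeding" highlights of fixed-point rendering come from.

hdr_values = np.array([0.25, 1.0, 4.0, 16.0], dtype=np.float32)

# 8-bit path: clamp to [0, 1], quantize to 256 levels, decode back.
ldr = np.round(np.clip(hdr_values, 0.0, 1.0) * 255.0) / 255.0

print("float32 buffer:", hdr_values)   # all four values preserved
print("8-bit buffer:  ", ldr)          # everything >= 1.0 collapses to 1.0
```

With float storage, a tone-mapping pass can later decide how to compress the 4.0 and 16.0 values for display; with 8-bit storage, that information is already gone.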
 
True, current DirectX 9.x already supports 32-bit floating point (FP32) per component, so that's 128 bits (including alpha) per pixel...

Welcome to the hype (TM)...
I think what they meant was FP32 blending; that's the impression I get, though perhaps not.
 
I really hope we'll get some information on this soon... :?


BTW; Acert93, PLEASE, it's a transistor, not transister. It's driving me crazy. :D
 
Phil said:
BTW; Acert93, PLEASE, it's a transistor, not transister. It's driving me crazy. :D

o_O Darn that Hooked-On-Ebonics!! My 2nd grade spelling teacher is rolling in her grave.

Seriously THANKS. I kept alternating every couple days... and now I know.
 
Noooooooooooooooooooooo!!!!!!!!!!! :devilish:

Is that clear enough for you? Hey, at least you didn't link to an actual pic this time, since I fell for it last time. I should have known better.
 
london-boy said:
I think the best way to see the difference is in the videos. HDR really makes things look so much more realistic and non-videogame-like, it's unbelievable. I never thought much of it before seeing these presentations; then it hit me.

I watched the Sony conference and was amazed at how realistic Fight Night looked and then saw DOA4 at the microsoft conference and thought "this looks like a game".

The difference is huge.
 
but the Xbox 360's total graphics subsystem is more like 240M transistors (150M + 90M), so it's not a 2x difference in T-count. The RSX still has considerably more logic, though.
 
Megadrive1988 said:
but the Xbox 360's total graphics subsystem is more like 240M transistors (150M + 90M), so it's not a 2x difference in T-count. The RSX still has considerably more logic, though.

Yes... a lot of questions left. Like: did Nvidia take the video coding features out of the RSX? What effect will unified shaders have? Are they more or less efficient per transistor?

All things even, i.e. both chips being equal size, we could discuss whether the use of eDRAM is a good tradeoff. But even with eDRAM there is a 60M transistor difference, and 150M in logic seems pretty large, even if we take into consideration that Nvidia chips tend to be a little bigger. Unless they left a lot of useless stuff in the chip due to time constraints (where are all those people arguing that this chip has been in the works for 24mo?), it would seem a given the RSX will be more powerful. Unless they spent a lot of real estate on features they cannot realistically use (e.g. 128-bit precision). Obviously I expect each GPU to have features the other does not, and without knowing much more about either chip's architecture, special features, or any real benchmarks, it is hard to say anything.
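The budget comparison in that paragraph is just arithmetic on the rumored figures from this thread (none of them confirmed specs):

```python
# Rumored transistor budgets from this thread (not confirmed specs):
rsx_total   = 300_000_000   # RSX, all logic (rumored)
xenos_logic = 150_000_000   # R500/Xenos shader core (rumored)
xenos_edram = 90_000_000    # daughter die: eDRAM + its logic (rumored)

xenos_total = xenos_logic + xenos_edram
print(f"Xenos total:      {xenos_total / 1e6:.0f}M")                 # 240M
print(f"Gap incl. eDRAM:  {(rsx_total - xenos_total) / 1e6:.0f}M")   # 60M
print(f"Logic-only gap:   {(rsx_total - xenos_logic) / 1e6:.0f}M")   # 150M
```

That is where the "60M even with eDRAM" and "150M in logic" figures in the post come from.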

On paper the R500 sounds very powerful based on the leak (which has been almost dead accurate): a good 2x or more in shader performance compared to current top-end GPUs. Similarly, the RSX is sounding like 2.5x as powerful. But it really is about the features... the HDR effects Nvidia showed made their CGI look out of this world.

In that regard, I would rather have a slightly slower GPU with killer features than just a faster one. It will be very interesting to see how this plays out.
 