GeForce 7800 GTX Information: Implications for RSX

gmoran

Newcomer
http://www.theregister.co.uk/2005/06/01/nvidia_geforce_7800_gtx/

GeForce 7800 GTX: 24 pixel pipelines fed by eight vertex pipelines, allowing the chip to churn out 860m vertices every second and colour 10.32bn pixels in the same space of time; will feature the latest versions of Nvidia's CineFX and Intellisample engines, with DirectX 9.0 Shader Model 3.0 support and OpenGL 2.0 handled too; offers 64-bit floating point texture filtering and blending, as per the 6800 series.

So from this can we infer for RSX: 1.1 GVerts/sec; 13 GPixels/sec; and that 128-bit HDR is an RSX modification? Any other thoughts? And apologies in advance for any mistakes or false assumptions on my part, 3D hardware ain't my thing.
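The scaling I'm assuming is just linear in clock: the figures above imply a 430 MHz core for G70 (24 pipelines x 430 MHz = 10.32 Gpixels/s), and 550 MHz is the figure quoted for RSX, so:

860 Mverts/s x (550 / 430) ≈ 1.1 Gverts/s
10.32 Gpixels/s x (550 / 430) ≈ 13.2 Gpixels/s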
 
gmoran said:
http://www.theregister.co.uk/2005/06/01/nvidia_geforce_7800_gtx/


So from this can we infer for RSX: 1.1 GVerts/sec; 13 GPixels/sec; and that 128-bit HDR is an RSX modification?

Yes, if RSX is a 90 nm G70 at 550 MHz, that should be about right.
128-bit HDR is present only on RSX; G70 should have 64-bit precision.
I also wonder what improvements will be made to CineFX and Intellisample, and whether RSX will have other exclusive features (128-bit HDR apart).
 
gmoran said:
So from this can we infer for RSX: 1.1 GVerts/sec; 13 GPixels/sec
You can't extrapolate fillrate from the core clock, because the ROPs are synchronous to the memory clock.
G70 should have 16 ROPs (and 24 fragment pipelines), so from the leaked fillrate we get 10.32 Gpixels/s / 16 pixels per clock ≈ 650 MHz for the memory clock.
RSX has 700 MHz memory, so its fillrate would be improved by a small margin -> 11.2 Gpixels/s.
Unfortunately, RSX also has a 128-bit bus to its GDDR3 memory (G70 has a 256-bit bus), so it's not clear whether RSX can achieve that kind of fillrate without splitting the back buffer and Z-buffer over multiple buses (the VRAM bus and the FlexIO interface).
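Spelling out the numbers (16 ROPs and 700 MHz GDDR3 are my assumptions from the rumoured specs):

10.32 Gpixels/s / 16 pixels per clock = 645 MHz ≈ 650 MHz (roughly the memory clock)
16 ROPs x 700 MHz ≈ 11.2 Gpixels/s (RSX, same assumption)
128-bit bus x 1.4 GHz effective GDDR3 = 22.4 GB/s of local bandwidth
11.2 Gpixels/s x 4 bytes (a bare 32-bit colour write, no Z, no blending) ≈ 44.8 GB/s

which is why I don't see that fillrate happening without spreading the buffers over both the VRAM bus and FlexIO.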

and that 128-bit HDR is an RSX modification? Any other thoughts? And apologies in advance for any mistakes or false assumptions on my part, 3D hardware ain't my thing.
A 128-bit HDR back buffer format is a complete waste; not even big studios such as ILM use it, AFAIK.
Moreover, a 128-bits-per-pixel buffer would eat a huge amount of bandwidth, and RSX already seems a bit constrained on the memory bandwidth side.
In fact, I can't understand why Nvidia pushed 128-bit HDR rendering so hard at Sony's E3 conference; maybe they were just talking about 128-bit textures (filtering, blending...), not 128-bit render targets.
Or maybe they have developed a new color buffer compression scheme that works well with 128-bit buffers and makes them usable, but why anyone would want to work with them when 64-bit buffers are enough most of the time, I don't know...
There could be a third case... they want to make sure the SPEs can work on stuff RSX has rendered, and I bet the SPEs don't have specific instructions to convert a 64-bit vec4 to a 128-bit vec4 and vice versa, so Nvidia has made it possible to output 128-bit pixels in order to make the SPEs' life a lot easier.
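For a sense of scale (720p is just my assumed resolution):

1280 x 720 x 8 bytes (FP16 RGBA) ≈ 7.4 MB per colour buffer
1280 x 720 x 16 bytes (FP32 RGBA) ≈ 14.7 MB per colour buffer

so the fatter format doubles both the footprint in VRAM and the bytes moved per pixel compared with FP16.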
 
In fact, I can't understand why Nvidia pushed 128-bit HDR rendering so hard at Sony's E3 conference; maybe they were just talking about 128-bit textures (filtering, blending...), not 128-bit render targets.

Because it sounds more impressive. It most likely won't be used at all, and I really think it's part of the G70 refresh.
 
jvd said:
In fact, I can't understand why Nvidia pushed 128-bit HDR rendering so hard at Sony's E3 conference; maybe they were just talking about 128-bit textures (filtering, blending...), not 128-bit render targets.

Because it sounds more impressive. It most likely won't be used at all, and I really think it's part of the G70 refresh.

More likely because it was something they were fairly certain ATI didn't support.
 
ERP said:
jvd said:
In fact, I can't understand why Nvidia pushed 128-bit HDR rendering so hard at Sony's E3 conference; maybe they were just talking about 128-bit textures (filtering, blending...), not 128-bit render targets.

Because it sounds more impressive. It most likely won't be used at all, and I really think it's part of the G70 refresh.

More likely because it was something they were fairly certain ATI didn't support.

So reading between the lines, is it likely to be used in-game?
 
Jaws said:
So reading between the lines, is it likely to be used in-game?
If you're going to output your stuff to external DRAM, it's likely a 128-bit format will not be used.
 
nAo said:
Jaws said:
So reading between the lines, is it likely to be used in-game?
If you're going to output your stuff to external DRAM, it's likely a 128-bit format will not be used.

I really can't see a reason you would want to use 128-bit output.
 
ERP said:
I really can't see a reason you would want to use 128-bit output.
A remote hypothesis: I want to render stuff directly into the SPEs' local store or the PPE's L2 cache.
Once it's there, the SPEs could work on 128-bit data without having to handle a format conversion (from 16-bit FP to 32-bit FP).
I doubt the SPEs have native support for 16-bit FP numbers, even though it should be possible to convert a 16-bit FP number to a 32-bit FP number with some integer math ;)
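To be concrete, that conversion is just the standard rebias-and-shift bit trick; a minimal scalar C sketch of fp16 -> fp32 (plain C purely for illustration, not SPE code, and the function name is mine):

#include <stdint.h>
#include <string.h>

/* Minimal sketch: convert an IEEE 754 half (fp16) to a float (fp32) using
   only integer operations. Handles normals, subnormals, zero, inf and NaN. */
static float half_to_float(uint16_t h)
{
    uint32_t sign = (uint32_t)(h & 0x8000u) << 16; /* sign moves to bit 31 */
    uint32_t exp  = (h >> 10) & 0x1Fu;             /* 5-bit biased exponent */
    uint32_t mant = h & 0x3FFu;                    /* 10-bit mantissa */
    uint32_t bits;

    if (exp == 0x1Fu) {                            /* inf or NaN */
        bits = sign | 0x7F800000u | (mant << 13);
    } else if (exp != 0) {                         /* normal: rebias 15 -> 127 */
        bits = sign | ((exp - 15 + 127) << 23) | (mant << 13);
    } else if (mant == 0) {                        /* +/- zero */
        bits = sign;
    } else {                                       /* subnormal: renormalise */
        uint32_t e = 127 - 15 + 1;
        while ((mant & 0x400u) == 0) { mant <<= 1; e--; }
        bits = sign | (e << 23) | ((mant & 0x3FFu) << 13);
    }

    float f;
    memcpy(&f, &bits, sizeof f);                   /* reinterpret the bits */
    return f;
}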
 
nAo said:
ERP said:
I really can't see a reason you would want to use 128-bit output.
A remote hypothesis: I want to render stuff directly into the SPEs' local store or the PPE's L2 cache.
Once it's there, the SPEs could work on 128-bit data without having to handle a format conversion (from 16-bit FP to 32-bit FP).
I doubt the SPEs have native support for 16-bit FP numbers, even though it should be possible to convert a 16-bit FP number to a 32-bit FP number with some integer math ;)

I would have thought that even if this were the case, the overhead of rendering in 128 bits would outweigh the cost of the SPEs converting the format, especially since you've got 7 of the things.
 
Hmm - maybe the 128-bit HDR just means that all calculations are done at 32-bit FP precision? Also, apart from "HDR"... is there no use for (small) 128-bit render targets intended to contain vertices or vertex attributes?
 
ERP said:
I would have thought that even if this were the case, the overhead of rendering in 128 bits would outweigh the cost of the SPEs converting the format, especially since you've got 7 of the things.
I don't see much overhead; internally, pixels are already processed as 128-bit entities.
I don't think RSX supports blending on 128-bit render targets, so outputting 128-bit pixels (over multiple clocks) shouldn't need much hardware support at all (small changes in the ROPs). There's a lot of bandwidth between RSX and CELL; we should find some use for it :)
 
nAo said:
ERP said:
I would have thought that even if this were the case, the overhead of rendering in 128 bits would outweigh the cost of the SPEs converting the format, especially since you've got 7 of the things.
I don't see much overhead; internally, pixels are already processed as 128-bit entities.
I don't think RSX supports blending on 128-bit render targets, so outputting 128-bit pixels (over multiple clocks) shouldn't need much hardware support at all (small changes in the ROPs). There's a lot of bandwidth between RSX and CELL; we should find some use for it :)

Actually, my assumption is that RSX does support 128-bit blends, something I have been told requires a HUGE transistor investment.

All the cost that matters in this case is in external bandwidth: 128 bits = 4x the bandwidth. I guess if you have REALLY complex shaders then it might not be an issue, but I don't see it.
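Rough numbers behind that (the 22.4 GB/s local figure is my assumption from the rumoured 128-bit / 700 MHz GDDR3 setup):

RGBA8: 4 bytes/pixel
FP16 RGBA: 8 bytes/pixel
FP32 RGBA: 16 bytes/pixel -> 4x an RGBA8 write, 2x an FP16 one

Even at a modest 1 Gpixel/s of real throughput, FP32 colour writes alone are ~16 GB/s out of roughly 22.4 GB/s of local GDDR3, which is why it only stops mattering once the shader, not the ROP, is the bottleneck.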
 
ERP said:
Actually, my assumption is that RSX does support 128-bit blends, something I have been told requires a HUGE transistor investment.
Whoever told you that is right; it would have a huge cost.
A cost that doesn't seem like a good trade-off to me; what does 128-bit blending buy us?
Well... if the RSX ROPs can blend on 128-bit surfaces, maybe they halved the ROP count, from 16 to 8...
 
Let me float my theory again... What do you guys think of using the same functional units to do color blends (ROP ops) and texture filtering (then load-balancing between the two)? Seems like one should be able to save quite a few FP32 MACs this way...
 
Well, looks like in 3 weeks we'll see what the 7800 and possibly RSX will look like... (from GameSpot)

Nvidia unveiling next-gen GPU June 21

Graphics manufacturer to present its new chip and give live demonstrations at San Francisco launch event in three weeks.

Nvidia has revealed that it will announce its next-generation GPU on June 21 at a launch event in San Francisco. An invitation sent to Nvidia partners, analysts, and members of the press promises invitees an "evening of full-throttle entertainment" filled with "revolutionary technologies, hot 3D content, and exciting gaming action as we unveil our next-generation GPU."
Festivities will open with cocktails and a case-mod contest before the start of the scheduled press conference with live product demonstrations. The evening will conclude with a reception including hands-on product demonstrations and a Battlefield 2 LAN Party. If Nvidia's next-gen GPU launch plans sound familiar, it might be because Nvidia announced its current generation GeForce 6 series in a similar, but more public fashion, at a GeForce LAN Party in San Francisco last year.

Nvidia has indicated in the past that its next-generation desktop GPU will be similar to the GPU designed for the Sony PlayStation 3.
 
This is surprising to me, because of all the noise that RSX isn't finished and that they only had SLI GeForce 6s in the dev kits. Could it be that they had G70s in the dev kits?

I'm assuming G70 = some chips available, and RSX = basically none available yet.
 