An 8600GTS RSX instead of a 7800 GTX

MBDF

Newcomer
I'm curious... how would an 8600GTS (32 shaders at 1500 MHz, 128-bit bus with 32 GB/s bandwidth) have compared to the RSX we have today... I have a hunch it would have been slightly less powerful but more developer friendly... am I wrong in assuming that?
 
Don't know real world numbers, but in theory the 8600GTS has less than half the GFLOPS... It might be slightly more developer friendly, but swapping the RSX for something slower but more dev friendly would have been a horrible decision...

The PS3's problem is not that its GPU isn't developer friendly enough, not by a billion miles; that's the very, very least of the PS3's problems...
 
The only thing this thread will accomplish is to make me cry because of what could have been...

(if only Sony wanted to release a $800 console :LOL:)

Actually, over here in Europe, they did!!
 
Don't know real world numbers, but in theory the 8600GTS has less than half the GFLOPS... It might be slightly more developer friendly, but swapping the RSX for something slower but more dev friendly would have been a horrible decision...
But take a look at the Xbox 360. A far less powerful CPU than the Cell, but vastly more developer friendly.

Hardware is useless if you can't write software for it. That said, sacrificing half your performance to make it a little friendlier seems a bit much.
 
It may have been better overall. The 8600GTS, I believe, is smaller than the 7800s. It's also more efficient with its shaders. It most likely would have lost a lot of fill rate, but then again we are targeting 720p and sometimes 1080p, so you don't need oodles of fill rate.

I think the main failure of the PS3 is the lack of RAM compared to everything else in the system. They gave us this great storage capacity and then saddled us with 512 megs of RAM, with originally up to 64 MB of it taken up by the OS (now down to the 40s, I believe). Just by giving us 256 megs with the Cell and 512 megs with the GPU we would easily see a large graphical difference between the 360 and the PS3.


But take a look at the Xbox 360. A far less powerful CPU than the Cell, but vastly more developer friendly.

Hardware is useless if you can't write software for it. That said, sacrificing half your performance to make it a little friendlier seems a bit much.

They could have gone with a smaller, cheaper CPU and used the silicon budget on a larger GPU. A modified 8800GTX would have done wonders for the PS3.

Of course I still believe another 256 megs of RAM would have been the best bang for the buck. Use a smaller CPU and add in the extra RAM, and things would have changed from day one.
 
I have a hunch it would have been slightly less powerful

In terms of shading throughput, yes. Put one way, theoretical max is arguably "less of a lie" for a fixed platform than on PC.

but more developer friendly...

Depends on your point of view and how low-level you go. I would *imagine* there are peculiarities, due to certain engineering decisions, that aren't normally seen from higher-level code - the point being that there were plenty of optimizations that occurred between the two generations.

But I digress...

A modified 8800GTX would have done wonders for the PS3.

And clearly, a G80-derived chip would not have been ready in time for mass production. The PS3 was supply constrained enough at the end of 2006 :!: But that's not the original poster's point...
 
Was it constrained because of the GPU or the Blu-ray drive?

That is the question.

My point was, with the PS3 already supply constrained, why bother making the problem worse with a GPU that wasn't as easily mass manufactured - you're talking about a lol-sized monolithic GPU compared to the G71-ish sized die.

G80 - 420 mm^2
G71 - 196 mm^2

That is also ignoring the much much higher power requirements for G80 as well.
 
Don't know real world numbers, but in theory the 8600gts have less than half the Gflops... It might be slightly more developer friendly, but swapping the rsx for something slower but more dev friendly would have been a horrible decision...

The 8600GTS actually has 60% of the raw overall shader power of RSX. However, because of its unified, scalar design it's quite a bit more efficient.
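For what it's worth, that 60% figure is consistent with simple peak-FLOPS arithmetic. A minimal sketch, assuming commonly quoted (unofficial) numbers: RSX at 500 MHz with 24 pixel pipes of two vec4 MADD ALUs each plus 8 vec4+scalar vertex units, and the 8600GTS with 32 scalar ALUs at a 1.45 GHz shader clock dual-issuing a MADD and a MUL:

```python
# Peak programmable-shader GFLOPS from commonly quoted (approximate) specs.

def gflops_rsx(clock_ghz=0.5):
    # 24 pixel pipes x 2 vec4 ALUs x 4 lanes x 2 flops (MADD)
    pixel = 24 * 2 * 4 * 2 * clock_ghz
    # 8 vertex units x (vec4 + scalar) lanes x 2 flops (MADD)
    vertex = 8 * 5 * 2 * clock_ghz
    return pixel + vertex

def gflops_8600gts(shader_clock_ghz=1.45):
    # 32 scalar ALUs x 3 flops (MADD + MUL dual issue)
    return 32 * 3 * shader_clock_ghz

rsx = gflops_rsx()          # ~232 GFLOPS
g84 = gflops_8600gts()      # ~139 GFLOPS
print(f"RSX ~{rsx:.0f} GFLOPS, 8600GTS ~{g84:.0f} GFLOPS, ratio {g84 / rsx:.0%}")
```

Count only the MADD issue rate on G84 instead and the ratio drops to roughly 40%, which is where the "less than half the GFLOPS" figure earlier in the thread likely comes from.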

G71 also uses some of that shading power for texture addressing, so overall I don't think the difference between the two in terms of pixel shader capability would be that huge.

The GTS would obviously win when it comes to heavy vertex shading loads as well as geometry setup and pixel fill rate limited situations (especially with 4xMSAA).

It has a little less theoretical texturing capability (90% of the raw power of RSX) but over 50% more bandwidth directly to the GPU. However, taking Cell's bandwidth into account as well, it could be roughly a match there.

Add in the additional capabilities of G8x over G7x (DX10, FP10 support, HDR+MSAA, etc.) and I think the 8600GTS could have been just as effective in the PS3 as RSX is, if not more so.

That's ignoring the cost and timing implications, of course.
 
The 8600GTS actually has 60% of the raw overall shader power of RSX. However, because of its unified, scalar design it's quite a bit more efficient.

It seems to me consoles are the place where maximum efficiency is going to be extracted. I'd guess RSX's greater shader power (which the 8600GTS could not match) is close to fully tapped in the PS3.

In other words, the 8600GTS's greater efficiency is probably much more of a benefit on PC, where hardware adapts to the software and not vice versa. In the PS3, whatever GPU is in place will be mostly fully utilized. So one would think the edge goes to RSX and its greater raw specs.

What might have been more interesting is, say, a "tweener" 9600GT-type chip (64 shaders) in the PS3. We know the 8800GTX was too large, but when you say "custom G80" I'm envisioning something halfway between the 8600GTS and 8800GTX, which might have made a lot of sense for the PS3. Of course, it would have had to have been planned out well in advance.

Edit: I've done some looking around at the 9600GT die size and it appears it would have been too large. According to this website it's 225 mm^2 @ 65nm, with 505 million transistors (versus RSX's alleged ~300M). It would have been too beefy for the 90nm PS3, apparently.
 
8600s are horrible, horrible GPUs. We bought ten of them for the office a year ago, and after the initial enthusiasm for the "new generation" people suddenly were very reluctant to "upgrade" from their 7800s and 7900s. And we don't have big monitors, so most of us run our games in resolutions similar to the consoles' 720p.

An 8600 has 32 scalar units, which (if you ignore efficiency gains) are roughly equal to 8 float4 units across vertex and pixel shading. Compare this to a 6600/7600, which has 8 float4 pixel units AND 3 float4 vertex units, and you see why an 8600 needs all of its increased clock speed and efficiency gains not to be embarrassed by a previous-generation GPU.
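That arithmetic can be sketched out numerically. The clocks here are illustrative assumptions (a ~1.45 GHz shader clock for the 8600GTS, ~560 MHz core for a 7600GT), not figures from the thread; per clock the 8600 really does have fewer vec4-equivalents (8 vs 11), and only the shader clock closes the gap:

```python
# Naive vec4-MADD-equivalent rate, efficiency gains ignored as above:
# 32 scalar lanes fold into 8 vec4 lanes, vs 8 pixel + 3 vertex float4 units.

G84_SCALAR_ALUS = 32
G84_SHADER_CLOCK_HZ = 1.45e9   # 8600 GTS shader clock (assumed)

G7X_VEC4_UNITS = 8 + 3         # 8 float4 pixel + 3 float4 vertex units
G7X_CLOCK_HZ = 560e6           # 7600 GT core clock (assumed)

g84_rate = (G84_SCALAR_ALUS / 4) * G84_SHADER_CLOCK_HZ  # vec4-equivalents/s
g7x_rate = G7X_VEC4_UNITS * G7X_CLOCK_HZ

print(f"8600GTS: {g84_rate / 1e9:.2f}G vec4 ops/s")
print(f"7600GT : {g7x_rate / 1e9:.2f}G vec4 ops/s")
```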
 
In other words, the 8600GTS's greater efficiency is probably much more of a benefit on PC, where hardware adapts to the software and not vice versa. In the PS3, whatever GPU is in place will be mostly fully utilized. So one would think the edge goes to RSX and its greater raw specs.

Raw hardware specs are always a good thing, but unified shaders provide considerably higher efficiency on closed platforms too.

In a single frame's rendering, there are a lot of steps that benefit greatly from unified shaders:
- Shadow map rendering: heavily vertex shader bound (no pixel shader at all)
- Post-process effects (motion blur, HDR/bloom, depth of field, ambient occlusion, etc.): heavily pixel shader bound (no vertex shader at all)

On a non-unified architecture, a lot of shader performance is wasted every frame on shadow map rendering and on post-process steps: either the pixel shaders or the vertex shaders are just idling during that time. Real-time shadows and post-process effects are much more important features now compared to the last console generation. Most games use around 25-50% of their frame time to calculate these effects.
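A toy model of that waste (the frame-time split below is made up for illustration, roughly matching the 25-50% range above): with a fixed vertex/pixel partition, ALUs in the wrong partition idle during shadow and post-process passes, while a unified pool could keep them busy:

```python
# Toy model: fraction of total shader ALU capacity left idle over one frame
# on a split (vertex/pixel) design. All numbers are illustrative.

def idle_fraction(phases, vertex_share):
    """phases: list of (time_fraction, vertex_load, pixel_load), loads in [0, 1].
    vertex_share: fraction of total ALUs dedicated to vertex work."""
    idle = 0.0
    for t, v_load, p_load in phases:
        idle += t * (vertex_share * (1 - v_load) + (1 - vertex_share) * (1 - p_load))
    return idle

frame = [
    (0.25, 1.0, 0.0),  # shadow maps: vertex only, pixel units idle
    (0.50, 0.7, 0.9),  # main pass: both partitions mostly busy
    (0.25, 0.0, 1.0),  # post-process: pixel only, vertex units idle
]

# An RSX-like split: 8 of 32 vec4-ish units (25%) dedicated to vertex work
print(f"split design idle: {idle_fraction(frame, vertex_share=0.25):.0%}")
```

With these made-up loads, roughly a third of the split design's ALU-seconds go unused per frame; a unified scheduler could reassign them.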
 
It does seem like the 360 is the more effective game hardware. Oh gosh, this statement will probably cause angst... But really, there have been more than a few ports that worked out better visually on the 360, and it does seem like the GPU would be the culprit here. Either that or the split vs. unified RAM setup.

G7x has issues with things too, such as texture filtering performance/quality and AF performance/quality. These are aspects I would be surprised to find ATI's Xenos having issues with, considering how things were with R5x0/R4x0 vs G7x/NV4x. But then again, Forza 2 had bilinear filtering and low (or nonexistent) AF, so who really knows.

The 8600 is rather yucky aside from a pure ALU-utilization-efficiency perspective. However, you also need to consider that it was on 80nm and was still rather large - I think it's as big as G71. So while it may be more efficient in its usage of ALUs, it's not more efficient from a transistor perspective. The 7900GTX is a lot faster than the 8600GTS. I suppose you can also add in G84's hardware HD video acceleration as more efficiency, but I don't know if that's all that useful, because the PS3 has no problems with that even with RSX.

7900 GTX http://techreport.com/articles.x/9529/1

8600 GTS http://techreport.com/articles.x/12285
 
8600s are horrible, horrible GPUs. We bought ten of them for the office a year ago, and after the initial enthusiasm for the "new generation" people suddenly were very reluctant to "upgrade" from their 7800s and 7900s. And we don't have big monitors, so most of us run our games in resolutions similar to the consoles' 720p.

What kind of office has 10 computers occupied playing games?
Well... almost everyone's... but you understand what I want to say...
 
If Sony had to choose a desktop GPU to lightly modify into RSX, G71 was definitely the right choice. It wasn't until RV770 came around that either ATI or NVIDIA had a GPU with better bang for the buck, though a heavily modified non-DX10 G94 would have also been better than G71.

The reason ATI made such a great GPU in XB360 was that they designed it specifically for a console. Eventually I was convinced by MfA that binned tiling would have been even better, provided that the dev did left-to-right object sorting, but EDRAM still made a lot of sense for this application. The high setup rate and compact, unified shader core were great as well.
 
Correct me if I am wrong: while the RSX has the computational power of a 7900, its performance is actually closer to a 7600. With the latest drivers, the 8600 is faster than the 7600 and close to the 7900, beating it in newer games @ 720p resolution.

Can those working closely with RSX comment: does its extra computational power help when the rest of the rendering specs are 7600-like? Would it not have been better if Sony had chosen a smaller die with fewer shader units and used the saved silicon for other graphics-boosting features like bandwidth and fill rate, not requiring an 8600-style unified shader core?
 
Correct me if I am wrong: while the RSX has the computational power of a 7900, its performance is actually closer to a 7600. With the latest drivers, the 8600 is faster than the 7600 and close to the 7900, beating it in newer games @ 720p resolution.

Can those working closely with RSX comment: does its extra computational power help when the rest of the rendering specs are 7600-like? Would it not have been better if Sony had chosen a smaller die with fewer shader units and used the saved silicon for other graphics-boosting features like bandwidth and fill rate, not requiring an 8600-style unified shader core?

RSX performance is not closer to the 7600. RSX is basically a 7900 with the same number of shader units etc., but with a 128-bit bus (and thus fewer ROPs, obviously). A 7900 has more bandwidth than the RSX, but in terms of shader power it's on par.

A 7600 has fewer pixel shader units (16) and fewer vertex units (5) vs the RSX's 24 pixel shader units and 8 vertex units.
 
RSX performance is not closer to the 7600. RSX is basically a 7900 with the same number of shader units etc., but with a 128-bit bus (and thus fewer ROPs, obviously). A 7900 has more bandwidth than the RSX, but in terms of shader power it's on par.

A 7600 has fewer pixel shader units (16) and fewer vertex units (5) vs the RSX's 24 pixel shader units and 8 vertex units.

We should be a bit more specific when we say 7900, because RSX is definitely not on par with a 7900GTX in terms of shader power. The 7900GTX is a full 30% faster by that measure.
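If the clocks are taken as 650 MHz for the 7900GTX and 500 MHz for RSX (the commonly cited shipping clock; assumptions, not figures from the thread), that 30% falls straight out of the clock ratio over an otherwise identical 24-pipe shader array:

```python
# Same G71-class shader array, different clocks: peak shader throughput
# scales linearly with clock, so the advantage is just the clock ratio.
CLOCK_7900GTX_MHZ = 650
CLOCK_RSX_MHZ = 500   # commonly cited shipping clock (assumed)

advantage = CLOCK_7900GTX_MHZ / CLOCK_RSX_MHZ - 1
print(f"7900GTX peak shader advantage over RSX: {advantage:.0%}")  # 30%
```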

RSX is much closer to a 7800GTX in terms of math capability. The closest comparison, however, would be a 7900GT. Even then, though, it still has only half the fill rate and memory bandwidth, so performance would be lower in a lot of situations.

I don't know how the 8600GTS compares to the 7900GT in more modern games, but I can't see it getting beaten by much. And in a console, the 8600GTS's architectural advantages could be better utilised. FP10 rendering, for example (I'm pretty sure G8x supports that), would have been a nice addition for Sony, considering Xenos already has it.
 