An 8600 GTS RSX instead of a 7800 GTX

Discussion in 'Console Technology' started by MBDF, Oct 27, 2008.

  1. MBDF

    Newcomer

    Joined:
    Dec 29, 2004
    Messages:
    175
    Likes Received:
    0
    Location:
    North Vancouver, Canada
    I'm curious... how would an 8600 GTS (32 shaders at 1500 MHz, 128-bit bus, 32 GB/s of bandwidth) have compared to the RSX we have today? I have a hunch it would have been slightly less powerful but more developer friendly... am I wrong in assuming that?
     
  2. Thowllly

    Regular

    Joined:
    Feb 6, 2002
    Messages:
    551
    Likes Received:
    4
    Location:
    Norway
    Don't know real-world numbers, but in theory the 8600 GTS has less than half the GFLOPS... It might be slightly more developer friendly, but swapping the RSX for something slower but more dev friendly would have been a horrible decision...
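
    A back-of-the-envelope sketch in Python, using the commonly quoted unit counts and clocks; exactly which ALU ops you count on these chips is debatable, so treat both figures as loose upper bounds rather than measurements:

    ```python
    # Theoretical programmable-shader GFLOPS, counting 2 flops per MADD lane.

    # RSX @ 500 MHz: 24 pixel pipes x 2 vec4 MADD ALUs, plus 8 vertex pipes
    # x (vec4 + scalar) MADD.
    rsx_gflops = (24 * 2 * 4 + 8 * 5) * 2 * 0.5       # ~232

    # 8600 GTS @ 1450 MHz shader clock: 32 scalar ALUs, MADD only (the
    # co-issued MUL was rarely usable in practice).
    g84_gflops = 32 * 2 * 1.45                        # ~93

    print(f"8600 GTS is ~{g84_gflops / rsx_gflops:.0%} of RSX")   # ~40%
    ```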

    The PS3's problem is not that its GPU isn't developer friendly enough, not by a billion miles; that's the very, very least of the PS3's problems...
     
  3. FirewalkR

    Regular

    Joined:
    Jul 13, 2007
    Messages:
    259
    Likes Received:
    0
    The only thing this thread will accomplish is to make me cry because of what could have been...

    (if only Sony had wanted to release an $800 console :lol:)

    Actually, over here in Europe, they did!
     
  4. Sc4freak

    Newcomer

    Joined:
    Dec 28, 2004
    Messages:
    233
    Likes Received:
    2
    Location:
    Melbourne, Australia
    But take a look at the Xbox 360: a far less powerful CPU than the Cell, but vastly more developer friendly.

    Hardware is useless if you can't write software for it. That said, sacrificing half your performance to make it a little friendlier seems a bit much.
     
  5. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,998
    Likes Received:
    1,503
    It may have been better overall. The 8600 GTS, I believe, is smaller than the 7800s. It's also more efficient with its shaders. It most likely would have lost a lot of fill rate, but then again we are targeting 720p and sometimes 1080p, so you don't need oodles of fill rate.

    I think the main failure of the PS3 is the lack of RAM compared to everything else in the system. They gave us this great storage capacity and then saddled us with 512 MB of RAM, with originally up to 64 MB of it taken up by the OS (now down to the 40s, I believe). Just by giving us 256 MB with the Cell and 512 MB with the GPU, we would easily see a large graphical difference between the 360 and the PS3.

    They could have gone with a smaller, cheaper CPU and used the silicon budget on a larger GPU. A modified 8800 GTX would have done wonders for the PS3.

    Of course, I still believe another 256 MB of RAM would have been the best bang for the buck. Use a smaller CPU and add in the extra RAM, and things would have changed from day one.
     
  6. Cheezdoodles

    Veteran

    Joined:
    May 24, 2006
    Messages:
    3,930
    Likes Received:
    24
    Are GFLOPS really that relevant a measure for games these days?
     
  7. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    21,001
    Likes Received:
    6,205
    Location:
    ಠ_ಠ
    In terms of shading throughput, yes. Put one way, theoretical max is arguably "less of a lie" for a fixed platform than on PC.

    Depends on your point of view and how low-level you go. I would *imagine* there are peculiarities, due to certain engineering decisions, that aren't normally seen from higher-level code - the point being that there were plenty of optimizations that occurred between the two generations.

    But I digress...

    And clearly, a G80-derived chip would not have been ready in time for mass production. The PS3 was supply constrained enough at the end of 2006 :!: But that's not the original poster's point...
     
  8. eastmen

    Legend Subscriber

    Joined:
    Mar 17, 2008
    Messages:
    9,998
    Likes Received:
    1,503
    Was it constrained because of the GPU or the Blu-ray drive?

    That is the question.
     
  9. TheAlSpark

    TheAlSpark Moderator
    Moderator Legend

    Joined:
    Feb 29, 2004
    Messages:
    21,001
    Likes Received:
    6,205
    Location:
    ಠ_ಠ
    My point was, with the PS3 already supply constrained, why bother making the problem worse with a GPU that wasn't as easily mass manufactured - you're talking about a lol-sized monolithic GPU compared to the G71-ish sized die.

    G80 - ~484 mm^2
    G71 - ~196 mm^2

    That also ignores the much higher power requirements of G80.
     
  10. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,583
    Likes Received:
    704
    Location:
    Guess...
    The 8600 GTS actually has 60% of the raw overall shader power of RSX. However, because of its unified, scalar design, it's quite a bit more efficient.

    G71 also uses some of that shading power for texture addressing, so overall I don't think the difference between the two in terms of pixel shader capability would be that huge.

    The GTS would obviously win when it comes to heavy vertex shading loads, as well as geometry setup and pixel fill rate limited situations (especially with 4xMSAA).

    It has a little less theoretical texturing capability (90% of the raw power of RSX) but roughly 40% more bandwidth directly to the GPU. However, taking Cell's bandwidth into account as well, it could be roughly a match there.
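
    A quick sanity check on those last two figures, assuming stock specs (both GPUs on a 128-bit bus; 8600 GTS GDDR3 at 2.0 GHz effective vs RSX's at 1.4 GHz; RSX's FlexIO read path to XDR often quoted at ~20 GB/s):

    ```python
    bus_bytes = 128 // 8                # 16 bytes per transfer
    gts_bw = bus_bytes * 2.0            # 32.0 GB/s local
    rsx_bw = bus_bytes * 1.4            # 22.4 GB/s local
    print(gts_bw / rsx_bw)              # ~1.43x local-bandwidth advantage
    print(gts_bw / (rsx_bw + 20))       # ~0.75: with XDR reads, RSX pulls level or ahead

    # Texturing: bilinear texels/clock x core clock
    print(16 * 0.675, 24 * 0.5)         # 10.8 vs 12.0 GTexel/s -> ~90%
    ```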

    Add in the additional capabilities of G8x over G7x (DX10, FP10 support, HDR+MSAA, etc.) and I think the 8600 GTS could have been just as effective in the PS3 as RSX is, if not more so.

    That's ignoring the cost and timing implications, of course.
     
  11. Rangers

    Legend

    Joined:
    Aug 4, 2006
    Messages:
    12,430
    Likes Received:
    1,228
    It seems to me consoles are the place where maximum efficiency is going to be derived. I'd guess RSX's greater shader power (which the 8600 GTS could not match) is close to fully tapped in the PS3.

    In other words, the 8600 GTS's greater efficiency is probably much more of a benefit on PC, where hardware adapts to the software and not vice versa. In the PS3, whatever GPU is in place will be mostly fully utilized. So one would think the edge goes to RSX and its greater raw specs.

    What might have been more interesting is, say, a "tweener" 9600 GT-type chip (64 shaders) in the PS3. We know the 8800 GTX was too large, but when you say "custom G80" I'm envisioning something halfway between the 8600 GTS and 8800 GTX that might have made a lot of sense for the PS3. Of course, it would have had to have been planned out well in advance.

    Edit: I've done some looking around at the 9600 GT die size and it appears it would have been too large. According to this website it's 225 mm^2 @ 65nm, with 505 million transistors (versus RSX's alleged ~300M). It would have been too beefy for a 90nm PS3, apparently.
     
    #11 Rangers, Oct 27, 2008
    Last edited by a moderator: Oct 27, 2008
  12. assen

    Veteran

    Joined:
    May 21, 2003
    Messages:
    1,377
    Likes Received:
    19
    Location:
    Skirts of Vitosha
    8600s are horrible, horrible GPUs. We bought ten of them for the office a year ago, and after the initial enthusiasm for the "new generation" people suddenly were very reluctant to "upgrade" from their 7800s and 7900s. And we don't have big monitors, so most of us run our games in resolutions similar to the consoles' 720p.

    An 8600 has 32 scalar units, which (if you ignore efficiency gains) are roughly equal to 8 float4 units, shared across vertex and pixel shading. Compare this to a 6600/7600, which has 8 float4 pixel units AND 3 float4 vertex units, and you see why an 8600 needs all of its increased clock speed and efficiency gains not to be embarrassed by a previous-generation GPU.
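
    Putting rough numbers on that - a crude throughput proxy using the unit counts above, stock clocks (1.45 GHz shader clock for the 8600 GTS, ~560 MHz core for a 7600 GT), and assuming perfect vec4 packing of the scalar ALUs:

    ```python
    g84_vec4_ghz = (32 / 4) * 1.45      # ~11.6 vec4-GHz, shared VS+PS
    g73_vec4_ghz = (8 + 3) * 0.56       # ~6.2 vec4-GHz (8 PS + 3 VS units)
    print(g84_vec4_ghz, g73_vec4_ghz)   # the clock bump does the heavy lifting
    ```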
     
  13. sebbbi

    Veteran

    Joined:
    Nov 14, 2007
    Messages:
    2,924
    Likes Received:
    5,293
    Location:
    Helsinki, Finland
    Raw hardware specs are always a good thing, but unified shaders provide considerably higher efficiency on closed platforms as well.

    In a single frame rendering, there are a lot of steps that benefit greatly from unified shaders:
    - Shadow map rendering: Heavily vertex shader bound (no pixel shader at all)
    - Post process effects (motion blur, hdr/bloom, depth of field, ambient occlusion, etc): Heavily pixel shader bound (no vertex shader at all)

    On a non-unified architecture, a lot of shader performance is wasted every frame on shadow map rendering and on post-process rendering steps: either the pixel shaders or the vertex shaders are just idling during that time. Real-time shadows and post-process effects are much more important features now compared to the last console generation. Most games use around 25-50% of their frame time to calculate these effects.
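
    A toy model of that waste, assuming a 30/70 vertex/pixel split of ALU capacity on the non-unified part and a made-up but plausible per-phase workload mix:

    ```python
    # (phase, fraction of the frame's ALU work, vertex share of that work)
    frame = [
        ("shadow maps",  0.25, 1.0),   # vertex-only
        ("main pass",    0.50, 0.3),   # mixed
        ("post-process", 0.25, 0.0),   # pixel-only
    ]

    def time_split(vs_cap=0.3, ps_cap=0.7):
        # each phase is limited by whichever shader pool is the bottleneck
        return sum(max(w * v / vs_cap, w * (1 - v) / ps_cap)
                   for _, w, v in frame)

    def time_unified():
        # all ALUs are always usable, so time is just the total work
        return sum(w for _, w, _ in frame)

    print(f"split: {time_split():.2f}, unified: {time_unified():.2f}")
    # split: 1.69, unified: 1.00 -> ~40% of split-shader time is idle ALUs
    ```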
     
  14. swaaye

    swaaye Entirely Suboptimal
    Legend

    Joined:
    Mar 15, 2003
    Messages:
    8,492
    Likes Received:
    598
    Location:
    WI, USA
    It does seem like 360 is the more effective game hardware. Oh gosh this statement will probably cause angst.... But really, there have been more than a few ports that worked out better visually on 360 and it does seem like the GPU would be the culprit here. Either that or the split vs. unified RAM setup.

    G7x has issues with things too, such as texture filtering and AF performance/quality. These are aspects that I would be surprised to find ATI Xenos having issues with, considering how things were with R5x0/R4x0 vs G7x/NV4x. But then again, Forza 2 had bilinear filtering and low (or nonexistent) AF, so who really knows.

    The 8600 is rather yucky aside from a pure ALU-utilization-efficiency perspective. However, you also need to consider that it was on 80nm and was still rather large - I think it's as big as G71. So while it may be more efficient in its usage of ALUs, it's not more efficient from a transistor perspective. The 7900 GTX is a lot faster than the 8600 GTS. I suppose you can also add in G84's hardware HD video acceleration as more efficiency, but I don't know if that's all that useful, because the PS3 has no problems with that even with RSX.

    7900 GTX: http://techreport.com/articles.x/9529/1
    8600 GTS: http://techreport.com/articles.x/12285
     
    #14 swaaye, Oct 27, 2008
    Last edited by a moderator: Oct 28, 2008
  15. fehu

    Veteran Regular

    Joined:
    Nov 15, 2006
    Messages:
    1,538
    Likes Received:
    494
    Location:
    Somewhere over the ocean
    What kind of office has 10 computers occupied playing games?
    Well... almost every one... but you understand what I'm trying to say...
     
  16. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    If Sony had to choose a desktop GPU to lightly modify into RSX, G71 was definitely the right choice. It wasn't until RV770 came around that either ATI or NVIDIA had a GPU with better bang for the buck, though a heavily modified non-DX10 G94 would also have been better than G71.

    The reason ATI made such a great GPU in XB360 was that they designed it specifically for a console. Eventually I was convinced by MfA that binned tiling would have been even better, provided that the dev did left-to-right object sorting, but EDRAM still made a lot of sense for this application. The high setup rate and compact, unified shader core were great as well.
     
  17. gongo

    Regular

    Joined:
    Jan 26, 2008
    Messages:
    582
    Likes Received:
    12
    Correct me if I am wrong: the RSX, while having the computational power of a 7900, actually performs closer to a 7600. With the latest drivers, the 8600 is faster than the 7600 and close to the 7900, beating it in newer games at 720p.

    Can those working closely with RSX comment: does RSX's extra computational power help, given what are otherwise 7600-like rendering specs? Would it not have been better if Sony had chosen a smaller die with fewer shader units and used the saved silicon for other graphics-boosting features like bandwidth and fill rate, rather than needing an 8600-style unified shader (USA) core?
     
  18. assen

    Veteran

    Joined:
    May 21, 2003
    Messages:
    1,377
    Likes Received:
    19
    Location:
    Skirts of Vitosha
    A game developers' office :)
     
    #18 assen, Oct 28, 2008
    Last edited by a moderator: Oct 28, 2008
  19. Cheezdoodles

    Veteran

    Joined:
    May 24, 2006
    Messages:
    3,930
    Likes Received:
    24
    RSX performance is not closer to the 7600. RSX is basically a 7900 with the same number of shader units etc., but with a 128-bit bus (and thus fewer ROPs, obviously). A 7900 has more bandwidth than the RSX, but in terms of shader power it's on par.

    A 7600 has fewer pixel shader units (12) and fewer vertex units (5) vs the RSX's 24 pixel shader units and 8 vertex units.
     
  20. pjbliverpool

    pjbliverpool B3D Scallywag
    Legend

    Joined:
    May 8, 2005
    Messages:
    7,583
    Likes Received:
    704
    Location:
    Guess...
    We should be a bit more specific when we say 7900, because RSX is definitely not on par with a 7900 GTX in terms of shader power. The 7900 GTX is a full 30% faster in that measure.
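
    Where the 30% comes from, assuming RSX is unit-for-unit a 7900-class part (24 PS / 8 VS) with only the clock differing:

    ```python
    print(650 / 500)    # 7900 GTX core clock vs RSX's shipped 500 MHz -> 1.3x
    ```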

    RSX is much closer to a 7800 GTX in terms of math capability. The closest comparison, however, would be a 7900 GT. Even then, RSX still has only half the fill rate and memory bandwidth, so performance would be lower in a lot of situations.

    I don't know how the 8600 GTS compares to the 7900 GT in more modern games, but I can't see it getting beaten by much. And in a console, the 8600 GTS's architectural advantages could be better utilised. FP10 rendering, for example (I'm pretty sure G8x supports that), would have been a nice addition for Sony, considering Xenos already has it.
     