Why isn't the R580 a lot faster in shader-intensive games?

Discussion in 'Architecture and Products' started by boltneck, Jan 25, 2006.

  1. Demirug

    Veteran

    Joined:
    Dec 8, 2002
    Messages:
    1,326
    Likes Received:
    69
    I must confess that I am a little bit surprised. You don't use an automatic test framework that checks a very large number of real game sequences against every new driver build and compares the results (visual and performance) with older driver versions? I had expected that this is something the computers at ATI do at night when nobody needs them.
     
  2. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    I think Reverend meant something more along the lines that synthetic shader benchmarks are useless in the wrong hands.

    By extension, if a reviewer can't interpret results from a synthetic application correctly, then I'm not so sure that his/her interpretations from real games would be of any actual use either.

    All that aside, you'll have an extremely hard time convincing me that either IHV is not in fact fine-tuning their drivers for synthetic applications, just as with any key application/game out there.

    Finally, you can't make them all happy, let alone both major IHVs. One of the two will always feel unfairly treated in the end.
     
  3. Jawed

    Legend

    Joined:
    Oct 2, 2004
    Messages:
    11,716
    Likes Received:
    2,137
    Location:
    London
    If you've seen pictures of the huge racks upon racks of test PCs that they have to use to test their software...

    Jawed
     
  4. Demirug

    Veteran

    Joined:
    Dec 8, 2002
    Messages:
    1,326
    Likes Received:
    69
    I have seen such pictures. This is one of the reasons why I am surprised. I cannot believe that they run all the tests over and over again by hand.
     
  5. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

    Joined:
    Jan 29, 2002
    Messages:
    14,090
    Likes Received:
    694
    Location:
    O Canada!
    Dave?

    What I'm getting at is a marked difference in the improvements from R520 to R580 between Rightmark, a PS2.x test, and ShaderMark V2.1, which uses PS3.0 shader code on PS3.0 boards. Likewise, R5xx running PS3.0 is still nowhere near the performance of the ATI PS2.0 path in Splinter Cell: Chaos Theory. Now, I haven't looked, but unless ShaderMark's shaders are just very short in all cases (or shorter than Rightmark's shaders), we'd probably expect to see similar increases. Likewise, unless Splinter Cell's ATI PS2.0 path is simply doing a lot less work than the 3.0 path, I would conclude that the optimiser isn't working as effectively with PS3.0 code as it is with PS2.0 (which, given the amount of time you've had to work on PS2.0 relative to PS3.0, wouldn't necessarily be much of a surprise).
     
  6. Ailuros

    Ailuros Epsilon plus three
    Legend Subscriber

    Joined:
    Feb 7, 2002
    Messages:
    9,511
    Likes Received:
    224
    Location:
    Chania
    Considering that I just read your interview with Eric D. and Richard H., this thread is a pleasant follow-up ;) Sorry for the OT.
     
  7. g__day

    Regular

    Joined:
    Jun 22, 2002
    Messages:
    580
    Likes Received:
    2
    Location:
    Sydney Australia
    ATi seem to be making a very interesting call here: code your 3D pixel shaders using more maths and fewer texture fetches if your target is an X1900 card. Another complication for folk dealing with 3D graphics engines.

    It reminds me of when 3DNow! and SSE were first introduced. What happens if NVidia takes the opposite approach (dumber but more raw power)? A 3D coder now has to ponder major algorithm changes and game design changes to cater for one variant of an IHV's latest video cards.

    If NVidia varies its strategy, then that could be yet another code path that has to fit into your 3D engine.

    And you need your shader compilers to be better optimised.

    I wonder what 3D game developers think about all this, and what their feedback to ATi and NVidia is on such pointed 3D hardware design traits.
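The ALU-vs-fetch trade described above can be sketched in a few lines. A classic example is a specular power term: shaders of that era often fetched pow(x, n) from a precomputed 1D lookup texture, while ALU-heavy hardware like R580 can simply compute it. The Python below is an illustrative model only; the table size and exponent are arbitrary choices, not from any shipping engine:

```python
import math

# Illustrative model of the ALU-vs-texture-fetch trade.
# Path 1 ("texture"): precompute the specular curve into a table
# and do one fetch per pixel. Path 2 ("ALU"): compute pow directly.

LUT_SIZE = 256
SPEC_POWER = 32.0

# "Texture" path: a 1D lookup table standing in for a 256-texel texture.
specular_lut = [math.pow(i / (LUT_SIZE - 1), SPEC_POWER) for i in range(LUT_SIZE)]

def specular_fetch(n_dot_h):
    # nearest-neighbour "texture fetch" into the lookup table
    idx = min(int(n_dot_h * (LUT_SIZE - 1)), LUT_SIZE - 1)
    return specular_lut[idx]

def specular_alu(n_dot_h):
    # pure-math path: a few ALU instructions, no texture unit touched
    return math.pow(n_dot_h, SPEC_POWER)

# Both paths approximate the same curve; which one is faster depends
# on the hardware's ALU:TEX balance (3:1 on R580 vs 1:1 on R520).
err = max(abs(specular_fetch(x / 100) - specular_alu(x / 100))
          for x in range(101))
```

The point of the sketch is that the two paths are interchangeable in output but not in cost: on a 3:1 part the math path leaves the scarce texture units free for actual material textures.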
     
  8. _xxx_

    Banned

    Joined:
    Aug 3, 2004
    Messages:
    5,008
    Likes Received:
    86
    Location:
    Stuttgart, Germany
    That would be senseless, and they know that too.
     
  9. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    It's not like ATi is forcing developers to switch their routines, right? ATi guesstimated the shader/texture usage in the near future and came up with 3:1.
    FEAR is a good example of this, right? It's more shader-intensive than other apps.
    It was running well on R520 in comparison to G70. Then NVIDIA released their magic 80-series driver, and now the R580 shows that hardware optimized for shader performance gives serious benefits over hardware that is not.
    I doubt that FEAR is optimized for ATi anyway (being TWIMTBP and all)... the R580's setup simply benefits more from shader-intensive software.

    id looks like the only developer still interested in upping the textures and texture operations.
     
    #29 neliz, Jan 25, 2006
    Last edited by a moderator: Jan 25, 2006
  10. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY
    I don't think the direction of GPU architecture will vary; both companies seem to be going in the same direction. This is seen with the shader tests. Both cards perform at similar levels, at least with current tests.

    This kind of shows itself in the pure speed tests in 3DMark06.

    http://www.xbitlabs.com/articles/video/display/radeon-x1900xtx_40.html

    There isn't much difference between either the 7800 512 or the X1900 XT. Then again, they might not be using long enough shaders, and longer ones might give the X1900 XT an advantage. But in the next year or so, games aren't expected to use shaders long enough to show this.
     
  11. Moloch

    Moloch God of Wicked Games
    Veteran

    Joined:
    Jun 20, 2002
    Messages:
    2,981
    Likes Received:
    72
    Well I would like some nice large textures :D

    What was the reason that doom 3/ quake 4 have low res textures and low poly counts?
     
  12. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    It was tailored for nVidia hardware ;) (rimshot [H] style)
     
  13. Razor1

    Veteran

    Joined:
    Jul 24, 2004
    Messages:
    4,232
    Likes Received:
    749
    Location:
    NY, NY

    Cost vs. performance vs. target system :wink: If Quake 4 used 1024x1024 texture sizes, it would need 1.0 GB of VRAM to hold the same number of textures it has now. Pretty much everything other than the characters uses 512x512.
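The arithmetic above checks out: quadrupling the texel count per texture quadruples the footprint. A quick sketch of the math, assuming uncompressed 32-bit RGBA and ignoring mipmaps (which add roughly another third):

```python
# Footprint of a square RGBA8 texture (uncompressed, no mipmap chain).
def texture_bytes(size, bytes_per_texel=4):
    return size * size * bytes_per_texel

mib_512  = texture_bytes(512)  / 2**20   # 1.0 MiB per texture
mib_1024 = texture_bytes(1024) / 2**20   # 4.0 MiB per texture

# A texture set that fills 256 MiB at 512x512 would need 1 GiB at
# 1024x1024 -- far beyond the 256-512 MiB cards of the era.
scale = mib_1024 / mib_512
```

With DXT compression the absolute numbers shrink (DXT1 uses 4 bits per texel, DXT5 uses 8), but the 4x scaling between the two resolutions is unchanged.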
     
  14. OpenGL guy

    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    2,357
    Likes Received:
    28
    The relatively low polygon counts were due to the cost of the shadowing algorithm. Stencil shadows are very expensive, and the more polygons you have, the more shadow volume geometry you need to compute. Basically, you need to extrude all the edges of an object's silhouette, so higher polygon counts mean more edges.
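The silhouette-extraction step described above can be sketched directly: classify each triangle as light-facing or not, then collect the edges shared by one triangle of each kind; those are the silhouette edges that get extruded into shadow-volume quads. A toy version follows (directional light and a closed, consistently wound mesh assumed; this is an illustration, not id's actual implementation):

```python
# Toy silhouette-edge finder for stencil shadow volumes.
# An edge shared by a light-facing and a light-averted triangle lies
# on the silhouette and must be extruded away from the light.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def silhouette_edges(verts, tris, light_dir):
    # classify every triangle against the light direction
    facing = []
    for i0, i1, i2 in tris:
        n = cross(sub(verts[i1], verts[i0]), sub(verts[i2], verts[i0]))
        facing.append(dot(n, light_dir) > 0)
    # map each undirected edge to the triangles sharing it
    edge_tris = {}
    for t, (i0, i1, i2) in enumerate(tris):
        for e in ((i0, i1), (i1, i2), (i2, i0)):
            edge_tris.setdefault(tuple(sorted(e)), []).append(t)
    # silhouette: shared edges whose two triangles face opposite ways
    return [e for e, ts in edge_tris.items()
            if len(ts) == 2 and facing[ts[0]] != facing[ts[1]]]

# A tetrahedron lit along +z: one face is lit, so its three edges
# form the silhouette.
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
tris = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
n_sil = len(silhouette_edges(verts, tris, (0.0, 0.0, 1.0)))
```

Doubling the polygon count roughly doubles both the facing tests and the candidate edges, which is exactly the scaling cost the post describes.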
     
  15. sireric

    Regular

    Joined:
    Jul 26, 2002
    Messages:
    348
    Likes Received:
    22
    Location:
    Santa Clara, CA
    Actually, yes, we do. There are racks of systems testing all the time. However, they can miss visual problems, and they don't test every single app out there at all resolutions and modes; that would take weeks or months per runtime. As well, not every single source code change is checked -- sometimes there are multiple changes between checks (assuming daily check-ins). So yes, we do have regressions in performance or quality at times.
     
  16. sireric

    Regular

    Joined:
    Jul 26, 2002
    Messages:
    348
    Likes Received:
    22
    Location:
    Santa Clara, CA
    Nope, he's still around. Not sure if he posts -- "Earth calling Dave, do you post?"
    Well, I don't know about the Rightmark/ShaderMark examples, personally. I know we ran lots of internal testing on shaders, and they ended up doing well. I do know that it will take some more time for the shader compiler to catch up on the 3-ALU thing, as some things can be compiled in more optimal ways for that ratio.

    As for Splinter Cell, from what I remember, it's not doing at all the same work under PS3.0 as under PS2.0, so they cannot be compared. I think if you take an X1800 or X1900 and force it down the PS2.0 path, it should boost performance. But I'm speaking from memory here (which I hate doing).
     
  17. Geo

    Geo Mostly Harmless
    Legend

    Joined:
    Apr 22, 2002
    Messages:
    9,116
    Likes Received:
    215
    Location:
    Uffda-land
    Reasonable to expect that X1600 performance benefits from those marginal improvements as well?
     
  18. sireric

    Regular

    Joined:
    Jul 26, 2002
    Messages:
    348
    Likes Received:
    22
    Location:
    Santa Clara, CA
    I think anything that will benefit the X1800 will benefit the X1600, but there's other stuff that should benefit the X1600 that should come up over time.
     
  19. Moloch

    Moloch God of Wicked Games
    Veteran

    Joined:
    Jun 20, 2002
    Messages:
    2,981
    Likes Received:
    72
    thanks.
     
  20. Nv500

    Banned

    Joined:
    Apr 30, 2004
    Messages:
    33
    Likes Received:
    0
    To be honest, there are so many games that can be labelled "shader-intensive", and it happens to be that FEAR is a game where ATI did quite well compared to its competitor, so maybe that's the reason why FEAR became an overplayed shader-intensive game in a fanatic-intensive forum?

    IMVHO, AOE 3 is more PS 3.0 shader-intensive, for two reasons:

    1. In-game settings, even at low res with AA off: simply switch shaders from high quality to ultra-high quality and you usually get a 50%-70% performance hit.

    2. As long as you believe the 3DMark05 or 06 PS shader tests are shader-intensive, the performance of those video cards in AOE3 at high quality agrees closely with the 3DMark05 or 06 raw PS shader benchmark results (i.e. the G70-512 is 2X faster than the R520, and the R580 is about as fast as the G70-512).
     
    #40 Nv500, Jan 25, 2006
    Last edited by a moderator: Jan 25, 2006