The Official X1900 Reviews Thread

Discussion in '3D Hardware, Software & Output Devices' started by Geo, Jan 23, 2006.

  1. Jawed

    Legend

  2. trinibwoy

    trinibwoy Meh
    Legend

    R580 seems to be seriously "software limited". Only FEAR and 3dmark06 show marked improvements over GTX-512 SLI.

    Huh?
     
  3. Moloch

    Moloch God of Wicked Games
    Veteran

    I'd like to call it design limited ;)
     
  4. Kynes

    Newcomer

    "NVIDIA GeForce 7800 GTX 512 SLI is represented by Golden Sample cards from Gainward, that's why the frequencies are 580/1760 MHz instead of 550/1700 MHz."
     
  5. Dave Baumann

    Dave Baumann Gamerscore Wh...
    Moderator Legend

  6. Moloch

    Moloch God of Wicked Games
    Veteran

  7. KimB

    Legend

    Huh? Why would high res make any difference?

    The only thing of note that I see in those benchmarks is that neither card is performing below what you might expect at the highest resolutions.
     
  8. Moloch

    Moloch God of Wicked Games
    Veteran

Because it has about the same fillrate and bandwidth.
     
  9. KimB

    Legend

    But the per-pixel processing demands are independent of resolution, and the per-pixel memory bandwidth demands are slightly less at higher resolution.
     
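KimB's claim can be put in rough numbers: framebuffer traffic per pixel stays constant as resolution rises, so the total demand scales linearly but each individual pixel gets no "harder". The bytes-per-pixel and overdraw figures below are illustrative assumptions (32-bit color plus 32-bit Z, one read-modify-write each), not measured values for either card.

```python
# Back-of-the-envelope: per-pixel framebuffer traffic is constant across
# resolutions; only the pixel count grows.
BYTES_PER_PIXEL = 4 * 4  # color read+write, Z read+write (assumed)

def frame_bandwidth_gb(width, height, fps, overdraw=2.0):
    """Rough framebuffer bandwidth in GB/s (overdraw factor is assumed)."""
    pixels = width * height * overdraw
    return pixels * BYTES_PER_PIXEL * fps / 1e9

for w, h in [(1024, 768), (1600, 1200), (2048, 1536)]:
    total = frame_bandwidth_gb(w, h, fps=60)
    per_pixel = total * 1e9 / (w * h)  # constant: does not rise with res
    print(f"{w}x{h}: {total:5.2f} GB/s total, {per_pixel:.0f} B/s per pixel")
```

The per-pixel column comes out identical at every resolution, which is the whole point: a card that keeps up per-pixel at low res should keep up at high res, bandwidth permitting.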
  10. Sobek

    Sobek Locally Operating
    Veteran

This is the question I've been dying to get an answer to.

I have a Lanparty NF4 Ultra-D modded to an SLI-DR and I really want to find out, via definite testing, whether or not a Crossfire setup can function on an NF4 chipset. Has anyone heard any rumors or seen any decent threads on any enthusiast forums about it? I'm coming up dry in my search :(
     
  11. Moloch

    Moloch God of Wicked Games
    Veteran

But....
You wouldn't expect it to be more fillrate limited at higher res? :???:
     
  12. superguy

    Banned

Digit-Life calls it straight.

For the life of me I can't understand why ATI didn't go with at least a few more TMUs.

They might even have been better off going with 32 shaders instead of 48 in exchange for more TMUs, leaving an overall smaller chip that could have been clocked higher! As it is, the 48 shader processors are just useless much of the time due to TMU limits.

48 shaders is fine in itself, but they needed more TMUs to go with them.
     
  13. KimB

    Legend

Well, sure, more so than CPU-limited or vertex-processing-limited. But changing the resolution doesn't change the ALU:TEX ratio, and it doesn't dramatically change what work you do for each pixel. So the results Dave shows are exactly what I would expect, provided the highest resolutions are capable of making use of all of ATI's memory bandwidth optimizations (which wasn't the case previously... I don't know what the res limit is now...).
     
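The "resolution doesn't change the ALU:TEX ratio" point can be sketched with a toy cost model: the instruction mix is a property of the shader, so the bottleneck is the same at every resolution and only the pixel count scales. The instruction counts and unit rates below are made-up illustrations, not real R580 or G70 figures.

```python
# Toy model: the ALU:TEX balance of a frame is set by the shader,
# not by the resolution.
def frame_cost(width, height, alu_per_pixel, tex_per_pixel,
               alu_rate, tex_rate):
    """Return (alu_time, tex_time) in seconds under a simple peak-rate model.
    All rates are hypothetical, in operations per second."""
    pixels = width * height
    return (pixels * alu_per_pixel / alu_rate,
            pixels * tex_per_pixel / tex_rate)

# A hypothetical shader with a 3:1 ALU:TEX instruction mix.
for w, h in [(1280, 1024), (2048, 1536)]:
    alu_t, tex_t = frame_cost(w, h, alu_per_pixel=12, tex_per_pixel=4,
                              alu_rate=30e9, tex_rate=10e9)
    # Whichever time dominates at one resolution dominates at all of them.
    print(f"{w}x{h}: ALU {alu_t*1e3:.2f} ms, TEX {tex_t*1e3:.2f} ms, "
          f"ratio {alu_t/tex_t:.2f}")
```

The ALU/TEX time ratio prints the same at both resolutions; cranking the res just multiplies both sides by the same pixel count.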
  14. superguy

    Banned

And of course, by the time games catch up with the architecture, everybody who buys $600 cards will have long since moved on. We're talking G80 and the like being nearly around the corner; even six months is forever. So being forward-looking at the expense of today is simply a bad strategy here.
     
  15. Luminescent

    Veteran

Perhaps for the consumer who plays legacy titles, but not for the forward-looking developer, who will have to adjust his engine design to accommodate the increasing disparity between processor logic and memory constraints.
     
    #355 Luminescent, Feb 8, 2006
    Last edited by a moderator: Feb 8, 2006
  16. Moloch

    Moloch God of Wicked Games
    Veteran

    I see what you mean now :smile:
     
  17. Mintmaster

    Veteran

    Dude, what's your friggin problem?

    The GTX 512 hammers ATI in texture rate, and has more bandwidth too. Yet how many games does it win in? Doom 3 and Quake 4? Even there the win has absolutely nothing to do with texture units.

R580 is not a future-only design. It does really well today in many games. The mistake wasn't putting in too many shader units, but rather designing for such good dynamic branching, which won't appear in games for a while.
     
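Mintmaster's point can be checked with the paper specs: peak texture rate and memory bandwidth both favor the GTX 512, so texture units alone can't explain the benchmark pattern. The clocks and unit counts below are the commonly cited reference figures (the Gainward Golden Sample mentioned earlier in the thread runs 580/1760 instead of 550/1700).

```python
# Peak bilinear texel rate and memory bandwidth from reference specs.
cards = {
    # name: (core MHz, TMUs, effective mem MHz, bus width in bits)
    "7800 GTX 512": (550, 24, 1700, 256),
    "X1900 XTX":    (650, 16, 1550, 256),
}

for name, (core, tmus, mem, bus) in cards.items():
    texel_rate = core * 1e6 * tmus / 1e9      # Gtexels/s
    bandwidth = mem * 1e6 * (bus // 8) / 1e9  # GB/s
    print(f"{name}: {texel_rate:.1f} Gtex/s, {bandwidth:.1f} GB/s")
```

That works out to roughly 13.2 Gtex/s and 54.4 GB/s for the GTX 512 against 10.4 Gtex/s and 49.6 GB/s for the XTX, yet the GTX 512's wins cluster in the OpenGL titles rather than tracking that texture-rate gap.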
  18. KimB

    Legend

    Well, without AA the GTX 512 is usually ahead. So I don't think you can quite attribute ATI's frequent lead with AA to its shader processing rate.
     
  19. superguy

    Banned


    It does well in today's games, but not nearly well enough.

    If you go down the list, the GTX 512 wins what, 30% of the benchmarks? Sure, a lot of those are OpenGL, but the X1900 does not dominate it.

    And in other titles it maybe beats the 512 by 8-10% or so. Only in a few games, like FEAR, does it really stretch out.

    The 7900 at this rate will probably KILL it. That's another thing: sure, it looks good now, with no new competition.

    When you read "48 pipes" from a company that was competing with 16, I think most people were somewhat underwhelmed. Most reviews were as well, with many openly questioning ATI's strategy of spending so much die area for not that much improvement, so I'm hardly the only one.

    All I'm saying is, wouldn't 20 or 24 TMUs have given much greater performance across the board? Seems a no-brainer. Then ATI wouldn't have to trot out this "future proof" architecture line to defend themselves against the lackluster benches all the time.
     
  20. Tim

    Tim
    Regular

    Not from the reviews I have read. In ExtremeTech's review the XTX is ahead even without AA in all but one game (Quake 4). In AnandTech's review, the XTX is faster in all non-AA tests except Quake 4 and Black & White 2.

    http://www.extremetech.com/article2/0,1697,1914656,00.asp
    http://anandtech.com/video/showdoc.aspx?i=2679

    It's hardly a surprise that the XTX does "bad" against the GTX 512 in stencil-heavy games like Quake 4 and Doom 3 when no AA is used: NVIDIA gets double Z in all situations, but ATI needs AA to be enabled before double Z kicks in.
     
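Tim's double-Z point can be illustrated with peak Z-only fill rates, which matter a lot in the Z/stencil-only shadow passes of Doom 3 and Quake 4. The 16-ROP counts are the usual figures for both chips, but the "NVIDIA always doubles Z-only throughput, ATI only with AA on" factors simply follow the thread's claim; treat the whole calculation as a hedged sketch.

```python
# Peak Z-only fill under the thread's double-Z claim (assumption, not spec).
def z_fill_gsamples(core_mhz, rops, double_z):
    """Peak Z-only samples per second, in Gsamples/s."""
    return core_mhz * 1e6 * rops * (2 if double_z else 1) / 1e9

print("GTX 512, no AA :", z_fill_gsamples(550, 16, double_z=True))
print("XTX,     no AA :", z_fill_gsamples(650, 16, double_z=False))
print("XTX,     AA on :", z_fill_gsamples(650, 16, double_z=True))
```

Under those assumptions the GTX 512 pushes about 17.6 Gsamples/s of Z-only fill against the XTX's 10.4 without AA, while enabling AA flips the XTX to 20.8, which lines up with the XTX only losing the stencil-heavy titles in the no-AA runs.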