Why Does ATI Get Beat Down on OGL

Discussion in 'Architecture and Products' started by memberSince97, Oct 9, 2005.

  1. AndrewM

    Newcomer

    Joined:
    May 28, 2003
    Messages:
    219
    Likes Received:
    2
    Location:
    Brisbane, QLD, Australia
    It'd actually be because they probably used NV_vertex_array_range, which is supported all the way back to the GeForce 1 (not an Xbox feature, but a way to DMA-transfer geometry to the video card).

    I have no idea if they updated the engine from NWN to KOTOR to support the newer ARB_vertex_buffer_object extension.
     
  2. Mintmaster

    Veteran

    Joined:
    Mar 31, 2002
    Messages:
    3,897
    Likes Received:
    87
    Note that I used past tense. Some of the data in this thread is changing my mind.

    Yeah, but Doom3 is a special case, remember? It doesn't matter how "fair" Carmack is to each IHV, his algorithms are simply not suited towards ATI hardware. The deficiency in ATI's OpenGL drivers can only be reliably found in games without stencil shadows.

    For other games, I think there may be a preference. Don't people use NV's register combiners a lot? I seem to remember one game (NWN?) only enabled shiny water on NV cards.

    I dunno, maybe we just need to give ATI some time. In the R300 days it seemed they were doing relatively okay in OpenGL, and even R420 wasn't that bad (see my previous post for graphs). It was hanging around the 6800.
     
  3. jb

    jb
    Veteran

    Joined:
    Feb 6, 2002
    Messages:
    1,636
    Likes Received:
    7
    I am not so sure that's the case. We have case after case of developers using NV-specific calls in the past. We had it in games like PF and NWN, where they could have used ARB calls but chose NV ones. I remember even the old OpenGL benchmarks (DroneZ, for example) used NV calls even though the 8500 cards supported the same features. I have always chalked it up to NV being "better" or "first" with OpenGL drivers. Not sure if that's still the case now, but old habits tend to die hard....
     
  4. Tim Murray

    Tim Murray the Windom Earle of mobile SOCs
    Veteran

    Joined:
    May 25, 2003
    Messages:
    3,278
    Likes Received:
    66
    Location:
    Mountain View, CA
    Kind of the same thing I was talking about.

    So... yeah... any synthetic benchmarks around? Anywhere? Sometime? Just simple things like fillrate and the like?
     
  5. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    It seems like ATI fixed it in Cat 5.9, though they don't say what.
    LucasArts has deleted all references to the problem on their support site as well..

    Anyway, "Disable Vertex Buffer Objects=1" was what was needed on ALL ATI cards to run KotOR and KotOR 2 smoothly. It's just quite obvious that the developer of this TWIMTBP game never ever thought about using an ARB path for these games..
     
  6. Richard

    Richard Mord's imaginary friend
    Veteran

    Joined:
    Jan 22, 2004
    Messages:
    3,508
    Likes Received:
    40
    Location:
    PT, EU
    So, in an effort to prove ATi's OGL drivers really are good you have to benchmark with a game that's suited towards ATi's hardware?

    Btw: "Raja Koduri is the senior architect in the hardware design group at ATI. He's responsible for system performance verification and development of various performance tools and graphics technologies inside ATI." http://www.pcwelt.de/know-how/extras/107080/index2.html

     
  7. Rolf N

    Rolf N Recurring Membmare
    Veteran

    Joined:
    Aug 18, 2003
    Messages:
    2,494
    Likes Received:
    55
    Location:
    yes
    I could yell "here", but really, don't bother. There's nothing to see.
    ATI cards do achieve their theoretical peaks in OpenGL just fine. It's not as if the chips are downclocked when the OpenGL driver is running or anything.

    This whole issue with "poor ATI OpenGL performance" is rather irrational IMO. A Radeon 9800 Pro still trounces a GeForce 6200 in e.g. Quake 3, as it should given its raw pixel and vertex throughput and bandwidth. That shouldn't be the case if NVIDIA really had that magically superior OpenGL performance.
     
  8. stepz

    Newcomer

    Joined:
    Dec 11, 2003
    Messages:
    66
    Likes Received:
    3
    A bit late with this, but any advantage that NV has with the double z fillrate should be gone when MSAA is used: both ATI and NV chips have 2 z-units per pipe in that case. So if one wants to know what percentage of the performance difference is due to double z, one would look at the difference in benchmarks with AA on and off.
     
  9. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Anandtech's extended performance article?
    bf2:
    http://www.anandtech.com/video/showdoc.aspx?i=2556&p=2



    d3:
    http://www.anandtech.com/video/showdoc.aspx?i=2556&p=4
     
    #69 neliz, Oct 12, 2005
    Last edited by a moderator: Oct 12, 2005
  10. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    How is that irrational? It's a reaction to a large number of benchmarks. I mean, that's pretty much the definition of rational, isn't it? A belief brought about and supported by real-world evidence?
     
  11. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    So... why does nV's D3D driver suck this much, losing in all the high-end D3D benchmarks?
     
  12. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    God. At least nVidia's performance vs. ATI in Direct3D follows the memory bandwidth and fillrate of the respective boards much more closely. In OpenGL, ATI lags behind.
     
  13. neliz

    neliz GIGABYTE Man
    Veteran

    Joined:
    Mar 30, 2005
    Messages:
    4,904
    Likes Received:
    23
    Location:
    In the know
    Even the benchmarks show that drivers for the R520 in general are not all that well optimized; on some sites, even half of the D3D benchmarks show it unable to beat an X850 XT PE, which points to driver problems (at least for the XL).

    Focusing on OGL alone as the problem is short-sighted... they just don't provide the same resources for game-specific optimizations. Doom3 is an example of resources providing better performance in a specific game... they just need to pump money into devrel to improve out-of-the-box performance..
     
  14. Rolf N

    Rolf N Recurring Membmare
    Veteran

    Joined:
    Aug 18, 2003
    Messages:
    2,494
    Likes Received:
    55
    Location:
    yes
    Dunno about the real-world evidence. See Quake 3, MDK2, Serious Sam 1st and 2nd Encounters or whatnot. That's OpenGL, too, and ATI is very competitive there. Wouldn't one single falsification suffice to shoot down a general conclusion? That's how it works in the world of maths :)

    People look too hard for generalizations, and while they make life less complex they are not always useful. ATI not being competitive in Doom 3 does not mean that they stink in OpenGL. It means just the obvious: they're not that good at Doom 3. If they lost out everywhere and there's no explanation on the hardware level, there'd be a point. I don't see that yet.

    I'd rather look at this case by case than quickly jumping to conclusions just for the sake of having something that's easy to remember ("ATI+OpenGL=teh suq") but not necessarily true.

    Doom 3: stencil fillrate is NVIDIA's stronghold, plain and simple. Also ATI's hierarchical Z implementation doesn't like the depth test function varying too much over the course of a frame.

    Riddick: soft shadows with PCF. Wouldn't surprise me at all if the game took advantage of NVIDIA's hardware acceleration, which the Radeon line lacks (dunno if the R5xx series has it; even if it does, it might require a game patch to see it).

    Etc.

    Then there's the whole issue of texture filtering "optimizations" and related tricks. IIRC one of the recent Catalysts claimed a huge performance increase in IL2 just by forcing the cloud textures to a compressed format. Doing such things is not truly improving the OpenGL driver, but it makes some games run faster.
    Who has how many of these "optimizations" in place, and what are the gains? This issue might pretty much bork up any comparison on its own.

    And lastly, some people coming off the NV_vertex_array_range path just don't get it. NVIDIA supports some rather peculiar usage models of VBO (allocate buffer object, lock it, fill it, render it once and throw it away again; might as well use immediate mode instead), probably because of their VAR legacy model. ATI drivers don't support such stuff all that well and rather go for a purer VBO model (fat storage objects are for reuse).
    Both approaches have some benefits over the other, and they are somewhat exclusive. ATI's model bites them more often than not. Btw, if anyone cares, technically it is the correct one IMO. VBOs are overkill and unnecessarily complex for the "one shot" usage. Different methods already exist for this case; they use less memory, are more portable, and are equally limited by AGP/PCIe bandwidth. Many developers fancy VBOs so much that they do it anyway, and it always comes back to bite ATI's reputation.

    See Tenebrae and NWN. The Tenebrae technicalities wrt unsatisfactory VBO performance on ATI hardware were discussed on these boards, but I can't find the topic anymore. It might be buried in the old T&H archives. I'm pretty certain it was the issue I just tried to describe.

    And finally, there are -- gasp! -- things that ATI's OpenGL driver handles more efficiently than NVIDIA's OpenGL driver. I've seen a Radeon 9200/64MB soundly beat a Geforce 3/64MB because the driver coped much better with many (thousands) small texture objects which were constantly shuffled around and refilled. This is just one scenario, and it does not imply that ATI's OpenGL drivers are better than NVIDIA's, but instead only that this specific case works better. The truth is to be found in the details more often than not IMO.
     
  15. Bob

    Bob
    Regular

    Joined:
    Apr 22, 2004
    Messages:
    424
    Likes Received:
    47
    VARs are reusable. They're not very efficient because you only have one giant buffer that needs to be locked (and thus you can't really parallelize GPU and CPU work), but it's certainly not as bad as you make it out to be.

    You can implement VAR if you already support VBO: just allow one single giant VBO that is perpetually mapped.

    Also, VAR is no longer the fastest way to render vertices for real workloads on NV hardware; VBOs are.
     
  16. KimB

    Legend

    Joined:
    May 28, 2002
    Messages:
    12,928
    Likes Received:
    230
    Location:
    Seattle, WA
    All of these games are quite old, and thus much less interesting.

    Have you even been reading this thread?

    And they are losing out everywhere in benchmarks that were done, oh, this year with reasonably recent games. Every OpenGL benchmark that people have thrown at the X1x00 cards has put these cards 20%-40% behind where one would expect them based on their Direct3D performance.

    Since Riddick doesn't use shadow buffers, it doesn't use PCF. And besides, nobody benchmarks with that rendering mode enabled anyway.

    And if games are using it, why isn't ATI optimizing for this case?

    One would expect this to happen every once in a while. But until it happens in games, will anybody care?
     
  17. Sharkfood

    Regular

    Joined:
    Feb 7, 2002
    Messages:
    702
    Likes Received:
    11
    Location:
    Bay Area, California
    One thing that I find dramatically interesting is how the reverse does not apply, at least not on more popular nerd forums..

    The reverse being Half-Life 2. ATI has always been a leader there, but the reverse ideology ("NVidia's Direct3D drivers blow goats!") never took hold. As you mentioned, there are plenty of OpenGL games (mostly older) where ATI competes quite well, but the focus is on making Doom3 a scapegoat to create an "ATI sucks at OGL" ideology.

    Well, by the same token, doesn't NVIDIA suck at Direct3D then?
     
  18. bdmosky

    Newcomer

    Joined:
    Jul 31, 2002
    Messages:
    178
    Likes Received:
    48
    Since when did Nvidia lose in almost every Direct3D app? I thought it was pretty even there. Nvidia wins some, ATI wins some.
     
  19. trinibwoy

    trinibwoy Meh
    Legend

    Joined:
    Mar 17, 2004
    Messages:
    12,059
    Likes Received:
    3,119
    Location:
    New York
    Look at the X800XL vs 7800GT numbers in the most recent Direct3D and OpenGL apps for your answer.
     
  20. SugarCoat

    Veteran

    Joined:
    Jul 17, 2005
    Messages:
    2,091
    Likes Received:
    52
    Location:
    State of Illusionism

    General rule of thumb is that ATI has always done very well following through on the DirectX APIs and making them run especially well on their cores. The ATI D3D dominance thing really got popular in the 9800 vs. FX era; considering the FX was trash at D3D (at least DX9), it caught on. At the X800 launch, before Nvidia had released matured drivers, the X800 noticeably thrashed Nvidia's equivalent offerings, especially when AA/AF was applied. That basically went away after a few months of driver revisions, but the reputation still sticks.

    ATI still does tend to handle D3D better, but it's not because of something Nvidia lacks in comparison. Most of the time, if they lose heavily, it's solved later in drivers (the waiting sucks). Nvidia's D3D has always been acceptable, before and after the FX that is. I think it's down to how the core is built. EQ2 and Far Cry support this theory and seem to benefit from a much simpler pipeline design (ATI's) than Nvidia's more complex one. The design seems to have gotten a little more complex in the recent R520 and RV530, as many benchmarks show the need for driver work, much like the NV40 at its launch. In time things will be put back to scale.
     

  • About Us

    Beyond3D has been around for over a decade and prides itself on being the best place on the web for in-depth, technically-driven discussion and analysis of 3D graphics hardware. If you love pixels and transistors, you've come to the right place!

    Beyond3D is proudly published by GPU Tools Ltd.