Why Does ATI Get Beat Down on OGL

neliz said:
KotOR's problems seem to lie with vertex buffering that works on NV2x+ but totally wrecks ATI cards.
The developer released a workaround and it should be fixed in 5.9.
But once again it shows that a hardware feature ported from nV's Xbox architecture wrecks modern-day ATI GPUs... intentionally or not.

It'd actually be because they probably used NV_vertex_array_range, which is supported all the way back to the GeForce 1 (not an Xbox feature, but a way to DMA-transfer geometry to the video card).

I have no idea if they updated the engine from NWN to KOTOR to support the newer ARB_vertex_buffer_object extension.
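
For reference, the two submission paths look roughly like this. It's an untested, from-memory sketch: the entry points are the real extension functions (the wgl* one is Windows-specific, and in practice they all have to be fetched via wglGetProcAddress or an extension loader), but the surrounding setup and error handling is omitted.

Code:
#include <stddef.h>
#include <string.h>
#include <GL/gl.h>
#include <GL/glext.h>   /* ARB_vertex_buffer_object / NV tokens */

/* NV_vertex_array_range path: DMA from driver-allocated AGP memory.
   NVIDIA-only, supported since the original GeForce. */
void draw_with_var(const float *verts, int count)
{
    GLsizei bytes = count * 3 * sizeof(float);
    void *var_mem = wglAllocateMemoryNV(bytes, 0.0f, 0.0f, 0.5f);
    memcpy(var_mem, verts, bytes);
    glVertexArrayRangeNV(bytes, var_mem);
    glEnableClientState(GL_VERTEX_ARRAY_RANGE_NV);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, var_mem);
    glDrawArrays(GL_TRIANGLES, 0, count);
}

/* ARB_vertex_buffer_object path: the vendor-neutral replacement. */
void draw_with_vbo(const float *verts, int count)
{
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, count * 3 * sizeof(float),
                    verts, GL_STATIC_DRAW_ARB);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void *)0);  /* offset into the buffer */
    glDrawArrays(GL_TRIANGLES, 0, count);
}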
 
Chalnoth said:
Mintmaster said:
Yup, that's why I always thought the whole OpenGL driver thing was overblown
That's all well and good, but nVidia shows a lead even in OpenGL games that don't use any stencil shadowing.
Note that I used past tense. Some of the data in this thread is changing my mind.

Chalnoth said:
I don't think that's the case. Take Doom 3 as an example: the "default" renderer uses no NV extensions, and in fact uses the ARB_fragment_program extension, which was originally written by ATI.
Yeah, but Doom3 is a special case, remember? It doesn't matter how "fair" Carmack is to each IHV, his algorithms are simply not suited towards ATI hardware. The deficiency in ATI's OpenGL drivers can only be reliably found in games without stencil shadows.

For other games, I think there may be a preference. Don't people use NV's register combiners a lot? I seem to remember one game (NWN?) only enabled shiny water on NV cards.

I dunno, maybe we just need to give ATI some time. In the R300 days it seemed they were doing relatively okay in OpenGL, and even R420 wasn't that bad (see my previous post for graphs). It was hanging around the 6800.
 
Chalnoth said:
I don't think that's the case. Take Doom 3 as an example: the "default" renderer uses no NV extensions, and in fact uses the ARB_fragment_program extension, which was originally written by ATI.

I'm not so sure that's the case. We have case after case of developers using NV-specific calls in the past. We had it in games like PF and NWN, where they could have used ARB calls but chose NV ones. I remember even old OpenGL benchmarks (DroneZ, for example) that used NV calls even though the 8500 cards supported the same features. I've always chalked it up to NV being "better" or "first" with OpenGL drivers. Not sure if it's still the case now, but old habits tend to die hard....
 
jb said:
I'm not so sure that's the case. We have case after case of developers using NV-specific calls in the past. We had it in games like PF and NWN, where they could have used ARB calls but chose NV ones. I remember even old OpenGL benchmarks (DroneZ, for example) that used NV calls even though the 8500 cards supported the same features. I've always chalked it up to NV being "better" or "first" with OpenGL drivers. Not sure if it's still the case now, but old habits tend to die hard....
Kind of the same thing I was talking about.

So... yeah... any synthetic benchmarks around? Anywhere? Sometime? Just simple things like fillrate and the like?
 
AndrewM said:
I have no idea if they updated the engine from NWN to KOTOR to support the newer ARB_vertex_buffer_object extension.

It seems like ATI fixed it in Cat 5.9, though they don't say what.
LucasArts has deleted all references to the problem on their support site as well.

Anyway, "Disable Vertex Buffer Objects=1" was what was needed on ALL ati cards to run Kotor and Kotor2 smoothly, it's just quite obvious that the developer of this twimtbp game never ever thought about using an ARB path for these games..
 
Mintmaster said:
Yeah, but Doom3 is a special case, remember? It doesn't matter how "fair" Carmack is to each IHV, his algorithms are simply not suited towards ATI hardware. The deficiency in ATI's OpenGL drivers can only be reliably found in games without stencil shadows.

So, in an effort to prove ATi's OGL drivers really are good you have to benchmark with a game that's suited towards ATi's hardware?

Btw: "Raja Koduri is the senior architect in the hardware design group at ATI. He's responsible for system performance verification and development of various performance tools and graphics technologies inside ATI." http://www.pcwelt.de/know-how/extras/107080/index2.html

Raja Koduri said:
On the technical front we worked closely with both Valve and id Software. If it is true that Doom3 is more optimized for Nvidia hardware, there are only two ways to look at it in my opinion: the Nvidia technical team did a better job with id, or the ATI technical team did a bad job.
 
The Baron said:
Kind of the same thing I was talking about.

So... yeah... any synthetic benchmarks around? Anywhere? Sometime? Just simple things like fillrate and the like?
I could yell "here", but really, don't bother. There's nothing to see.
ATI cards do achieve their theoretical peaks in OpenGL just fine. It's not as if the chips are downclocked when the OpenGL driver is running or anything.

This whole issue with "poor ATI OpenGL performance" is rather irrational IMO. A Radeon 9800 Pro still trounces a GeForce 6200 in e.g. Quake 3, as it should, given its raw pixel and vertex throughput and bandwidth. This shouldn't be the case if NVIDIA really had that magically superior OpenGL performance.
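
If anyone really wants raw numbers anyway, a crude colour fillrate test is only a few dozen lines of GLUT. This is an untested sketch and assumes vsync is off; it draws 64 fullscreen quads per frame and prints an approximate megapixels-per-second figure every five seconds:

Code:
#include <stdio.h>
#include <GL/glut.h>

#define LAYERS 64                 /* fullscreen quads per frame */
static int frames = 0;
static const int width = 1024, height = 768;

static void draw(void)
{
    int i;
    glClear(GL_COLOR_BUFFER_BIT);
    glBegin(GL_QUADS);
    for (i = 0; i < LAYERS; ++i) {          /* LAYERS layers of overdraw */
        glVertex2f(-1.0f, -1.0f); glVertex2f(1.0f, -1.0f);
        glVertex2f(1.0f, 1.0f);   glVertex2f(-1.0f, 1.0f);
    }
    glEnd();
    glutSwapBuffers();
    ++frames;
}

static void report(int value)
{
    double mpix = (double)width * height * LAYERS * frames / 1e6;
    (void)value;
    printf("~%.0f Mpixels/s filled\n", mpix / 5.0);
    frames = 0;
    glutTimerFunc(5000, report, 0);
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(width, height);
    glutCreateWindow("fillrate");
    glDisable(GL_DEPTH_TEST);               /* pure colour fill, no Z */
    glutDisplayFunc(draw);
    glutIdleFunc(glutPostRedisplay);
    glutTimerFunc(5000, report, 0);
    glutMainLoop();
    return 0;
}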
 
A bit late with this, but any advantage that NV has with the double z fillrate should be gone if MSAA is used: both ATI and NV chips have 2 z-units per pipe. So if one wants to know what percentage of the performance difference is due to double z, one would look at the difference in benchmarks with AA on and off.
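
Back-of-the-envelope version of that argument (the pipe count and clock below are purely illustrative, not meant to match any particular board):

Code:
#include <stdio.h>

/* Peak Z rate = pipes * z-units per pipe * core clock. */
static double z_rate_gsamples(int pipes, int z_per_pipe, double mhz)
{
    return pipes * z_per_pipe * mhz / 1000.0;
}

int main(void)
{
    /* Hypothetical 16-pipe part at 400 MHz. */
    printf("colour+Z pass:          %.1f Gsamples/s\n", z_rate_gsamples(16, 1, 400.0));
    printf("Z-only pass (double z): %.1f Gsamples/s\n", z_rate_gsamples(16, 2, 400.0));
    /* With MSAA the second z-unit is occupied by the extra subsamples on
       both vendors' chips, so comparing the AA-on vs. AA-off deltas is
       what isolates the double-z contribution. */
    return 0;
}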
 
stepz said:
A bit late with this, but any advantage that NV has with the double z fillrate should be gone if MSAA is used: both ATI and NV chips have 2 z-units per pipe. So if one wants to know what percentage of the performance difference is due to double z, one would look at the difference in benchmarks with AA on and off.

Anandtech's extended performance article?
bf2: http://www.anandtech.com/video/showdoc.aspx?i=2556&p=2
d3: http://www.anandtech.com/video/showdoc.aspx?i=2556&p=4
 
zeckensack said:
This whole issue with "poor ATI OpenGL performance" is rather irrational IMO.
How is that irrational? It's a reaction to a large number of benchmarks. I mean, that's pretty much the definition of rational, isn't it? A belief brought about and supported by real-world evidence?
 
Chalnoth said:
How is that irrational? It's a reaction to a large number of benchmarks. I mean, that's pretty much the definition of rational, isn't it? A belief brought about and supported by real-world evidence?

So... why does nV's D3D driver suck this much, losing in all the high-end D3D benchmarks?
 
neliz said:
So... why does nV's D3D driver suck this much, losing in all the high-end D3D benchmarks?
God. At least nVidia's performance vs. ATI in Direct3D follows the memory bandwidth and fillrate of the respective boards much more closely. In OpenGL, ATI lags behind.
 
Chalnoth said:
God. At least nVidia's performance vs. ATI in Direct3D follows the memory bandwidth and fillrate of the respective boards much more closely. In OpenGL, ATI lags behind.

Even the benchmarks show that the R520 drivers in general are not all that well optimized; on some sites half of the D3D benchmarks have it failing to beat an X850 XT PE, which points to driver problems (at least for the XL).

Focusing on OGL alone as the problem is short-sighted... they just don't put the same resources into game-specific optimizations. Doom3 is an example of resources buying better performance in a specific game... they just need to pump money into devrel to improve out-of-the-box performance.
 
Chalnoth said:
How is that irrational? It's a reaction to a large number of benchmarks. I mean, that's pretty much the definition of rational, isn't it? A belief brought about and supported by real-world evidence?
Dunno about the real-world evidence. See Quake 3, MDK2, Serious Sam 1st and 2nd Encounters or whatnot. That's OpenGL, too, and ATI is very competitive there. Wouldn't one single falsification suffice to shoot down a general conclusion? That's how it works in the world of maths :)

People look too hard for generalizations, and while they make life less complex they are not always useful. ATI not being competitive in Doom 3 does not mean that they stink in OpenGL. It means just the obvious: they're not that good at Doom 3. If they lost out everywhere and there's no explanation on the hardware level, there'd be a point. I don't see that yet.

I'd rather look at this case by case than quickly jumping to conclusions just for the sake of having something that's easy to remember ("ATI+OpenGL=teh suq") but not necessarily true.

Doom 3: stencil fillrate is NVIDIA's stronghold, plain and simple. Also ATI's hierarchical Z implementation doesn't like the depth test function varying too much over the course of a frame.

Riddick: soft shadows with PCF. Wouldn't surprise me at all if the game took advantage of NVIDIA's hardware acceleration, which the Radeon line lacks (dunno if the R5xx series has it; even if it does, it might require a game patch to see it).

Etc.

Then there's the whole issue of texture filtering "optimizations" and related tricks. IIRC one of the recent Catalysts claimed a huge performance increase in IL2 just by forcing the cloud textures to a compressed format. Doing such things is not truly improving the OpenGL driver, but it makes some games run faster.
Who has how many of these "optimizations" in place, and what are the gains? This issue might pretty much bork up any comparison on its own.
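
For reference, the application-side equivalent of that trick is just a different internal format via ARB_texture_compression; the driver-side "optimization" amounts to silently substituting the second form. Sketch only, tokens from glext.h:

Code:
#include <GL/gl.h>
#include <GL/glext.h>   /* GL_COMPRESSED_RGBA_ARB */

/* Upload the same RGBA image either verbatim or with the driver
   compressing it on the fly (less bandwidth, slightly lossy). */
void upload_texture(int w, int h, const unsigned char *pixels, int compress)
{
    GLint internal = compress ? GL_COMPRESSED_RGBA_ARB : GL_RGBA8;
    glTexImage2D(GL_TEXTURE_2D, 0, internal, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}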

And lastly, some people coming off the NV_vertex_array_range path just don't get it. NVIDIA supports some rather peculiar usage models of VBO (allocate buffer object, lock it, fill it, render it once and throw it away again; might as well use immediate mode instead), probably because it is their VAR legacy model. ATI drivers don't support such stuff all that well and rather go for a more pure VBO model (fat storage objects are for reuse).
Both approaches have some benefits over the other, and they are somewhat exclusive. ATI's model bites them more often than not. Btw, if anyone cares, technically it is the correct one IMO. VBOs are overkill and unnecessarily complex for the "one shot" usage. Different methods already exist for this case; they use less memory, are more portable, and are equally limited by AGP/PCIe bandwidth. Many developers fancy VBOs so much that they do it anyway, and it always comes back to bite ATI's reputation.
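
To make the "one shot" pattern concrete, here is a simplified sketch (ARB entry points assumed to be loaded already):

Code:
#include <GL/gl.h>
#include <GL/glext.h>

/* "One shot" style: create, fill, draw once, delete, every single time.
   Maps nicely onto NVIDIA's VAR legacy, hurts on ATI. */
void draw_one_shot(const float *verts, int count)
{
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, count * 3 * sizeof(float),
                    verts, GL_STREAM_DRAW_ARB);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, (void *)0);
    glDrawArrays(GL_TRIANGLES, 0, count);
    glDeleteBuffersARB(1, &vbo);            /* thrown away immediately */
}

/* "Fat storage object" style: fill once, keep the handle, reuse every
   frame. This is the model ATI's driver is happier with. */
GLuint build_static_vbo(const float *verts, int count)
{
    GLuint vbo;
    glGenBuffersARB(1, &vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, count * 3 * sizeof(float),
                    verts, GL_STATIC_DRAW_ARB);
    return vbo;
}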

See Tenebrae, NWN. The Tenebrae technicalities wrt unsatisfactory VBO performance on ATI hardware were discussed on these boards, but I can't find the topic anymore. Might be buried in the old T&H archives. I'm pretty certain it was the issue I just tried to describe.

And finally, there are -- gasp! -- things that ATI's OpenGL driver handles more efficiently than NVIDIA's OpenGL driver. I've seen a Radeon 9200/64MB soundly beat a Geforce 3/64MB because the driver coped much better with many (thousands) small texture objects which were constantly shuffled around and refilled. This is just one scenario, and it does not imply that ATI's OpenGL drivers are better than NVIDIA's, but instead only that this specific case works better. The truth is to be found in the details more often than not IMO.
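
The pattern in that last case was roughly this kind of thing (simplified sketch, the function is mine, not from the actual app):

Code:
#include <GL/gl.h>

/* Thousands of small texture objects, each periodically refilled in
   place. The cost difference lives entirely in how the driver manages
   the object storage, not in the GL calls themselves. */
void refill_tile(GLuint tex, int size, const unsigned char *rgba)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, size, size,
                    GL_RGBA, GL_UNSIGNED_BYTE, rgba);   /* reuse storage */
}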
 
zeckensack said:
And lastly, some people coming off the NV_vertex_array_range path just don't get it. NVIDIA supports some rather peculiar usage models of VBO (allocate buffer object, lock it, fill it, render it once and throw it away again; might as well use immediate mode instead), probably because it is their VAR legacy model. ATI drivers don't support such stuff all that well and rather go for a more pure VBO model (fat storage objects are for reuse).
VARs are reusable. They're not very efficient because you only have one giant buffer that needs to be locked (and thus you can't really parallelize GPU and CPU work), but it's certainly not as bad as you make it out to be.

You can implement VAR if you already support VBO: just allow one single giant VBO that is perpetually mapped.

Also, VAR is no longer the fastest way to render vertices for real workloads on NV hardware; VBOs are.
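
Something along these lines, though the ARB spec makes you unmap (or go through BufferSubData) before drawing, so "perpetually mapped" is the one part that doesn't translate literally. Rough sketch, no fencing or synchronization:

Code:
#include <stddef.h>
#include <GL/gl.h>
#include <GL/glext.h>

/* VAR-style sub-allocation on top of one big VBO: one buffer created up
   front, chunks handed out linearly, data streamed in with BufferSubData. */
static GLuint big_vbo;
static size_t big_size, big_used;

void var_emulation_init(size_t bytes)
{
    big_size = bytes;
    big_used = 0;
    glGenBuffersARB(1, &big_vbo);
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, big_vbo);
    glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, NULL, GL_DYNAMIC_DRAW_ARB);
}

/* Returns the byte offset to pass to glVertexPointer(..., (void *)offset). */
size_t var_emulation_fill(const void *data, size_t bytes)
{
    size_t offset;
    if (big_used + bytes > big_size)
        big_used = 0;                       /* naive wrap-around */
    offset = big_used;
    glBindBufferARB(GL_ARRAY_BUFFER_ARB, big_vbo);
    glBufferSubDataARB(GL_ARRAY_BUFFER_ARB, offset, bytes, data);
    big_used += bytes;
    return offset;
}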
 
zeckensack said:
Dunno about the real-world evidence. See Quake 3, MDK2, Serious Sam 1st and 2nd Encounters or whatnot. That's OpenGL, too, and ATI is very competitive there. Wouldn't one single falsification suffice to shoot down a general conclusion? That's how it works in the world of maths :)
All of these games are quite old, and thus much less interesting.

People look too hard for generalizations, and while they make life less complex they are not always useful. ATI not being competitive in Doom 3 does not mean that they stink in OpenGL.
Have you even been reading this thread?

If they lost out everywhere and there's no explanation on the hardware level, there'd be a point. I don't see that yet.
And they are losing out everywhere in benchmarks that were done, oh, this year with reasonably recent games. Every OpenGL benchmark that people have thrown at the X1x00 cards has put these cards 20%-40% behind where one would expect them based on their Direct3D performance.

Riddick: soft shadows with PCF. Wouldn't surprise me at all if the game took advantage of NVIDIA's hardware acceleration, which the Radeon line lacks (dunno if the R5xx series has it; even if it does, it might require a game patch to see it).
Since Riddick doesn't use shadow buffers, it doesn't use PCF. And besides, nobody benchmarks with that rendering mode enabled anyway.

ATI drivers don't support such stuff all that well and rather go for a more pure VBO model (fat storage objects are for reuse).
And if games are using it, why isn't ATI optimizing for this case?

And finally, there are -- gasp! -- things that ATI's OpenGL driver handles more efficiently than NVIDIA's OpenGL driver. I've seen a Radeon 9200/64MB soundly beat a Geforce 3/64MB because the driver coped much better with many (thousands) small texture objects which were constantly shuffled around and refilled. This is just one scenario, and it does not imply that ATI's OpenGL drivers are better than NVIDIA's, but instead only that this specific case works better. The truth is to be found in the details more often than not IMO.
One would expect this to happen every once in a while. But until it happens in games, will anybody care?
 
zeckensack said:
People look too hard for generalizations, and while they make life less complex they are not always useful. ATI not being competitive in Doom 3 does not mean that they stink in OpenGL. It means just the obvious: they're not that good at Doom 3.
One thing that I find dramatically interesting is how the reverse does not apply, at least not on the more popular nerd forums.

The reverse being Half-Life 2. ATI has always been the leader there, but the reverse ideology ("NVidia's Direct3D drivers blow goats!") never took hold. As you mentioned, there are plenty of OpenGL games (mostly older) where ATI competes quite well, but the focus is on using Doom3 as a scapegoat to build an "ATI sucks at OGL" ideology.

Well, by the same token, doesn't NVIDIA suck at Direct3D then?
 
Sharkfood said:
Well, by the same token, doesn't NVIDIA suck at Direct3D then?

Since when did Nvidia lose in almost every Direct3D app? I thought it was pretty even there. Nvidia wins some, ATI wins some.
 
Sharkfood said:
Well, by the same token, doesn't NVIDIA suck at Direct3D then?
Look at the X800XL vs 7800GT numbers in the most recent Direct3D and OpenGL apps for your answer.
 
bdmosky said:
Since when did Nvidia lose in almost every Direct3D app? I thought it was pretty even there. Nvidia wins some, ATI wins some.


The general rule of thumb is that ATI has always done very well following the DirectX APIs and making them run especially well on their cores. The ATI D3D dominance thing really got popular in the 9800 vs. FX era, since the FX was trash at D3D (at least DX9), so it caught on. At the X800 launch, before Nvidia had released mature drivers, the X800 noticeably thrashed Nvidia's equivalent offerings, especially when AA/AF was applied. That basically went away after a few months of driver revisions, but the reputation still sticks.

ATI still does tend to handle D3D better, but it's not because of something Nvidia lacks in comparison. Most of the time, if they lose heavily it's solved later in drivers (the waiting sucks). Nvidia's D3D has always been acceptable, before and after the FX that is. I think it comes down to how the core is built. EQ2 and Far Cry support this theory, seeming to benefit from ATI's much simpler pipeline design compared to Nvidia's more complex one. The design seems to have gotten a little more complex in the recent R520 and RV530, as many benchmarks show the need for driver maturation, much like the NV40 did at its launch. In time things will settle back into place.
 