Why Does ATI Get Beat Down on OGL

I play some games in OGL on my PE, and I see no performance issues. But then, how would I compare? The thing is, there are no showstoppers in the OpenGL drivers, and if one does arise it gets fixed very fast. If a new driver has a new issue with an OpenGL app, go back to the driver that works until it's fixed in a newer driver. No different than NVIDIA. The performance issue is really about frame rate: if I get 50fps with my PE and a 7800 gets 80, well, I can still play fine at 50fps. Hellbinder, you're blowing smoke out your ass. Again. I have quite a few friends who prefer the FGL cards over the Quadros, and vice versa.
 
I remember having tremendous performance problems in City of Heroes for months with a Radeon 9700 Pro. It was fixed when I bought a GeForce 6800.

The City of Heroes performance problems were, for me, a showstopping bug. The game was close to unplayable in some zones, for no apparent reason. Stepping up to the 6800, I could play all zones at a higher resolution with higher framerates.

And if you're going to take the stance of, "I can still play it," why aren't you buying $60 DX9 cards instead?
 
Chalnoth said:
Old, looking for newer:
http://www.xbitlabs.com/news/video/display/20031002161856.html

Edit:
Well, I know that current data exists, but I haven't found it for less than $5000.

Well, not as reliable as you are looking for, but possibly indicative of the answer:

http://www.theinquirer.net/?article=25131

"Nearly one of every two" would point at the mid-40's answer that Orton suggested in the conference call last week. The 80% can of whup-ass on the mobile front would suggest how they made such dramatic inroads "while we weren't watching" so to speak.
 
bigz said:
Man, look at the huge discrepancy with the following numbers from June:
http://www.firingsquad.com/hardware/nvidia_geforce_7800_gtx/page15.asp

The 7800 has improved a good 17%, though half of that is likely the different test system (FX-55 vs. FX-57).

ATI's X1800XT, however, scores equal to the X850XT.

The old Firingsquad numbers, however, have almost no relation to X-bit Labs' numbers, even though both used the same drivers. Maybe a different game level?

It's really hard to make sense of all this with such a huge variance in data. If you look back to the 9700 days, the 5900 Ultra was being beaten. Yup, NVidia being beaten in an OpenGL game by a lower model from ATI:
http://firingsquad.com/hardware/sapphire_atlantis_9800_pro_ultimate_review/page8.asp
Then the X800XT and the 6800U were quite close:
http://firingsquad.com/hardware/ati_radeon_x800/page15.asp
In the 7800 review, the framerates jumped up, as they did in the X1800 review.

What confuses me most is that ATI's hardware scales similarly to NVidia's with resolution. I don't see how that can be due to drivers, and that's why I always thought it was due to Doom3 and Riddick doing their stencil shadows the Carmack's-reverse way. Maybe poor memory management? I would think it's very easy for this sort of low-level driver code to be shared between the DX and OGL drivers. It doesn't make sense to me.
 
Mintmaster said:
The 7800 has improved a good 17%, though half of that is likely the different test system (FX-55 vs. FX-57).
[...]
The old Firingsquad numbers, however, have almost no relation to X-bit Labs' numbers, even though both used the same drivers. Maybe a different game level?
They used the 81.82 'reviewers' drivers in the X1800XT review, which are said to have similar performance to 81.84 - I beta tested them over the weekend before X1k launch but had issues with them in some titles. Release 80 brings some new performance improvements and should be pretty good when the bugs disappear.
 
Colourless said:
HyperZ doesn't work with the 'Carmack' Reversed method. Kills the ATI efficiency.
Are you sure about that? I thought the 9800 had some changes for that. 3DCenter did an article stating this, then updated it and backtracked on it - I thought that related to Hi-Z.
 
I too thought that ATI had fixed the Hyper-Z problem with z-fail work at some point after the R300 was originally released.

By the way, here's a simple test you can do that essentially proves ATI has worse OpenGL drivers. Run UT2004 (or UT2003) in Direct3D and in OpenGL and compare the performance (same settings and all, which means turning shadows off in D3D). You will find that nVidia hardware performs as well as or better in OpenGL, while ATI hardware performs noticeably worse.
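For anyone who wants to try it, the API is selected by the RenderDevice line in UT2004.ini (I'm going from memory here, so double-check against your own ini). Run the same flyby or botmatch once with each renderer and compare the average fps:

[code]
[Engine.Engine]
RenderDevice=D3DDrv.D3DRenderDevice
RenderDevice=OpenGLDrv.OpenGLRenderDevice
[/code]

Keep only one of the two RenderDevice lines active per run, of course.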

Additionally, the fact that ATI hardware still scales similarly to nVidia hardware with resolution would seem to indicate that, when running in OpenGL, ATI simply has not implemented the same optimizations for its own hardware as it has in Direct3D. This would mesh well, for example, with the tables showing that not all CrossFire modes are supported in OpenGL.
 
Hellbinder said:
Especially if you don’t take it seriously and make it an actual priority.

Which is obviously the case here. It does not take *years* to write a stinking driver. I bet a team of 5 could write a driver from scratch in 5 months (OK, at least within 12).

They have been supposedly working on this for almost 3 years now. They have KNOWN they needed to fix their OpenGL driver since the 8500 and have *supposedly* been taking it seriously since the 9700.

You're right. That's why games take 3 months to write and cost $5,000.00 to produce. :rolleyes:

If you have ever coded in C++, you will have an appreciation for the effort that goes into writing software.

I would imagine that with drivers the compatibility/quality testing would be a significant effort.

Why do you think they don't write a new driver for each architecture? They design the architecture so that only small driver changes are needed.

One little item in the release notes of a monthly Catalyst release probably took a team of developers six months to troubleshoot, fix, test, and ship.
 
Colourless said:
HyperZ doesn't work with the 'Carmack' Reversed method. Kills the ATI efficiency.
Yup, that's why I always thought the whole OpenGL driver thing was overblown. Especially with what I saw back in the R300 days. Chalnoth, I know ATI theoretically has no fillrate disadvantage with AA on for the stencil only passes, but without Hi-Z they'll be drawing a lot of pixels they don't need to. NVidia can even skip stenciled areas very quickly.

Say, Colourless, do you think it would help ATI to use Hi-Z with z-fail (i.e. the stencil passes), and only top-of-pipe Z with z-pass (the lighting passes)? I'm thinking that the latter will give a good enough rejection rate given how many cycles it takes to complete the lighting shader.

Dave, I don't think the 9800 fixed very much. I heard something about z-cache, but in order to fix the Hi-Z issue ATI would have to implement a min-max Hi-Z scheme, which would definitely require a significant number of transistors. If you want to support the same resolutions, then you'd need to double the Hi-Z memory, among other things.
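To spell out the conflict as I see it (this is just my mental model of the hardware, not anything from ATI docs): Hi-Z keeps a conservative per-tile max depth and throws away fragments that cannot pass the depth test, which is exactly the wrong thing to do when the stencil update is supposed to happen on depth fail. Roughly:

[code]
/* Toy model of the Hi-Z vs. z-fail conflict -- my mental model only,
 * not based on any ATI documentation. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    float tile_max_z;  /* coarse per-tile max depth (what Hi-Z tracks) */
    float pixel_z;     /* real depth-buffer value at this pixel        */
    int   stencil;     /* stencil value at this pixel                  */
} Pixel;

/* Ordinary (z-pass) rendering: coarse rejection is always safe, because a
 * fragment that fails the depth test does no useful work anyway. */
void shade_zpass(Pixel *p, float frag_z, bool hiz)
{
    if (hiz && frag_z > p->tile_max_z)
        return;                   /* whole tile occludes the fragment: skip */
    if (frag_z <= p->pixel_z)
        printf("shade\n");        /* only visible fragments get shaded      */
}

/* Carmack's-reverse stencil pass: the stencil op fires when the depth test
 * FAILS, so the same coarse rejection silently drops updates we need. */
void stencil_zfail(Pixel *p, float frag_z, int incr, bool hiz)
{
    if (hiz && frag_z > p->tile_max_z)
        return;                   /* a stencil update is lost right here     */
    if (frag_z > p->pixel_z)      /* depth test fails...                     */
        p->stencil += incr;       /* ...and that is when the stencil changes */
}

int main(void)
{
    Pixel p = { 0.6f, 0.5f, 0 };
    stencil_zfail(&p, 0.9f, +1, false);  /* correct path: stencil becomes 1 */
    stencil_zfail(&p, 0.9f, +1, true);   /* Hi-Z path: this update is lost  */
    printf("stencil = %d (should be 2)\n", p.stencil);
    return 0;
}
[/code]

A min-max scheme could at least skip tiles where the incoming fragment passes the depth test everywhere (no z-fail stencil change is possible there), which is why I don't think the fix comes free in transistors.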

Chalnoth, does ATI scale similarly in UT2003/UT2004 as well? If you're right, then I guess it is just a matter of laziness with respect to OpenGL. I just find it unlikely that such optimizations would have anything to do with the API, and I would think a little code sharing would be a simple matter.
 
Mintmaster said:
Yup, that's why I always thought the whole OpenGL driver thing was overblown. Especially with what I saw back in the R300 days. Chalnoth, I know ATI theoretically has no fillrate disadvantage with AA on for the stencil only passes, but without Hi-Z they'll be drawing a lot of pixels they don't need to. NVidia can even skip stenciled areas very quickly.
That's all well and good, but nVidia shows a lead even in OpenGL games that don't use any stencil shadowing.

Chalnoth, does ATI scale similarly in UT2003/UT2004 as well? If you're right, then I guess it is just a matter of laziness with respect to OpenGL. I just find it unlikely that such optimizations would have anything to do with the API, and I would think a little code sharing would be a simple matter.
Unfortunately, I don't have that data. I didn't do scaling tests, as I was only interested in comparing one card to another (and I no longer have the hardware to re-do them). And yes, you'd think some code sharing would be a simple matter, which is what makes me think that ATI just doesn't care.
 
rwolf said:
I am sure they care. They have optimized for Doom3 a fair bit.
Doom 3 is a special case. And even then, the optimization work only got ATI to roughly the same performance deficit it shows in other games, like City of Heroes (and soon City of Villains), Neverwinter Nights, Knights of the Old Republic, Riddick, and others.
 
Chalnoth said:
Doom 3 is a special case. And even then, the optimization work only got ATI to roughly the same performance deficit it shows in other games, like City of Heroes (and soon City of Villains), Neverwinter Nights, Knights of the Old Republic, Riddick, and others.

I agree completely. They must have some issues with the core of their OpenGL driver that are difficult to fix.
 
Another issue might be that the game developers wrote the optimal path for Nvidia extensions. Then they wrote a workaround for ATI that was less than optimal.

I think the performance of Doom3 had more to do with someone having an axe to grind than anything else.
 
rwolf said:
Another issue might be that the game developers wrote the optimal path for Nvidia extensions. Then they wrote a workaround for ATI that was less than optimal.
I don't think that's the case. Take Doom 3 as an example: the "default" renderer uses no NV extensions, and in fact uses the ARB_fragment_program extension, which was originally written by ATI.

I think the performance of Doom3 had more to do with someone having an axe to grind than anything else.
There's no way that's the case. The only suggested inefficiency here is with the "Carmack's reverse" algorithm for stencil shadows. Carmack chose this algorithm because it's mathematically more efficient than the more "standard" stencil shadow volume methods, and it turns out to be more robust as well.

Carmack has been noted for preferring vendor-agnostic algorithms and extensions. This can be seen clearly in his choice of this algorithm for its mathematical efficiency, and in his decision to drop the NV extensions for the NV3x and higher (even though the NV3x might well have benefited hugely from using the NV3x extensions... though nVidia may well just be using shader replacement to keep performance up).

You can read Carmack's own writeup about why he implemented this algorithm here:
http://developer.nvidia.com/object/robust_shadow_volumes.html
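For anyone who hasn't seen it spelled out, the depth-fail stencil pass everyone keeps referring to looks roughly like this in plain OpenGL. This is a bare-bones sketch of the technique, not Doom 3's actual code; draw_shadow_volumes() stands in for whatever submits the capped volume geometry, and real code would use two-sided stencil and the wrap-around increment/decrement extensions rather than two passes with plain GL_INCR/GL_DECR:

[code]
#include <GL/gl.h>

void draw_shadow_volumes(void);   /* placeholder: submits capped volumes */

void stencil_shadow_pass(void)
{
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  /* no colour writes */
    glDepthMask(GL_FALSE);                                /* no depth writes  */
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 0, ~0u);
    glEnable(GL_CULL_FACE);

    /* Back faces of the volumes: increment stencil where the depth test FAILS */
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);   /* sfail, zfail, zpass */
    draw_shadow_volumes();

    /* Front faces: decrement stencil where the depth test FAILS */
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
    draw_shadow_volumes();

    /* The lighting pass that follows re-enables colour writes and only
     * shades fragments with stencil == 0, i.e. outside shadow. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 0, ~0u);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
}
[/code]

Note that every useful piece of work in the two volume passes happens when the depth test fails, which is precisely the case that coarse Z rejection is built to throw away - that's the Hyper-Z angle being discussed above.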
 
Chalnoth said:
Doom 3 is a special case. And even then, the optimization work only got ATI to roughly the same performance deficit it shows in other games, like City of Heroes (and soon City of Villains), Neverwinter Nights, Knights of the Old Republic, Riddick, and others.

KotOR's problems seem to lie with a vertex buffering path that works on NV2x+ but totally wrecks ATI cards.
The developer released a workaround and it should be fixed in Catalyst 5.9.
But once again it shows that a hardware feature carried over from nVidia's Xbox architecture wrecks modern-day ATI GPUs... intentionally or not.
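I have no idea what the actual workaround does internally, but the general shape of that kind of fix is usually just choosing a different vertex submission path at startup, along these lines (purely illustrative, every name here is made up, and it assumes the old ARB_vertex_buffer_object entry points):

[code]
/* Illustration only: the kind of runtime fallback a developer might ship
 * when a buffer-object path misbehaves on one vendor's driver.
 * This is NOT the KotOR code; all names are invented for the example. */
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

static int    use_vbo = 0;   /* the workaround amounts to forcing this to 0 */
static GLuint vbo     = 0;

void init_vertex_path(const float *verts, long bytes, int allow_vbo)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    use_vbo = allow_vbo && ext && strstr(ext, "GL_ARB_vertex_buffer_object");

    if (use_vbo) {
        /* Fast path: vertex data lives in a driver-managed buffer object */
        glGenBuffersARB(1, &vbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, verts, GL_STATIC_DRAW_ARB);
        glVertexPointer(3, GL_FLOAT, 0, (void *)0);   /* offset into the VBO */
    } else {
        /* Fallback: plain client-side vertex arrays, slower but safe */
        glVertexPointer(3, GL_FLOAT, 0, verts);
    }
    glEnableClientState(GL_VERTEX_ARRAY);
}
[/code]

Whether the slowdown is the driver's fault or the way the buffers are used is exactly what's hard to tell from the outside.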
 