FX and PS 1.4, DX9 tests?

Chalnoth said:
Mintmaster said:
Why does GF FX care so much about whether it is DX8 or DX9? Or am I missing something?
Unoptimized drivers.

I still don't see why it would make that much difference. DX9 and DX8 will still have the same instructions for PS 1.0-1.4. Obviously PS 1.0-1.4 didn't change with DX9.

When the R200 was released, a lot of people were saying that DX8.1 would make it faster. Then, when the R300 was released, people were saying that DX9 would make it so much faster. Neither happened. You can say that DX9-generation games will make the R300 relatively faster than DX8 cards running the same game, but the API itself would hardly make any difference in such a simple test.
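
To illustrate what I mean (a sketch of my own, nothing from the benchmark): the DX9 runtime assembles the same PS 1.x assembly that DX8.1 did. This assumes the D3DX9 utility library; the shader itself is a trivial made-up example.

Code:
#include <d3d9.h>
#include <d3dx9.h>
#include <string.h>

// A trivial ps_1_4 shader (made-up example): sample texture 0 and
// modulate with the interpolated diffuse color. The instruction set
// is identical to what the DX8.1 assembler accepted.
static const char* kPs14Src =
    "ps_1_4\n"
    "texld r0, t0\n"     // sample texture stage 0
    "mul r0, r0, v0\n";  // modulate with diffuse

// Assemble and create the shader through the DX9 runtime.
IDirect3DPixelShader9* CreatePs14(IDirect3DDevice9* dev)
{
    LPD3DXBUFFER code = NULL;
    if (FAILED(D3DXAssembleShader(kPs14Src, (UINT)strlen(kPs14Src),
                                  NULL, NULL, 0, &code, NULL)))
        return NULL;

    IDirect3DPixelShader9* shader = NULL;
    dev->CreatePixelShader((const DWORD*)code->GetBufferPointer(), &shader);
    code->Release();
    return shader;
}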

Also, why are Hellbinder's/Tahir's scores different from noko's/Brent's (from the other forums)? Is the "DX9" or "DX8" part of the header next to "mip filter reflections" referring to the DX version or something else? If so, then I guess the same thing is happening with the R300, but to a lesser degree. Is this some sort of flaw in the benchmark, not allowing enough time for initialization or something?
 
I tend to think "drivers unoptimized for shading" is a more reasonable explanation than "hardware unoptimized for shading", specifically for the comparisons in this thread. Namely, the difference is too large, IMO, for "hardware unoptimized for shading" alone (I could see the GFFX being half the speed of the R300 on hardware grounds, but not even slower than that).

That doesn't mean I consider the GFFX outperforming the R300 once the drivers are optimized to be the most reasonable assumption, though I don't think it can be absolutely ruled out.

So...as long as you don't consider the above two assumptions equivalent (which I think is Joe's interpretation of Chalnoth's comment), I tend to also expect driver optimization to be an important factor to consider...

I consider fixed function performance another matter entirely, unless vertex shading capabilities are being wasted and sitting idle...which is not at all how I understand the presentation of the vertex processing units as an "adaptive array". The performance of the Quadro FX in viewperf, and in most simple-texturing/fixed-function-lighting benchmarks, seems to me to indicate this is already optimized.
 
Joe DeFuria said:
Here's a hint. Take your post:
And insert it directly after this post of Chalnoth's:

Then it makes no sense, because I was responding to you contending that it was the hardware that was fuxored rather than the software.
 
Then it makes no sense, because I was responding to you contending that it was the hardware that was fuxored rather than the software.

Um... OK. I guess I need to spell it out.

I said it was unoptimized hardware. Obviously I KNOW it is way too premature to make such a judgement. I thought that the RIDICULOUSNESS and PREMATURITY of that statement would serve to illustrate the RIDICULOUSNESS and PREMATURITY of Chalnoth's statement that it is due to unoptimized drivers.

Bottom line: WE HAVE NO IDEA. So to state one way or the other is RIDICULOUS AND PREMATURE.

And on a side note, if you don't believe it's fair to "label you" as nVidia biased, then consider why you responded to MY contention, and not Chalnoth's.
 
Well, gosh Joe. Since in Carmack's .plan he stated that NVIDIA told him that the drivers were unoptimized and shaders would get a big speed-up, I have no idea where anybody got the idea that it might be the drivers rather than the hardware.

Please, take off your polarized glasses. I'm not playing that game, so quit trying to put me on a team.
 
Well, gosh Joe. Since in Carmack's .plan he stated that NVIDIA told him that the drivers were unoptimized and shaders would get a big speed-up, I have no idea where anybody got the idea that it might be the drivers rather than the hardware.

Again...arguing points that I don't disagree with. Did Chalnoth say it "might be the drivers?" Did he give any IMPRESSION that there was anything but drivers at issue?

Gosh, Russ, Carmack ALSO SAID in the very prior sentence:

so when Nvidia runs fragment programs, they are at a higher precision than ATI's, which is some justification for the slower speed.

Seems to me that Carmack is not ONLY blaming drivers, but the nature of nVidia's implementation as well.

Please, take off your polarized glasses. I'm not playing that game, so quit trying to put me on a team.

What game is that? Just admitting that you have a bias for nVidia? What's wrong with that, and why is that a game?
 
Seems to me that Carmack is not ONLY blaming drivers, but the nature of nVidia's implementation as well.

I wouldn't exactly call that blaming the nature of nVidia's hardware implementation. At least not in the context of "unoptimized hardware".
 
The point is, there is no reason to believe that the performance deficit is due solely to unoptimized drivers.

Does anyone (besides Chalnoth) really think that's the case? If not, then what's all this hubbub about? Are people against the idea that *shock* it's likely a combination of both?
 
Well, gosh Joe. Since in Carmack's .plan he stated that NVIDIA told him that the drivers were unoptimized and shaders would get a big speed-up, I have no idea where anybody got the idea that it might be the drivers rather than the hardware.

Well, as we've all seen over the last 6 months or so, just because nVidia promises something, that definitely doesn't make it true. If it were true we'd have had the card months ago, and it also would have "revolutionized the graphics industry like nothing for 10 years before". If you can't see Joe's point then you must be trying really hard not to. Bottom line: we have no idea what is causing the slow performance with the shaders. It may be a good guess that it's the drivers, but WE DON'T KNOW. Just like we don't know if it's the hardware. But Russ, being the almighty knower of everything that you are, I guess we should just listen to your GUESSES and deny everyone else the same luxury of a guess, or an opinion for that matter. Because yours is the only opinion that can be right; anything that goes against your opinion must be totally ludicrous. And you have the nerve to get on Hellbinder and Doom? :rolleyes: (I don't care how much everyone hates this smilie, this definitely deserves it.)
 
Heh, to be fair to Russ, Chalnoth is the one who made the initial statement about the drivers. In fact, I believe Russ and I agree more than we disagree on the cause of the slow performance (probably issues with both hardware and software, but we don't really know).

Which is why I scratch my head and wonder why he continues to be so "offended" by my points.....
 
As I said, if you consider saying the results are due to the drivers being unoptimized equivalent to the expectation that the NV30 will outperform the R300 with "drivers optimized", I say that is unreasonable.

If you consider it to mean the results are partially due to the drivers being unoptimized, and partially due to issues with the way the hardware handles shading, I say that is indeed reasonable.

I can see how Chalnoth's blaming the performance changes between DX versions on unoptimized drivers could be seen as ignoring issues in the way the hardware handles shading.

I can also see how focusing on the phrase "unoptimized drivers" in isolation would prevent seeing it that way.

I can't say who was right; I'm not Chalnoth.

I can say that both interpretations seem to have a valid basis as far as I'm concerned.

I can also say, speaking for myself, that when I think of driver optimizations being an issue, my reasoning is based on the performance being quite significantly less than half of R300 performance in the benchmarks in this thread, not on nVidia's assurances in John Carmack's .plan file, which sound to me like they are planning on enhancing precision hinting functionality in the ARB2 code path. Note also that we are talking about DirectX here and not OpenGL.
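
For reference, here is roughly what precision hinting in the ARB path looks like on the OpenGL side; a minimal sketch of my own, assuming the standard ARB_fragment_program entry points (the program text is a made-up example, not anything from nVidia):

Code:
#define GL_GLEXT_PROTOTYPES
#include <GL/gl.h>
#include <GL/glext.h>
#include <string.h>

// Made-up fragment program: modulate a texture with the primary color.
// The OPTION line tells the driver it may use reduced precision
// (e.g. fp16 on NV30) instead of full precision.
static const char* kFpSrc =
    "!!ARBfp1.0\n"
    "OPTION ARB_precision_hint_fastest;\n"
    "TEMP col;\n"
    "TEX col, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, col, fragment.color;\n"
    "END\n";

// On Windows the ARB entry points would normally be fetched via
// wglGetProcAddress; prototypes are used here for brevity.
void LoadFragmentProgram(GLuint id)
{
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, id);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB,
                       GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)strlen(kFpSrc), kFpSrc);
}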

---

Am I going to be a full-time peacemaker now? :-? Nah, I'd have to be successful at it, and that remains to be seen. :p
 
Given the history of virtually every 3D card in the past few years, I'd say unoptimized pre-release drivers are a reasonable (not ridiculous) explanation for the FX's unspectacular shading performance.

Everyone lay off the bitter espresso, mkay? :p
 
ShaderMark 1.6 Graph Lovin':

Here is the graph with the data I have collected from Brent, and just for kicks the last one is my own softmodded 9700 (stock 9500 speeds, with a P3 700 and 288MB RAM on XP Pro SP1 backing it :p, you know, just for comparison)



[attached image: ShaderMark 1.6 results graph]





I hope they have a "-optimize like hell" setting in their driver compiler.

Seems like the GFFX is optimized, at least as far as I can see, towards PS 1.x and NOT PS 2.0, so I don't know what's going on.

Later,

Lagg
 
Re: ShaderMark 1.6 Graph Lovin':

Seems like the GFFX is optimized, at least as far as I can see, towards PS 1.x and NOT PS 2.0, so I don't know what's going on.

Later,

Lagg


Having a common codebase for the drivers on all your cards helps you keep steady performance for old features. Considering you're talking about PS 2.0, these are the first drivers they have ever written with this functionality, because no previous card had it. It is not unreasonable to expect major performance increases because of this. How much? We will have to wait and see.
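
Purely as a hypothetical sketch of the point (none of these names are real driver internals): the PS 1.x compile path is shared, long-tuned code, while the PS 2.0 path is brand new.

Code:
#include <windows.h>  // for DWORD

// Hypothetical backends, named for illustration only.
static void CompileLegacyPixelShader(const DWORD* bytecode) { (void)bytecode; /* mature, shared path */ }
static void CompileNewPs20Path(const DWORD* bytecode)       { (void)bytecode; /* brand-new path */ }

enum ShaderVersion { PS_1_1, PS_1_4, PS_2_0 };

// A unified driver dispatches old and new shader models to different
// compile paths; only the PS 2.0 path has had no tuning time yet.
void CompileShader(ShaderVersion v, const DWORD* bytecode)
{
    switch (v) {
    case PS_1_1:
    case PS_1_4:
        CompileLegacyPixelShader(bytecode);  // tuned across GF3/GF4/GFFX
        break;
    case PS_2_0:
        CompileNewPs20Path(bytecode);        // first drivers to contain it
        break;
    }
}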
 
I would say:

NVIDIA developed the NV30 with long/complex shaders in mind. Unoptimized drivers can kill even the fastest hardware; the driver team is, in my opinion, at least as important as the hardware guys, just because of the complexity of the drivers. So I would say give them a month (at least), or until the first commercial DX9 app is out (3DMark03 / AquaMark), to optimize their drivers, and then compare the two cards again...

Thomas
 
For fun I ran ShaderMark on my Athlon XP 1800+ with a Radeon 9000 Pro and compared the results to the GFFX's. To my amazement it manages to keep up with the GFFX in quite a few tests, and even manages to be faster in two of them!!
Remember, this is on a 9000Pro

Code:
Before the slash:

   ShaderMark v1.6a - DX9 1.1 - 1.4 Pixel Shader Benchmark - ToMMTi-Systems (http://www.tommti-systems.com)

   video mode / device info
   (1024x768), X8R8G8B8 (D24X8)
   HAL (pure hw vp): NVIDIA GeForce FX 5800 Ultra
   benchmark info
   mip filter reflections: DX9

After the slash:

   ShaderMark v1.6a - DX9 1.1 - 1.4 Pixel Shader Benchmark - ToMMTi-Systems (http://www.tommti-systems.com)

   video mode / device info
   (1024x768), X8R8G8B8 (D24X8)
   HAL (pure hw vp): RADEON 9000 Series
   benchmark info
   mip filter reflections: DX8


shaders: 
Fixed Function - Gouraud Shading 
Fixed Function - Gouraud Shading 
554.02 fps / 295.49 fps

shaders: 
Fixed Function - Diffuse Texture Mapping 
Fixed Function - Diffuse Texture Mapping 
535.56 fps / 234.34 fps

shaders: 
Fixed Function - Diffuse Bump Mapping 
Fixed Function - Diffuse Bump Mapping 
266.24 fps / 109.79 fps

shaders: 
PS 1.0 - Diffuse Bump Mapping 
PS 1.0 - Diffuse Bump Mapping 
219.40 fps / 95.20 fps

shaders: 
PS 1.0 - Diffuse Bump Mapping 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
91.64 fps / 77.78 fps

shaders: 
PS 1.1 - Bumped Diffuse + Specular 
PS 1.0 - Diffuse Bump Mapping 
234.66 fps / 90.01 fps

shaders: 
PS 1.1 - Bumped Diffuse + Specular 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
105.06 fps / 74.68 fps

shaders: 
PS 1.4 - Bumped Diffuse and Specular Lighting with per pixel Specular Exponent 
PS 1.0 - Diffuse Bump Mapping 
165.84 fps / 91.79 fps

shaders: 
PS 1.4 - Bumped Diffuse and Specular Lighting with per pixel Specular Exponent 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
88.28 fps / 75.92 fps

shaders: 
PS 1.1 - Per Pixel Anisotropic Lighting 
PS 1.0 - Diffuse Bump Mapping 
232.52 fps / 93.79 fps

shaders: 
PS 1.1 - Per Pixel Anisotropic Lighting 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
104.63 fps / 77.39 fps

shaders: 
PS 1.4 - Per Pixel Bumped Anisotropic Lighting plus Diffuse 
PS 1.0 - Diffuse Bump Mapping 
116.18 fps / 83.76 fps

shaders: 
PS 1.4 - Per Pixel Bumped Anisotropic Lighting plus Diffuse 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
71.80 fps / 70.30 fps

shaders: 
PS 1.1 - Cubic Environment Bumped Reflections 
PS 1.0 - Diffuse Bump Mapping 
152.99 fps / 67.57 fps

shaders: 
PS 1.1 - Cubic Environment Bumped Reflections 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
85.51 fps / 59.23 fps 

shaders: 
PS 1.4 - Cubic Environment Bumped Diffuse and Independently Colored Reflections 
PS 1.0 - Diffuse Bump Mapping 
100.32 fps / 57.17 fps

shaders: 
PS 1.4 - Cubic Environment Bumped Diffuse and Independently Colored Reflections 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
65.47 fps / 50.28 fps

shaders: 
PS 1.4 - Ghost Shader 
PS 1.0 - Diffuse Bump Mapping 
95.29 fps / 73.90 fps

shaders: 
PS 1.4 - Ghost Shader 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
59.35 fps / 63.43 fps          <--------------------!!!

shaders: 
PS 1.4 - Cubic Environment Bumped Diffuse and Tinted Reflections with per pixel Fresnel Term 
PS 1.0 - Diffuse Bump Mapping 
99.56 fps / 69.70 fps

shaders: 
PS 1.4 - Cubic Environment Bumped Diffuse and Tinted Reflections with per pixel Fresnel Term 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
65.47 fps / 59.82 fps

shaders: 
PS 1.4 - Cubic Environment Bumped Diffuse and Independently Colored Reflections 
PS 1.0 - Diffuse Bump Mapping 
108.05 fps / 70.69 fps

shaders: 
PS 1.4 - Cubic Environment Bumped Diffuse and Independently Colored Reflections 
PS 1.4 - Bumped Diffuse Lighting with per pixel intensity falloff 
68.91 fps / 60.08 fps

shaders: 
PS 1.4 - 4 Lights/Pass Diffuse Bump Mapping 
PS 1.4 - 4 Lights/Pass Diffuse Bump Mapping 
46.85 fps / 43.25 fps

shaders: 
PS 1.4 - 2 Spot Lights 
PS 1.4 - 2 Spot Lights 
20.93 fps / 39.89 fps     <-------------!!!

shaders: 
PS 1.4 - Cubic Environment Bumped Diffuse and Independently Colored Reflections 
PS 1.4 - 4 Lights/Pass Diffuse Bump Mapping 
42.17 fps / 40.36 fps

shaders: 
PS 1.4 - Cubic Environment Diffuse Light and Tinted Refractions 
PS 1.4 - 4 Lights/Pass Diffuse Bump Mapping 
44.95 fps / 44.32 fps

Don't know what to conclude from this (drivers? optimization for 1.4 shaders?) but I was quite amazed. Seems my budget card has some life in it still 8)
 
Okay, I just did a test with ShaderMark 1.6a on my 9700 Pro on a system very close to Brent's (AXP 2700+ vs. AXP 2800+).

System specs:

Asus A7N8X (nforce2)
AXP 2700+ (166fsb)
DCDDR333 512MB
WinXP SP1
Cat 3.0a

Edit: I deleted the results. Not all the tests were displayed correctly, or at all. But since the numbers seemed in line with the others, it looks like a bug. Thomas, any ideas?
 
Something is seriously wrong with the FX, its drivers, or the benchmark, as I've been trying my 8500 (overclocked to 315/315, no AA/AF, mip pref at quality, texture pref at high quality) on this and got the following:

With DX8 up to PS 1.4; the only difference is that the 8500 had auto mip reflections off (couldn't find a way to turn it on :-? )

[attached image: 8500vFX_DX8.gif]


With DX9 up to PS 1.4 (DX8 auto mip gen on the 8500)

[attached image: 8500vFX_DX9.gif]


Even in DX9 the FX is doing very badly in the tests that need PS 1.4. :|
 