Will DX 9 games be faster than equivalent DX 8 games?

g__day

I am grappling with

1) how much of the R300 and NV30 are geared towards DX 8 vs DX 9 optimisations

2) whether DX 9 will give us 80% greater realism in games and 20% more performance - or the other way around.

3) This is perhaps too simple a question, but would an equivalent game written with both a DX 8 and a DX 9 interface run faster under DX 9 because more of the power of the R300 / NV30 can be accessed?

4) If, say, 3DMark 2001 SE were written as both a DX 8 and a DX 9 test bed, would you expect very different results between the DX 8 and DX 9 versions running on either the NV30 or R300?

Apologies if these are too open-ended questions. I realise there is a big difference between DX 8 and the shader-based 3D gaming algorithms that DX 9 allows. I guess I understand you can do more complicated things in simpler ways under DX 9 than under DX 8?
 
g__day said:
I am grappling with

1) how much of the R300 and NV30 are geared towards DX 8 vs DX 9 optimisations

They have the usual DX 8 feature support, but the rest is geared towards DX 9. Of course, they also do all their usual memory architecture tweaks and optimisations, etc...

2) whether DX 9 will give us 80% greater realism in games and 20% more performance - or the other way around.

80% better graphics in future games and 20% performance.

3) This is perhaps too simple a question, but would an equivalent game written with both a DX 8 and a DX 9 interface run faster under DX 9 because more of the power of the R300 / NV30 can be accessed?

Yes, depending on the game, of course. :) I don't think performance will increase if you sit and stare at a window with nothing being rendered other than the window. :)

4) If, say, 3DMark 2001 SE were written as both a DX 8 and a DX 9 test bed, would you expect very different results between the DX 8 and DX 9 versions running on either the NV30 or R300?

Yes, as long as the DX 9 variant of 3DMark uses all the optimisations for the cards and contains the same quality of graphics.

Apologies if these are too open-ended questions. I realise there is a big difference between DX 8 and the shader-based 3D gaming algorithms that DX 9 allows. I guess I understand you can do more complicated things in simpler ways under DX 9 than under DX 8?
 
I think it'll give us the same old stuff until DX10 is out, and then everybody will be arguing about which hardware will support the new features even though the current features aren't even being used.

But that's just going from history. Maybe things will change. :)
 
RussSchultz said:
But that's just going from history. Maybe things will change. :)

Suuuure things will change. ;) Just like nVidia and ATi screwed up, and now we have Matrox, Bitboys, SiS, 3DLabs and Trident competing at the top of the high-end gaming card market. ;)
 
Nappe1 said:
RussSchultz said:
But that's just going from history. Maybe things will change. :)

Suuuure things will change. ;) Just like nVidia and ATi screwed up, and now we have Matrox, Bitboys, SiS, 3DLabs and Trident competing at the top of the high-end gaming card market. ;)

I hope you're not implying that it's either ATI's or NV's fault that any of the aforementioned IHVs failed to survive in the graphics market, couldn't stay competitive, or never got a piece of hardware onto shelves.
 
By "equivalent"- I take that to imply things that can be done the same with both versions of DirectX, from which my answer is "there is no difference."

Meat-and-potatoes DX code, when compiled and run under the current DirectX 9.0 RCs, shows little to no change in performance. If anything, I'd say many things have actually lost a hair of performance, but it's mostly a wash.

As for new DX 9.0 capabilities, much the same goes. A new API/library doesn't magically breathe bandwidth or fillrate into hardware, so it's not as if the remaining crippling points for some IHVs vanish under DX 9.0.

The only argument I can see for improved performance is in the case of PS/VS 2.0, when used optimally by an IHV under DX 9.0 but non-optimally under 1.1/1.2/1.4. An example might be a 1.4 shader in DX 8.1 that previously required a 1.1 fallback or multiple passes, all of which the IHV has since optimized into a single 2.0 path under DX 9. If the game designer goes back and rewrites that portion of the game to use 2.0, the new and improved support may yield small improvements.
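For illustration, here's a minimal sketch (my code, not from the thread) of how a game might select the single-pass 2.0 path from the device caps, taking the multi-pass 1.x fallback otherwise. The helper name is made up; the caps check itself is standard D3D9:

```cpp
#include <d3d9.h>

// Hypothetical helper: true when the device can take a single-pass
// PS 2.0 path instead of a multi-pass 1.x fallback.
bool UseSinglePass20Path(IDirect3DDevice9* device)
{
    D3DCAPS9 caps;
    device->GetDeviceCaps(&caps);
    // R300/NV30-class parts report 2.0; older hardware reports 1.1-1.4
    // and would take the multi-pass fallback instead.
    return caps.PixelShaderVersion >= D3DPS_VERSION(2, 0);
}
```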

As far as "looking better"- there isnt anything I've seen that even scratches the surface of PS/VS 1.1 to date (IHV "tech demos" excluded). Although the NV30/R300 will help plant bigger numbers of market base for developers to target this generation, the slice of the pie will still be likely 80%+ mx400 owners or less. If we see more than one or two games with advanced shaders by the end of the year, I'll be highly shocked- nonetheless any that even fully exploit DX8.1b levels. Im betting we will see Doom3, then one or two really poor, low budget games with very, very basic shader effects (ala aquanox) or possibly one decent budget game with a single shader "gimmick" (ala Morrowind) at
best.
 
C&C renegade has a shader for (some of) the water and it looks sweeeet. but MAN it looks out of place when you compare it to the rest of the game. that just sucks.
 
Ailuros said:
Nappe1 said:
RussSchultz said:
But that's just going from history. Maybe things will change. :)

Suuuure things will change. ;) Just like nVidia and ATi screwed up, and now we have Matrox, Bitboys, SiS, 3DLabs and Trident competing at the top of the high-end gaming card market. ;)

I hope you're not implying that it's either ATI's or NV's fault that any of the aforementioned IHVs failed to survive in the graphics market, couldn't stay competitive, or never got a piece of hardware onto shelves.

If you didn't get it, that was supposed to be something called self-irony.

So no, I didn't imply that. I didn't mean to make it look like it's anyone's fault that those companies didn't succeed as originally planned; it was more a note that anything can happen. This time, that anything was the R300, which made a leap in performance while all the others suffered bad delays.

So, simply: c'est la vie.
Or as we Finns say: "You cannot always win. Not even every time."
 
Sage said:
C&C renegade has a shader for (some of) the water and it looks sweeeet. but MAN it looks out of place when you compare it to the rest of the game. that just sucks.

Which one?

The reflective (EMBM) water from the earlier screenshots that looked like quicksilver?

Or the newer blue one that looks suspiciously like the one we did in S.W.I.N.E.? (It looks like at least someone saw that game ;) )

It doesn't need shaders; it can be composited from three texture layers and runs on any card that can do multitexturing (ours, at least).
Of course, saying "look, we use pixel shaders" is much cooler, isn't it?
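For the curious, a rough sketch of how a three-layer water surface might be set up with fixed-function multitexturing alone. The layer roles (base, ripple, reflection) are my assumptions, not the actual S.W.I.N.E. setup, and it assumes a card with three simultaneous texture stages; with only two, you'd split it across passes:

```cpp
#include <d3d9.h>

// Hypothetical layered water with no pixel shaders: three fixed-function
// texture stages composited in a single pass.
void SetupLayeredWater(IDirect3DDevice9* device,
                       IDirect3DTexture9* base,
                       IDirect3DTexture9* ripple,
                       IDirect3DTexture9* reflection)
{
    // Stage 0: base water color.
    device->SetTexture(0, base);
    device->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);

    // Stage 1: modulate a scrolling ripple/detail layer over the base.
    device->SetTexture(1, ripple);
    device->SetTextureStageState(1, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT);

    // Stage 2: add a faint reflection layer on top.
    device->SetTexture(2, reflection);
    device->SetTextureStageState(2, D3DTSS_COLOROP,   D3DTOP_ADD);
    device->SetTextureStageState(2, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(2, D3DTSS_COLORARG2, D3DTA_CURRENT);

    // Terminate the stage cascade.
    device->SetTextureStageState(3, D3DTSS_COLOROP, D3DTOP_DISABLE);
}
```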
 
g__day said:
I am grappling with

1) how much of the R300 and NV30 are geared towards DX 8 vs DX 9 optimisations

The question makes little sense for the R300, as it supports only one precision and therefore executes DX8 and DX9 code at the same speed.

The NV30, on the other hand, has faster but lower-precision arithmetic, so it will execute DX8-precision code faster than DX9 code.
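To make that concrete, a sketch (mine, not from the thread) of the ps_2_0 partial-precision hint, which is how DX9 lets an app opt into the faster path per instruction:

```cpp
// Hypothetical ps_2_0 fragment using the _pp (partial precision) hint.
// NV30-class hardware may run hinted ops at its faster, lower precision;
// R300 has a single internal precision and simply ignores the hint.
const char* ps =
    "ps_2_0\n"
    "dcl t0.xy\n"
    "dcl_2d s0\n"
    "texld_pp r0, t0, s0\n"  // sample may run at reduced precision
    "mov_pp oC0, r0\n";      // likewise for the final write
```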

2) whether DX 9 will give us 80% greater realism in games and 20% more performance - or the other way around.

Neither.
It will bring similar performance with similar quality.
Or it will bring lower performance with better quality.

Nothing is free in 3D. ;)

3) This is perhaps too simple a question, but would an equivalent game written with both a DX 8 and a DX 9 interface run faster under DX 9 because more of the power of the R300 / NV30 can be accessed?

There are a few things that both the GF3 and R8500 support that are finally being exposed in DX9, like auto-mipmap generation.

Branching in VS 2.0 or the higher instruction limit in PS 2.0 can be such a thing.

But I wouldn't expect much speed improvement in practice. Not that you'll ever be able to tell...
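As an aside, here's a sketch of what VS 2.0 static branching looks like (my example; the register assignments are made up). One shader covers two per-draw paths via a boolean constant instead of swapping shaders:

```cpp
// Hypothetical vs_2_0 shader using static flow control (new in VS 2.0).
// b0 selects a lit or unlit path per draw call; c0-c3 = world-view-proj,
// c4 = light direction, c5.x = ambient floor, c6 = unlit color.
const char* vs =
    "vs_2_0\n"
    "dcl_position v0\n"
    "dcl_normal v1\n"
    "m4x4 oPos, v0, c0\n"
    "if b0\n"
    "  dp3 r0.x, v1, c4\n"      // diffuse N.L
    "  max r0.x, r0.x, c5.x\n"  // clamp to the ambient floor
    "  mov oD0, r0.x\n"
    "else\n"
    "  mov oD0, c6\n"
    "endif\n";

// App side, per draw call:
//   BOOL lit = TRUE;
//   device->SetVertexShaderConstantB(0, &lit, 1);
```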

4) If, say, 3DMark 2001 SE were written as both a DX 8 and a DX 9 test bed, would you expect very different results between the DX 8 and DX 9 versions running on either the NV30 or R300?

No.
 
Didn't some people report that Morrowind is suddenly faster with DX9 drivers and the DX9 API installed?

Haven't had time to verify this myself yet...

But it could be that the drivers are better rather than DX9 having anything to do with it.
 
misae said:
Didn't some people report that Morrowind is suddenly faster with DX9 drivers and the DX9 API installed?

Haven't had time to verify this myself yet...

But it could be that the drivers are better rather than DX9 having anything to do with it.

If you mean ATI's DX9 drivers, they are faster than the latest DX8 ones. They are also a lot buggier.

In theory a game can benefit from DX9 if it was runtime-limited, but it's highly unlikely.
 
Auto-mipmap generation? I seem to remember you could force the driver to do auto-mipmap generation on the RIVA TNT for textures that weren't mipmapped in the first place?
 
So is DX 9 merely easier to code for than DX 8? Does it give 1) no increased performance and 2) no new features?

I hoped (wished, really) that at any comparable level of colour detail (i.e. keeping the colour precision the same between DX 8 and DX 9) it would be better tuned (i.e. 1%-5% faster).

Predominantly, I hoped DX 9 would allow the more powerful (version 2.0) shaders to do more work faster on the R300 or NV30 than DX 8 or version 1.0 shaders allow. Is this definitely not the case?
 
DX 9 adds many image quality features. I don't think anyone has said it doesn't give new features...

DX 9 shaders are more powerful, so in some cases they can enable similar effects more quickly.

I think it is confusing to discuss it this way... for instance, "Nothing is free in 3D" is confusing because if you dish out the extra cash for DX 9 hardware, you've already paid...

That is, the consumer does get some of the DX 9 functionality "for free" once the hardware supports it.
 
g__day said:
So is DX 9 merely easier to code for than DX 8? Does it give 1) no increased performance and 2) no new features?

I think there's a misunderstanding. There are new features.
But most of the speed improvement is likely to come from the higher overall performance of the new cards, and not from the API change.

I hoped (wished, really) that at any comparable level of colour detail (i.e. keeping the colour precision the same between DX 8 and DX 9) it would be better tuned (i.e. 1%-5% faster).

With the same application, and the same drivers?
Only if the app was API-limited.
I don't think that applies to any current application.
Not that it's impossible...

Predominantly, I hoped DX 9 would allow the more powerful (version 2.0) shaders to do more work faster on the R300 or NV30 than DX 8 or version 1.0 shaders allow. Is this definitely not the case?

It depends. If you use the new functionality (like sin/cos in the VS or rsq in the PS), then it will likely be faster than a similar approximation done in the older version.
The only game I can think of in this context is Doom3, but it's an OpenGL game, so it's irrelevant to this discussion.
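To illustrate the sin/cos point, a sketch (mine; the register choices are made up) of the vs_2_0 sincos macro-instruction, which replaces the hand-rolled Taylor expansion a vs_1_1 shader would need; ps_2_0 similarly gains rsq for reciprocal square roots:

```cpp
// Hypothetical vs_2_0 fragment: sincos computes cos into .x and sin
// into .y in one macro-instruction. c10/c11 are assumed to hold the
// required D3DSINCOSCONST1/D3DSINCOSCONST2 coefficient constants,
// and c12.x an angle supplied by the app.
const char* vs =
    "vs_2_0\n"
    "dcl_position v0\n"
    "m4x4 oPos, v0, c0\n"             // c0-c3 = world-view-proj
    "mov r1.w, c12.x\n"               // angle into a scalar temp
    "sincos r0.xy, r1.w, c10, c11\n"  // r0.x = cos, r0.y = sin
    "mov oD0, r0.xyxy\n";             // consume the result
```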
 
arjan de lumens said:
Auto-mipmap generation? I seem to remember you could force the driver to do auto-mipmap generation on the RIVA TNT for textures that weren't mipmapped in the first place?

That was a feature for lazy developers who didn't bother to generate mipmaps themselves. Around that time, render-to-texture wasn't used at all (if it was even supported by drivers).

When rendering to a texture you can't pregenerate the mipmap levels; the only correct way is to generate them after rendering. But that needs hardware support, or it would be too slow to be useful.

The hardware support is there, and it has been exposed in OpenGL for some time, but DX9 is the first version of D3D where it's available.
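In D3D9 terms this surfaces as the D3DUSAGE_AUTOGENMIPMAP flag. A minimal sketch (my code; error handling omitted):

```cpp
#include <d3d9.h>

// Hypothetical helper: a render-target texture whose mip sublevels the
// driver regenerates in hardware after the top level is rendered to.
IDirect3DTexture9* CreateAutoMipRenderTarget(IDirect3DDevice9* device)
{
    IDirect3DTexture9* tex = NULL;
    // Levels = 0 requests a full mip chain; D3DUSAGE_AUTOGENMIPMAP asks
    // the driver to fill the sublevels itself when the top level changes.
    device->CreateTexture(256, 256, 0,
                          D3DUSAGE_RENDERTARGET | D3DUSAGE_AUTOGENMIPMAP,
                          D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &tex, NULL);
    return tex;
}
```

Calling GenerateMipSubLevels() on the texture forces regeneration if the app needs the chain up to date immediately.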
 