FP16 and market support

I'm not much of a coder, admittedly, but from what I have read here and elsewhere, the only thing that needs to be done is that, for each bit of code that can run at partial precision, a "precision hint enable" instruction gets placed in front of it. The driver should handle the rest automatically.

That way, hardware that does not allow for precision hinting renders at full precision, while the drivers for other cards pick up the hint and render at partial precision.
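Something like this, if I've understood the articles correctly (a rough sketch in DX9 HLSL for a ps_2_0 target; the names are made up for illustration, and apparently the compiler turns the `half` type into `_pp` modifiers on the assembly instructions, which drivers for full-precision-only hardware just ignore):

```hlsl
// Rough sketch only - DX9 HLSL, ps_2_0 target; baseMap and tintColour
// are hypothetical names.
sampler2D baseMap;
float4 tintColour;

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    // 'half' is the partial-precision hint; the compiler emits _pp-modified
    // instructions for these operations. Drivers for hardware with only one
    // (full) precision ignore the hint and keep running at full precision.
    half4 base = tex2D(baseMap, uv);
    return base * (half4)tintColour;
}
```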
 
It's not a question of just "doing it"; it's a question of evaluating where it is correct to do it - if you just apply it everywhere you can introduce precision issues.

There is no chance of having 3DMark03 changed now, since that would alter the historical scores. NVIDIA had ample opportunity whilst they were on the Beta program to register their interest in partial precision in the application and considering ATI is invariant to partial precision they wouldn't have argued. The fact that it's not part of 3DMark03 is a clear suggestion that NVIDIA didn't deem it an issue at the time.
 
Reverend said:
Changing precision is not an easy thing to do when you're talking about a whole application.

The patches to 3DM03 that FM has done are extremely easy by comparison when you're talking about what those patches are. Re-ordering and changing instructions are much easier than changing precision.

In case you don't know, of course.
I don't know why that would be.

First of all, you're not changing precision throughout the whole program. You're changing precision in shaders. Secondly, all one needs to do is examine what precision is needed for each variable. That shouldn't be overly hard, given a few simple rules (e.g. use full precision for anything that will be used to calculate a texture address, use partial precision for all simple color processing, etc.).
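As a rough sketch of those rules (DX9 HLSL; the samplers and numbers here are just made-up names and values to show the split between full- and partial-precision work):

```hlsl
sampler2D offsetMap;
sampler2D baseMap;

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    // Anything feeding a texture address stays at full precision - FP16's
    // 10-bit mantissa isn't enough for addressing into a large texture.
    float2 offset   = tex2D(offsetMap, uv).rg * 0.04f - 0.02f;
    float2 bumpedUV = uv + offset;

    // Plain colour processing takes the partial-precision hint (half).
    half4 base = tex2D(baseMap, bumpedUV);
    half4 tint = half4(0.9, 0.85, 1.0, 1.0);
    return base * tint;
}
```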
 
I'm aware it is possible to have precision issues, but they should usually only occur when you are repeatedly modifying the same values many times within the shader, which I would suggest is an easy situation to spot.

Pretty easy to narrow down too, I would have thought: simply comment out the PP enable hint you think is causing grief and test. If you were correct, remove that precision hint.

I don't know what nVidia did or did not discuss with Futuremark regarding partial precision, but bear in mind DX9 was originally multi-precision, which Microsoft then changed without explanation, and the PP hinting is IMO there to assist hardware designed with the original multi-precision operation in mind.
 
I don't know what nVidia did or did not discuss with Futuremark regarding partial precision, but bear in mind DX9 was originally multi-precision, which Microsoft then changed without explanation, and the PP hinting is IMO there to assist hardware designed with the original multi-precision operation in mind.

a.) This makes no sense: "DX9 was originally multi-precision"... "which Microsoft then changed without explanation, and the PP hinting is IMO there to assist hardware designed with the original multi-precision operation in mind" -> that's the same thing.

b.) You are dealing in nothing but rumour here. Microsoft have explicitly stated that they didn't "change anything without explanation", but that there was a typographic error in the early specification; if you read it correctly anyway, there was little room for interpretation. I was in a room with a bunch of journalists when MS's Craig Peeper said as much.
 
gkar1 said:
Exactly my thoughts; the perils are for the hardware manufacturers who don't follow the spec by offering lower-than-required precision.
I'd rather say the specs should follow the hardware manufacturers. Or would you still have the same opinion if DirectX required 32-bit float precision so ATI was not compliant?

Sigh, fanboys... :rolleyes:
 
Nick said:
I'd rather say the specs should follow the hardware manufacturers. Or would you still have the same opinion if DirectX required 32-bit float precision so ATI was not compliant?

Sigh, fanboys... :rolleyes:

You're saying that people should write a spec *after* the hardware that is supposed to support that spec has been built? :rolleyes: What is the point of a spec if no one uses it to build their graphics cards? :rolleyes:

If the spec had been for 32-bit, then that is what ATI would have built. Nvidia would probably have built a part that would run 64-bit at single-digit frame rates and 24-bit faster, but under spec. :rolleyes:
 
You're saying that people should write a spec *after* the hardware that is supposed to support that spec has been built? :rolleyes: What is the point of a spec if no one uses it to build their graphics cards? :rolleyes:
Point taken, but I don't think it's inherently bad to go beyond specifications. With OpenGL, Nvidia has the luck of being able to produce a lot of extensions that have proven to be very efficient. But while waiting for OpenGL 2.0, it's stuck with the DirectX specifications while the hardware can actually do a lot more (which could have been exposed in OpenGL).

Purely from a hardware manufacturer's point of view, and I know this sounds like monopoly strategy, yes, it would be better if the hardware was designed first and then the API was written. See, for my software renderer I also have my own API, where several special features can be accessed, but now I'm planning to wrap a DirectX and/or OpenGL interface around it, and that will inherently make things less efficient than they really are.

On the other hand, I fully agree that a common interface is necessary for compatibility reasons. All I'm saying is that API specifications have a tremendous influence on hardware performance. It's not as simple as it sounds to design a whole new generation of chips for one API.

If the spec had been for 32-bit, then that is what ATI would have built. Nvidia would probably have built a part that would run 64-bit at single-digit frame rates and 24-bit faster, but under spec. :rolleyes:

Remember DirectX 8.1? Wasn't it designed specifically so that ATI could expose PS 1.4, which actually had a totally different specification from the rest? I'd say a DirectX 9.1 to answer Nvidia's demands would be fair right now...
 
What would you add in DX9.1 to help NVIDIA's performance?
FP16 support? Already in DX9.
A better compiler? No need for a new DX revision.

So what would you add in DX9.1 to improve the performance of the GeForce FX?
The only thing I can think of is to enable FX12 for the "old" FX. But then it should be called DX8.9, not DX9.1...
 
Chalnoth said:
My point is that FP16 is plenty for many calculations, and the support of FP16 should not be seen as a drawback of the NV3x architecture.
*sigh*
My point is that the requirement to use it for decent performance means that FP16 support should not be seen as an advantage either - which you seem to think it is.
 
DaveBaumann said:
NVIDIA had ample opportunity whilst they were on the Beta program to register their interest in partial precision in the application and considering ATI is invariant to partial precision they wouldn't have argued

Do you really believe this? ATI would just go "oh yeah, sure, run at less precision than us and we will call it fair"... IMO that would never ever have happened. But it seems NV is already just running it at partial precision anyway :).
 
radar1200gs said:
The XGI/Volari solution is a complete and utter joke. Only those who sincerely believed Bitboys would take over the 3D world will take this chipset seriously.
So as far as I'm concerned there are currently 3 viable DX9 architectures out there.

You're kidding me, right?
Considering how unstable the DeltaChrome OpenGL ICD is, the lack of AF and of FSAA (S3 hopes to have 2x FSAA up to 1024; if you use a higher resolution, no FSAA for you, etc.), that many shaders are not working properly, that ShaderMark 2 doesn't work properly, the same as other shader programs, etc.

If you believe that the Volari is "a complete and utter joke", then the DeltaChrome is much worse in all regards, considering the Volari has fewer issues and has working 4x FSAA.
 
I agree; until they have drivers capable of at least running basic shader programs correctly, neither the S3 nor the XGI parts can be considered possible DX9 solutions. Until they get AF and FSAA working, they can't even be considered 3D graphics cards.
 
Bouncing Zabaglione Bros. said:
You're saying that people should write a spec *after* the hardware that is supposed to support that spec has been built? :rolleyes: What is the point of a spec if no one uses it to build their graphics cards? :rolleyes:
Um, that is what happens. Modern architectures have their designs pretty much set somewhere between 18-24 months prior to release. That's quite a bit before the API is designed.

Do you really think that ATI designed the R300 after DirectX 9, which wasn't released until nearly 6 months after the R300 was released?
 
Sxotty said:
DaveBaumann said:
NVIDIA had ample opportunity whilst they were on the Beta program to register their interest in partial precision in the application and considering ATI is invariant to partial precision they wouldn't have argued

Do you really believe this? ATI would just go "oh yeah, sure, run at less precision than us and we will call it fair"... IMO that would never ever have happened. But it seems NV is already just running it at partial precision anyway :).
Why not? ATI could possibly have raised concerns/objections in such a case, but there would have been little point in them continuing with such an argument; as Dave has already pointed out, the use of the modifier was irrelevant to them. Let's face it - ATI had a DX9 product released well before 3DMark03 was even initially planned to be launched, and they also had plenty of time to work on their DX9 drivers thanks to the late launch of DX9 itself; they could afford to be magnanimous ;).
 
Nick said:
Point taken, but I don't think it's inherently bad to go beyond specifications. With OpenGL, Nvidia has the luck of being able to produce a lot of extensions that have proven to be very efficient. But while waiting for OpenGL 2.0, it's stuck with the DirectX specifications while the hardware can actually do a lot more (which could have been exposed in OpenGL).
No, it's not. It's not even stuck with the OpenGL 2.0 specifications, as nVidia can still add extensions to add functionality to GLSL. And one of the primary problems with DirectX 9.0 and the NV35+ isn't partial precision, but rather the intermediate assembly format. GLSL gets rid of that entirely, and will make it much easier for nVidia to optimize properly (though it will probably take some time: remember it took nVidia about six to eight months to get reasonably general optimizations for the NV3x in DX9).

Remember DirectX 8.1? Wasn't it designed specifically so that ATI could expose PS 1.4, which actually had a totally different specification from the rest? I'd say a DirectX 9.1 to answer Nvidia's demands would be fair right now...
DirectX 8.1 also added PS 1.2 and 1.3.
 
Chalnoth said:
Remember DirectX 8.1? Wasn't it designed specifically so that ATI could expose PS 1.4, which actually had a totally different specification from the rest? I'd say a DirectX 9.1 to answer Nvidia's demands would be fair right now...
DirectX 8.1 also added PS 1.2 and 1.3.

Hopefully MS has learned their lesson from that mistake and won't splinter shaders within their API like that again.
 
Chalnoth said:
Um, that is what happens. Modern architectures have their designs pretty much set somewhere between 18-24 months prior to release. That's quite a bit before the API is designed.

Do you really think that ATI designed the R300 after DirectX 9, which wasn't released until nearly 6 months after the R300 was released?

Do you really think ATI and Nvidia didn't know and have a great deal of input into what the API would be a long, long time before the API was released to the public? Don't you think both Nvidia and ATI knew what the major parts of DX9 would be long before spending millions on designing and building a card for it?
 