FP16 and market support

jpaana said:
Heh, leaving out this:
Tridam said:
Radeon 9800 Pro HLSL : 125 MPix/s
Radeon 9800 Pro Cg : 100 MPix/s
25% difference for non-Cg tailored hardware is not close in my book.
Um, I thought the point was that Cg was optimized for nVidia's hardware? Of course the R3xx would receive a drop in performance (this is why, for Cg to succeed, ATI really needed to write their own back-end compiler for it, or nVidia had to write their own optimized compiler for ATI's hardware).
 
Doomtrooper said:
The Doom 3 engine is also extremely over-hyped...and getting dated quickly :!:

The Half-Life 2 engine is extremely over-hyped...and looking inferior already.

Look everyone can have an opinion!

Reverend said:
DOOM3 is not where the importance of higher precision really matters. Textures are hardly the best example for stressing the importance of FP.

BTW rev, JC specifically asked for hardware makers to include higher precision in their next-gen cards b/c Doom3 would look crappy w/o it. He did ask for 64-bit precision specifically, as well as asking for possibly higher precision support, and that may have a lot to do with why he is supporting NV even though they are looking fairly bad w/respect to ATI at the moment.
I believe the update was in April of 2000 or so.
 
Doomtrooper, I think you did not understand what I wrote.

Sxotty said:
and that may have a lot to do with why he is supporting NV even though they are looking fairly bad w/respect to ATI at the moment.

My point was that since NV did exactly what he said, and ATI did more, perhaps he felt responsible for their poor decision to go with FP16 support since he specifically asked for it, and thus it is causing him to support NV to a greater extent than they deserve.

edit: in other words, I realized 64-bit color was FP16. :) And I think we are all happy that higher precisions exist now.
 
Neeyik said:
Unfair because you posted those images in response to a comment about FP16 and associated rendering errors with shaders - those in the rthdribl demo are due to the render target precision not shader precision.

Neeyik, there have been many examples posted of the lack of precision...3DMark03 showed it very well (the Nature scene, the elephant with black speckles, the lighting in Aquamark 3). The point is that lighting will expose that weakness, especially in the NV30 class boards.
I don't think a demo that is written in HLSL and follows the spec is 'unfair' either; what is unfair is the amount of BS spread by one IHV's PR machine preying on the uneducated consumer.
 
Chalnoth said:
Um, I thought the point was that Cg was optimized for nVidia's hardware? Of course the R3xx would receive a drop in performance (this is why, for Cg to succeed, ATI really needed to write their own back-end compiler for it, or nVidia had to write their own optimized compiler for ATI's hardware).

Ok, I was talking about the Cg compiler sucking compared to the DX HLSL compiler, as you were praising NVidia and Cg and characterising the DX HLSL compiler as "rather poor". Still, Cg losing even on NVidia hardware pretty much makes that point anyway. In any case, it would still be interesting to see how much the new NVidia "general" shader optimizer improves the Cg code, which obviously can already be made better even at the PS assembly level, and how it compares to the ps2a profile of the DX HLSL compiler.
 
radar1200gs wrote:
secondly, you don't know how good or bad NV3x's HDR rendering is since nVidia hasn't exposed the feature in its drivers yet - you are simply hoping it will be bad (I seem to recall 3dfx thinking they had nVidia beat with FSAA too... - never say never).


It is exposed under their OGL extensions; it can't currently be exposed under DX because it doesn't conform to normal DX specifications.

There is no single "right" way of doing anything in 3d. If nVidia is supporting HDR the OpenGL way then I'd suggest that way has a certain credibility to it. This is simply more proof of Microsoft doing everything they possibly can to screw nVidia over. What are "normal DX specifications" anyhow, compared to OpenGL specifications that carry the support of companies such as SGI, 3DLabs etc? This is the exact same issue as FP16, which has been used for years and years to produce professional work. If it's good enough for professional use it's certainly good enough for gaming.



DX9 wasn't released when R300 was out, so the shipping drivers were DX8. All the operations are carried out internally at FP precision and converted at input and output to the relevant required precision.

Did I ever say anything about how the operations are carried out? R300 had an issue with certain older games, NOLF2 was proof of that and one of your ATi guys confirmed that they rewrote older DX support in this very forum. Truth hurts, doesn't it?

It's so obvious that Microsoft clearly favors ATi over nVidia with DX9 that it isn't even funny. And I wouldn't say R300 was designed for DX9; rather I would say DX9 was altered for R300.
 
Could you post a link to the thread? I do seem to recall an issue involving ATI cards and NOLF, but don't remember exactly what it was or what was said about it.
 
radar1200gs said:
radar1200gs wrote:
secondly, you don't know how good or bad NV3x's HDR rendering is since nVidia hasn't exposed the feature in its drivers yet - you are simply hoping it will be bad (I seem to recall 3dfx thinking they had nVidia beat with FSAA too... - never say never).

It is exposed under their OGL extensions; it can't currently be exposed under DX because it doesn't conform to normal DX specifications.
There is no single "right" way of doing anything in 3d.
If your HW doesn't meet the specs, then it's wrong. Make sense?
If nVidia is supporting HDR the OpenGL way then I'd suggest that way has a certain credibility to it.
The "OpenGL way"? What are you talking about? AFAIK, nvidia is exposing floating point buffers via their own extensions. That means you are restricted by the limitations of that extension, whatever they are.
This is simply more proof of microsoft doing everything they possibly can to screw nVidia over. What are "normal DX specifications" anyhow, compared to OpenGL specifications that carry the support of companies such as SGI, 3DLAbs etc?
DX9 specifies that floating point textures should support wrap address modes, which, apparently, nvidia's cards can't support. Whose problem is that?
This is the exact same issue as FP16 which has been used for years and years to produce professional work. If its good enough for professional use its certainly good enough for gaming.
Please tell us what applications use FP16. It's not an IEEE standard, for example.
DX9 wasn't released when R300 was out, so the shipping drivers were DX8. All the operations are carried out internally at FP precision and converted at input and output to the relevant required precision.
Did I ever say anything about how the operations are carried out? R300 had an issue with certain older games, NOLF2 was proof of that and one of your ATi guys confirmed that they rewrote older DX support in this very forum. Truth hurts, doesn't it?
Where was this ever said? Link?
It's so obvious that Microsoft clearly favors ATi over nVidia with DX9 that it isn't even funny. And I wouldn't say R300 was designed for DX9; rather I would say DX9 was altered for R300.
Yeah, it's a big conspiracy. :rolleyes:
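For reference, this is roughly the kind of per-vendor check an OpenGL application ends up doing before it can touch float buffers, since each IHV exposes them through its own extension (with that extension's own restrictions). Just a sketch: the extension names are the published GL_NV_float_buffer and GL_ATI_texture_float strings, everything else here is purely illustrative.

#include <GL/gl.h>
#include <cstring>
#include <cstdio>

// Returns true if the current GL context advertises the named extension.
static bool hasExtension(const char* name) {
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    return exts != nullptr && std::strstr(exts, name) != nullptr;
}

// Assumes a GL context is already current.
void reportFloatBufferSupport() {
    if (hasExtension("GL_NV_float_buffer"))
        std::printf("Float buffers via NV_float_buffer (subject to that extension's restrictions).\n");
    else if (hasExtension("GL_ATI_texture_float"))
        std::printf("Float texture formats via ATI_texture_float.\n");
    else
        std::printf("No floating point buffer/texture extension advertised.\n");
}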
 
radar1200gs said:
radar1200gs wrote:
Did I ever say anything about how the operations are carried out? R300 had an issue with certain older games, NOLF2 was proof of that and one of your ATi guys confirmed that they rewrote older DX support in this very forum. Truth hurts, doesn't it?

It's so obvious that Microsoft clearly favors ATi over nVidia with DX9 that it isn't even funny. And I wouldn't say R300 was designed for DX9; rather I would say DX9 was altered for R300.

What are you smoking and where can I get some??
 
jpaana said:
Ok, I was talking about the Cg compiler sucking compared to the DX HLSL compiler, as you were praising NVidia and Cg and characterising the DX HLSL compiler as "rather poor".
I don't remember doing that in this thread.

I do remember saying that HLSL is designed rather poorly. Cg is designed a little bit better, but is still much more limited than GLSL's design. I wasn't talking about how the compiler optimizes: I was talking about the way it was put together.
 
radar1200gs said:
FP24 isn't an IEEE standard either...
And what's your point? MS picked FP24 over FP16, though. FP24 is higher quality than FP16, and ATI can actually run FP24 at reasonable rates. NVIDIA cannot run FP32 at reasonable rates. So it sucks for NVIDIA that they went with FP16 and FP32 instead of FP24, because now they end up with a lower FP precision than ATI while keeping games playable.
 
radar1200gs said:
FP24 isn't an IEEE standard either...
That's the best you can do for a comeback? :LOL: All those points I made or asked for more information on, and this is the reply I get?

I know FP24 is not an IEEE standard, but you didn't seem to know that FP16 wasn't.
 
OpenGLGuy:
I'll reply to your other points if and when I feel like doing so.

I never said FP16 was an IEEE standard, or FP24 either; FP32 is the only IEEE-backed standard of the three. Doesn't alter the fact that FP16 has had years of professional use one iota.

As for the HDR issue, where would the 3d industry be if there were only one way to implement a z-buffer, or only one texture format? To only have one valid method of doing HDR in DX9 is ludicrous, just like only having one "valid" precision is ludicrous.
 
jvd said:
And what's your point? MS picked FP24 over FP16, though.
MS picked FP24 and FP16. You are misguided as to how much of a benefit FP24 really is over FP16, as it seems most ATI supporters are.

Higher precision than FP16 is required for non-color data. High dynamic range color data will not exhaust the dynamic range of FP16. FP24 is a bare minimum for texture addressing (it may not be quite enough for proper texture addressing). Why use higher precision when it's not needed if you can gain performance from using a lower precision?
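To put some rough numbers on that, here's a quick back-of-the-envelope sketch. It assumes the commonly cited bit layouts (FP16 = 1 sign / 5 exponent / 10 mantissa bits, R300's FP24 = 1 sign / 7 exponent / 16 mantissa bits, FP32 = 1 sign / 8 exponent / 23 mantissa bits) and simply computes the largest representable value, the relative precision step, and how many sub-texel bits are left over when addressing a 2048-texel texture:

#include <cmath>
#include <cstdio>

struct Format { const char* name; int expBits; int mantBits; };

int main() {
    // Assumed bit layouts: sign / exponent / mantissa (implicit leading 1).
    const Format fmts[] = { {"FP16", 5, 10}, {"FP24", 7, 16}, {"FP32", 8, 23} };
    const int textureSize = 2048;  // texels along one axis
    const int intBits = static_cast<int>(std::ceil(std::log2(static_cast<double>(textureSize))));

    for (const Format& f : fmts) {
        const int bias = (1 << (f.expBits - 1)) - 1;
        // Largest finite value ~ 2^bias * (2 - 2^-mantissa)
        const double maxVal = std::ldexp(2.0 - std::ldexp(1.0, -f.mantBits), bias);
        // Relative precision step (spacing between adjacent values, as a fraction)
        const double relStep = std::ldexp(1.0, -f.mantBits);
        // Significant bits = mantissa + implicit 1; subtract the integer bits needed
        // for the texel index and what remains is sub-texel precision.
        int subTexelBits = (f.mantBits + 1) - intBits;
        if (subTexelBits < 0) subTexelBits = 0;
        std::printf("%s: max ~%.3g, relative step ~%.2g, %d sub-texel bits at %d texels\n",
                    f.name, maxVal, relStep, subTexelBits, textureSize);
    }
    return 0;
}

Which is the argument above in a nutshell: FP16's ~65k range is plenty for HDR colour, but at 2048 texels it has no mantissa bits left for the sub-texel position, while FP24 still keeps about six.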
 
If your HW doesn't meet the specs, then it's wrong. Make sense?

Who makes the specs? And they are automagically right? Infallible? Don't mean to be harsh, but I suppose I'm just not very good at expressing myself.
 
Chalnoth said:
Why use higher precision when it's not needed if you can gain performance from using a lower precision?

The same question applies to Nvidia's use of 32 bit over 24 bit, doesn't it?

Why does Nvidia have slower 32 bit when they only "needed" 24 bit? They would have been faster, within the spec, and would not have had to deal with the issues FP16 gives them if they had gone with the spec.
 
radar1200gs said:
I never said FP16 was an IEEE standard, or FP24 either; FP32 is the only IEEE-backed standard of the three. Doesn't alter the fact that FP16 has had years of professional use one iota.
Where? What applications? In any event, the point is moot for this discussion. FP64 is used for many applications too; again, not relevant here.
As for the HDR issue, where would the 3d industry be if there were only one way to implement a z-buffer, or only one texture format?
There are still specifications for these things, that is the point.
To only have one valid method of doing HDR in DX9 is ludicrous, just like only having one "valid" precision is ludicrous.
Again, what are you talking about? There are many ways of doing HDR. Float buffers are just one of them. HDR is not a part of the API spec (I see no "HDR" cap bit). HDR is an application feature.

Now, floating point buffers/textures are a part of the DX9 spec. The spec says that you need certain functionality with these textures (i.e. wrapping) but that other features are optional (i.e. filtering). What is the problem here? ATI had the spec. nvidia had the spec. S3 had the spec. XGI had the spec. If you don't follow the spec, who's to blame? Shouldn't your design be flexible enough to at least meet the minimum requirements? DX9 is not OpenGL where you can just support any old feature via extensions. DX9 attempts to keep a uniform field so that an application looking for a certain feature (i.e. float buffers) will get the same behavior on any card supporting said feature. OpenGL allows you more flexibility because you can create an extension for anything you like. The burden then falls on the application to adapt to differing feature support across different platforms, often a difficult task.

Lastly, DX9 does not specify "one" precision. It specifies a minimum precision and that is FP24. If you support more than FP24, good for you. If you happen to have poor performance doing so, whose fault is that?
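For what it's worth, here is a minimal sketch of how an application asks DX9, per format, which of those queryable float-texture usages a driver actually reports before relying on them. The constants are the standard d3d9.h ones; the particular format picked here is just an example.

#include <windows.h>
#include <d3d9.h>
#include <cstdio>

int main() {
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    const D3DFORMAT adapterFmt = D3DFMT_X8R8G8B8;       // current display mode format
    const D3DFORMAT floatTex   = D3DFMT_A16B16G16R16F;  // 64-bit (FP16 per channel) texture

    // Is the format usable as a texture at all?
    HRESULT texOk = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                           adapterFmt, 0, D3DRTYPE_TEXTURE, floatTex);
    // Does the driver report filtering for this format?
    HRESULT filterOk = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                              adapterFmt, D3DUSAGE_QUERY_FILTER,
                                              D3DRTYPE_TEXTURE, floatTex);
    // Does the driver report wrap addressing plus mipmapping for this format?
    HRESULT wrapOk = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                            adapterFmt, D3DUSAGE_QUERY_WRAPANDMIP,
                                            D3DRTYPE_TEXTURE, floatTex);

    std::printf("FP16 texture: %s, filtering: %s, wrap+mip: %s\n",
                SUCCEEDED(texOk)    ? "yes" : "no",
                SUCCEEDED(filterOk) ? "yes" : "no",
                SUCCEEDED(wrapOk)   ? "yes" : "no");

    d3d->Release();
    return 0;
}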
 