Is partial precision dead?

nelg said:
Is partial precision dead?

I wish it were, but probably not. I blame it solely on the NV3x series. Programmers really shouldn't have to worry about it, but then again, they do. I wouldn't be surprised if at some point NVidia got rid of FP16 support in their hardware and went fully FP32 simply because of their transistor budget (for backwards compatibility's sake, they would run those shaders much like ATI does, by ignoring PP)... but I could be wrong.

Do any recently released games still use it?

I'm pretty sure some do, but I'm not qualified to answer that.
 
Those cards are probably not fast enough to play games that require DX9 shaders at a decent frame rate anyway.

Most of them will have to play the game in DX8 mode or won't play it at all.
 
Jawed said:
Still plenty of PP in Futuremark's 3DMk06 :!:
And if you disable PP for the 7800 you'd see a significant performance delta, not to mention image quality improvements.

-FUDie
 
There is a ~4.5% loss in the SM 2.0/SM 3.0 scores with full precision, and an average 0.64 fps loss in the game tests (the maximum loss is 0.92 fps).
As for IQ, I've asked the author of the article and he said that he didn't see any obvious difference (using the GeForce 7800 board).
 
From a D3D point of view, the D3D9 spec was a lot less rigid than D3D10's - in D3D10 all of the precisions and characteristics are really well defined in the spec. So PP could well be less of a thing in the future.

Although there will still be FP16 formats around - so there will be some PP mathematics going on. Sometimes it's just not necessary to use full FP32, and the 50% memory saving is quite tempting ;)

Jack
 
Partial precision (half is a lot easier to write, you know...) is a lot faster in G70 fragment shaders... Using halves doubles the effective size of the register pool, which allows up to twice as many fragments to be in flight at once.

It's still the first piece of advice NVIDIA gives about writing fragment shaders.
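
For anyone who hasn't seen it in practice, here's a minimal sketch of what that advice looks like - purely hypothetical HLSL/Cg-style code, names illustrative, not from any real game. The colour/lighting intermediates are declared half so they sit in FP16 registers, while the texture coordinates stay float, which is roughly what the guidance boils down to:

Code:
// Hypothetical pixel shader - only meant to illustrate half vs. float usage.
sampler2D diffuseMap;

float4 main(float2 uv       : TEXCOORD0,  // texcoords: keep full FP32
            float3 normal   : TEXCOORD1,
            float3 lightDir : TEXCOORD2) : COLOR
{
    // Colour and lighting maths tolerate FP16, so use half registers here;
    // on G70 this halves the register footprint of these temporaries,
    // letting more fragments stay in flight at once.
    half3 albedo = (half3)tex2D(diffuseMap, uv).rgb;
    half  nDotL  = saturate(dot((half3)normalize(normal),
                                (half3)normalize(lightDir)));
    half3 colour = albedo * nDotL;

    return float4(colour, 1.0);
}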
 
ZioniX said:
As for IQ, I've asked the author of the article and he said that he didn't see any obvious difference (using the GeForce 7800 board).
There isn't, really - I've taken 3 random samples from each test, with and without PP hints, and compared them using ATI's Compressonator. It's only when you amplify the differences by factors such as 400% that you begin to notice them.
 
Why do I sense a specific "allergy" against split precision lately? There are still quite a few cases where FP16 can be adequate and as long as it saves resources it can have a reason to survive even in the future.

Heck, on the other hand I'd love to see trilinear TMUs too, but when someone points out that a lot of textures still don't need more than bilinear, then I can also understand the redundancy involved in the added hardware and bandwidth cost.

Why not the same allergy for things like INT10 HDR vs. FP16 HDR, on the other hand? If the result is indistinguishable between the two in real time, ask me if I personally care.
 
Lately? :LOL: Because that kind of very basic function, when it isn't dev transparent, is not a good thing. If the system is fast enough not to need it, then beautiful --but then the discussion is pointless anyway. One presumes that if NV is putting it at the top of its list of recommendations, that isn't quite true yet. If the system really needs it, then it creates work that could better go elsewhere. And if the system *really* needs it, then it creates pressure to make IQ tradeoffs.

I frankly don't understand the HDR implementation options well enough yet to say that about it. . .but maybe I would if I did. And HDR is usually an option for the gamer to turn on or off in his app control panel --shaders aren't.

I tried to find out in a thread some time ago if this would be amenable to automation (since this would remove my principal objection --I had the vain hope that CG could be made to reliably do so, accurately predicting where IQ differences would result and leaving those shaders full precision), and was assured by one of our devs (tho I'm forgetting which at the moment) that it just isn't --and that while doing each one isn't much effort, analyzing all of the instances in a game to see where to use it is.

Edit: Btw, do we have any instances to point at (screenies, preferably) showing the visual differences between the HDR implementations you're pointing at? Thousand words and all that. . .
 
Ailuros said:
Why do I sense a specific "allergy" against split precision lately? There are still quite a few cases where FP16 can be adequate and as long as it saves resources it can have a reason to survive even in the future.
Is that a rhetorical question?
I mean, you know "why" some folks pick on partial precision.
If corporation "X" says, for instance, that "correct" HDR effects can only be obtained by the method they implemented in their current hardware, all the "X" corporate-FUD-loving fans will self-righteously spew the marketing babble as if they were experts in the field.

That's the name of the game.

geo said:
Because that kind of very basic function, when it isn't dev transparent, is not a good thing.
Why?
Going from full to partial precision for a shader that can fit in FP16 is really trivial for developers.

As Ailuros said, it's not about asking devs to have two paths for every shader, like in the NV30 days, but just to support FP16 when it makes sense to.
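
To give a hedged idea of where that line sits (a hypothetical example, not from any shipping title): FP16 only has a 10-bit mantissa, so it's plenty for colours in [0,1] but not for texture coordinates on a large texture, where the quantisation becomes a visible texel or more:

Code:
// Hypothetical illustration of where FP16 is, and is not, adequate.
sampler2D detailMap;

float4 main(float2 uv : TEXCOORD0) : COLOR
{
    // Not adequate: near uv = 1.0 an FP16 value is spaced 1/2048 apart,
    // i.e. about one texel on a 2048x2048 texture - visible shimmering.
    // half2 badUV = (half2)uv;   // don't do this for large textures

    // Adequate: colour maths on the fetched texel is fine at half precision.
    half3 texel  = (half3)tex2D(detailMap, uv).rgb;
    half3 tinted = texel * half3(1.0, 0.9, 0.8);
    return float4(tinted, 1.0);
}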
geo said:
I frankly don't understand the HDR implementation options well enough yet to say that about it. . .but maybe I would if I did. And HDR is usually an option for the gamer to turn on or off in his app control panel
Err... Explain, Geo!
 
Ailuros said:
Why do I sense a specific "allergy" against split precision lately? There are still quite a few cases where FP16 can be adequate and as long as it saves resources it can have a reason to survive even in the future.
I get the impression that Xenos has given us a hint as to MS's thoughts about the longevity of it all.
 
Ailuros said:
Why do I sense a specific "allergy" against split precision lately? There are still quite a few cases where FP16 can be adequate and as long as it saves resources it can have a reason to survive even in the future.

It's probably because the NV3x series left a sour taste after its introduction... if it weren't so bad, it probably would have had a better reception.

The message shouldn't have been "it's going to kill performance if it is used extensively"; it should simply have been "use full FP where appropriate". If the performance differential had been acceptable, it wouldn't have mattered as much. However, it wasn't, and it will probably be referred to like that for a while.
 
http://www.beyond3d.com/forum/showpost.php?p=683577&postcount=616

Here Nick kindly identified a 3DMk06 shader that uses dynamic branching.

Additionally, sections of the code use partial precision (_PP at the end of the instruction name).

What's interesting, in my view, is:
  • clearly this is an optimisation for NVidia hardware - and who knows whether the driver will even respect this, or instead just change every instruction to _PP
  • the complexity in identifying the portions of shader code that can withstand _PP
To be honest, if you're a half-decent programmer, then an awareness of precision is right there at the time the algorithm is cast and the data structures formed, so I don't think it's actually that difficult to say "these variables are _PP". It's not much different from choosing texture formats, I guess.
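
As a concrete (and hypothetical - this isn't the 3DMark06 source) sketch of what "these variables are _PP" amounts to at the HLSL level: declare the intermediates half, and the compiler targeting ps_2_x/ps_3_0 tags the instructions they feed with the _pp modifier visible in the disassembly Nick posted.

Code:
// Hypothetical snippet: the half declarations are the whole "decision".
// Compiled for ps_3_0, the output is something along the lines of
//     texld  r0, v0, s0
//     mul_pp r0.xyz, r0, c0
//     mul_pp r0.xyz, r0, v1
// i.e. the _pp modifier lands on the instructions fed by half variables.
sampler2D baseMap;
float3 tintColour;   // set by the application

float4 main(float2 uv : TEXCOORD0, float3 vertLight : TEXCOORD1) : COLOR
{
    half3 base = (half3)tex2D(baseMap, uv).rgb;
    half3 lit  = base * (half3)tintColour * (half3)vertLight;
    return float4(lit, 1.0);
}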

I can imagine the lead graphics engine developer will have laid down the parameters for _PP, too - so the code monkeys can get on without having to think too much.

I suppose where it might get interesting is in artist-configured shaders - but again I imagine the lead dev will have identified the conditions under which precision is at risk.

Arguably, I suppose, it would be nice as a dev to be able to think "FP32 all the way, no need to worry about the precision of my variables" - like when I used to program finance stuff I knew my money fields were always 15.4 and life was simple, I just had to be careful about when I rounded, etc. etc.

Jawed
 
Vysez said:
Why?
Going from full to partial precision for a shader that can fit in FP16 is really trivial for developers.


You're going to make me find that thread, aren't you? :LOL:

As Ailuros said, it's not about asking devs to have two paths for every shader, like in the NV30 days, but just to support FP16 when it makes sense to.

Huh? Why would they need two paths? ATI hardware just ignores _pp, doesn't it?

Err... Explain, Geo!

What am I explaining again? That shaders are a basic part of modern games and HDR is still at the optional stage? How many games have HDR? What percentage of gamers have it turned on when they play those games (I do tho).
 
Dave Baumann said:
I get the impression that Xenos has given us a hint as to MS's thoughts about the longevity of it all.

Not long at all apparently.


Jawed said:
What's interesting, in my view, is:

* clearly this is an optimisation for NVidia hardware - and who knows whether the driver will even respect this, or instead just change every instruction to _PP
* the complexity in identifying the portions of shader code that can withstand _PP

Well... I presume most people associate PP with NVidia.
 