How accurate is/should 3D hardware be?

Reverend

A long time ago, in a galaxy far, far away, I posted the following (I was thinking about this recently, did some searching here and found exactly what I wanted to ask in an old kinda-heated thread):
Again it all comes down to whether you see 3D hardware as a deterministic computational device which produces well-defined output for any input, or as just some black box that you feed polygons into to produce some sort of random approximation of your scene.
For some time, back in the old days, the latter case happened quite frequently, mostly due to driver characteristics as well as intrinsic hardware characteristics.

The assumption here is that programmers (of, usually, games, since that's the most profitable industry to talk about for the IHVs) are quite capable fellas.

Should 3D hardware be altruistic, leaving the output spewed by hardware equal to what it's being instructed to do, or should hardware engineers (and their driver engineer colleagues) attempt to be "teachers"?
 
Reverend, to produce a well-defined output we need a definition first. As long as we don't have such a definition, it is impossible to build accurate 3D hardware.
 
It all comes down to how much abstraction you want. One could write a 3D API that not only specifies what must be done, but exactly how it is done, down to the level of implementation in HW, and performance levels, but then one would end up with a console.

If you want to allow competitive differentiation, you've got to permit differences in output. Now, you can always specify things like minimal precision, et al, on internal computations, but that doesn't mean one won't get different results. For example, floating point is not associative: (a+b) + c != a + (b+c), so even if the spec says "you must evaluate F(u,v) at FP32" it may or may not impose strict guidelines on how F(u,v) is evaluated.

The 3D pipeline is so complex, I think guaranteeing bit-for-bit accuracy in rendering is a pipe dream.
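To make that concrete, here's a minimal standalone C++ sketch of my own (not from any spec or driver) of how two implementations can both honour "evaluate at FP32" and still disagree, simply because one fuses the multiply-add and the other rounds twice:

Code:
// Two ways to evaluate a*b + c, both entirely in FP32: the fused version rounds
// once, the separate version rounds twice, so a conforming "FP32" implementation
// could legitimately produce either result.
// (Build with -ffp-contract=off if your compiler contracts a*b + c into an FMA
// on its own, which would hide the difference.)
#include <cmath>
#include <cstdio>

int main() {
    const float eps = std::ldexp(1.0f, -23);   // 2^-23, one ulp of 1.0f
    const float a = 1.0f + eps;
    const float b = 1.0f - eps;
    const float c = -1.0f;

    float separate = a * b + c;          // product rounds to 1.0f, result is 0
    float fused    = std::fma(a, b, c);  // exact product kept, result is -2^-46

    std::printf("separate: %g\nfused   : %g\n", separate, fused);
    return 0;
}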
 
DemoCoder said:
For example, floating point is not associative: (a+b) + c != a + (b+c),

:oops: Such scanty math background as I have just crumbled to dust with that prop knocked out.
 
DemoCoder said:
For example, floating point is not associative: (a+b) + c != a + (b+c),


geo said:
:oops: Such scanty math background as I have just crumbled to dust with that prop knocked out.
Just to save confusion, there are a lot of cases where that will hold, but also others where it doesn't. For example, a = 2^32, b = -2^32 and c = 2^-10 shows why it doesn't hold.
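Plugging those exact numbers into a quick FP32 sketch (my own, just to illustrate the point) shows the two groupings coming out differently:

Code:
// The example above, verbatim, in FP32. 2^-10 is far below one ulp of 2^32
// (which is 2^9 = 512), so it vanishes when added to -2^32 first.
#include <cmath>
#include <cstdio>

int main() {
    const float a = std::ldexp(1.0f,  32);   //  2^32
    const float b = -a;                      // -2^32
    const float c = std::ldexp(1.0f, -10);   //  2^-10

    float left  = (a + b) + c;   // 0 + 2^-10 -> 0.0009765625
    float right = a + (b + c);   // b + c rounds back to -2^32, so the sum is 0

    std::printf("(a+b)+c = %g\na+(b+c) = %g\n", left, right);
    return 0;
}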
 
Add this to DC's post and you have your answer.

Entropy said:
A small comment from the scientific computing field.

Code that critically depends on the minutiae that Reverend brings up is effectively broken. You should never, ever write anything which makes those kinds of assumptions.

Assuming rounded rather than truncated results is pretty much as far as you can hope for. If you _need_ control, you should explicitly code for it, and never leave it to the system to take care of it for you.

Now, in scientific computing, codes tend to have very long lives and get ported all over the place, and are thus probably a worst case, but generally the experience should carry over.
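As a sketch of what Entropy's "explicitly code for it" can look like (my own example, assuming plain C++ with no fast-math flags), Kahan compensated summation carries an explicit error term instead of trusting the platform's accumulation order. On a typical FP32 build the naive loop below drifts visibly while the compensated sum stays essentially exact:

Code:
// Kahan compensated summation: the low-order bits lost in "sum + y" are
// recovered into `comp` and fed back in on the next iteration, so the answer
// no longer depends on how sloppy the platform's accumulation happens to be.
// (An aggressive fast-math optimizer may simplify the compensation away, which
// is exactly the kind of thing you have to watch for.)
#include <cstdio>
#include <vector>

float kahan_sum(const std::vector<float>& xs) {
    float sum  = 0.0f;
    float comp = 0.0f;                // running compensation term
    for (float x : xs) {
        float y = x - comp;           // apply the correction carried over
        float t = sum + y;            // low bits of y are lost here...
        comp = (t - sum) - y;         // ...and recovered here
        sum = t;
    }
    return sum;
}

int main() {
    std::vector<float> xs(1000000, 0.1f);
    float naive = 0.0f;
    for (float x : xs) naive += x;
    std::printf("naive: %f\nkahan: %f\n", naive, kahan_sum(xs));
    return 0;
}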
 
DemoCoder said:
One could write a 3D API that not only specifies what must be done, but exactly how it is done, down to the level of implementation in HW, and performance levels, but then one would end up with a console.
And the PC would then effectively be as pleasant to everyone as a console? What's wrong with that? It would then change the PC 3D landscape.

What you're saying is that the 3D APIs really do nothing other than to specify general rules while allowing hardware (and hence IHVs) to interpret their version of "the truth".

We want to cut the costs of making games and getting games out quickly (both being related to each other). *That* should be the ultimate aim of an API, not to define "flexible rules".

Imagine a programmer knowing that his (a+b)+c results in identical outputs for all hardware meeting a stringent set of API rules.
 
It would then change the PC 3D landscape.

Yes, by destroying it via commodification. If you produce a standard that leaves no room for variation, you eventually end up with exactly that: no variation. Instead, IHVs would be left with nothing more than the ability to compete on price, and that ultimately leads to commodification.

What you're saying is that the 3D APIs really do nothing other than to specify general rules while allowing hardware (and hence IHVs) to interpret their version of "the truth".

We don't need this kind of hyperbole, but to use your analogy, what I'm saying is that the 3D API defines, at a high level, what the output should be, but does not specify the exact step-by-step process by which the output is generated.

The only caveat is that expecting bit-for-bit "truth" in something as complex as a 3D pipeline would require an inordinate number of assertions about how things get computed, and would end up restricting implementation flexibility.

Let's take, for example, anti-aliasing. How would you argue the final color of a pixel should be decided? Would you mandate AA, mandate exactly how many samples are to be taken, mandate the distribution of the samples, mandate how they are downsampled, et al?

Unless you do so, you could never achieve the "bit-for-bit" truth you are hoping for, where, if a scene is rendered on GPU A and GPU B, a comparison of the framebuffer outputs produces equality.

But if you did produce a 3D API that strictly mandated the AA algorithm to achieve pixel-identical results, you effectively kill innovation in AA, and probably kill any incentive for IHVs to invest significant resources in improving it beyond just meeting their spec, and reducing costs.
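As a toy illustration of that (my own sketch, not any IHV's actual algorithm), here are two perfectly reasonable "4x AA" implementations that disagree on the final pixel purely because the sample positions and resolve filter are left unspecified:

Code:
// Two hypothetical "4x AA" patterns resolving the same pixel: an edge at x = 0.3
// crosses the pixel, one GPU uses an ordered grid, the other a rotated grid,
// and both box-filter their samples. Both are legal; the outputs differ.
#include <cstdio>

struct Sample { float x, y; };

// The geometry covers everything left of x = 0.3 within this pixel ([0,1) x [0,1)).
bool covered(Sample s) { return s.x < 0.3f; }

float resolve(const Sample* pattern, int n) {
    float sum = 0.0f;                               // box-filter (plain average) resolve
    for (int i = 0; i < n; ++i) sum += covered(pattern[i]) ? 1.0f : 0.0f;
    return sum / n;
}

int main() {
    const Sample ordered[4] = { {0.25f, 0.25f}, {0.75f, 0.25f}, {0.25f, 0.75f}, {0.75f, 0.75f} };
    const Sample rotated[4] = { {0.375f, 0.125f}, {0.875f, 0.375f}, {0.125f, 0.625f}, {0.625f, 0.875f} };

    std::printf("ordered grid pixel value: %g\n", resolve(ordered, 4));  // 0.5
    std::printf("rotated grid pixel value: %g\n", resolve(rotated, 4));  // 0.25
    return 0;
}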
 
DemoCoder said:
<a lot of Duh! stuff>
I wasn't thinking of an API that should define strict rules about stuff a programmer can't control, given what we know is *more* important to programmers. Which isn't AA, yes? Why is this the case? Why haven't the IHVs spent the silicon "necessary" in their next-gen chips for "better" (if they can come up with it... which ain't so hard... we just need more samples, simple as it is) AA thus far?

Why do we think we have X number of rules about AA in an API's spec when we have X+20 rules about floating point?

Since you (and probably I) have diverted from the main theme of this thread of mine, what *is* most important about PC 3D graphics right now? IOW, what's wrong with it?
 
My point is, two different architectures are not going to produce bit-for-bit identical results, and if you create a spec that legislates this, I think it will be so onerous as to damage competitiveness. Right now, NV and ATI are fiddling around with AF, AA, ALUs, and code evaluation order (both driver and HW), and the result is near perceptual equivalence, which is what we need, because a GPU's purpose is to render visuals, not compute your bank account.
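In other words, the useful comparison is tolerance-based rather than bit-exact; something like this hypothetical helper (my own sketch, not any existing test tool):

Code:
// Treat two RGBA8 framebuffers as equivalent when every channel is within a
// small tolerance, instead of demanding bit-for-bit equality.
#include <cstdint>
#include <cstdlib>
#include <vector>

bool perceptually_equal(const std::vector<uint8_t>& fbA,
                        const std::vector<uint8_t>& fbB,
                        int tolerance = 2) {
    if (fbA.size() != fbB.size()) return false;
    for (size_t i = 0; i < fbA.size(); ++i)
        if (std::abs(int(fbA[i]) - int(fbB[i])) > tolerance) return false;
    return true;
}

int main() {
    std::vector<uint8_t> a = {10, 200, 30, 255};
    std::vector<uint8_t> b = {11, 199, 30, 255};   // off by one 8-bit step per channel
    return perceptually_equal(a, b) ? 0 : 1;       // equivalent under the tolerance
}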

If I were to say what was lacking, render-wise, I'd say much higher levels of AA and real temporal AA. Algorithmically, what's lacking is global illumination, and I don't see GPUs accelerating GI algorithms very well right now. That is the major reason why things look fake. Stuff is either shiny, or it's fake diffuse with soft shadows. People are going googly-eyed over ATI's Toy Shop, but I felt it had that fake look as much as any other 3D demo, in comparison to film CGI.

That's why I think we'll start to see image based lighting techniques in games, as a replacement/addin for the radiosity technique you see used today for static geometry.
 
Well, I don't think that any of today's hardware or software is really "exact" or "accurate". This won't change at all in the future (even with more precise solutions).

A good idea (implemented in software at the moment) is exact numeric approaches, which are absolutely accurate, such as EXACUS for example:

http://www.mpi-sb.mpg.de/projects/EXACUS/

It would be nice to see hardware support for those techniques to speed things up. For now, all such exact numeric approaches just aren't high-performance solutions and are totally inadequate for (PC) graphics.
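For the curious, a tiny sketch of my own (vastly simpler than anything EXACUS actually does, with a toy `Rational` type I've made up) of why exact arithmetic is "absolutely accurate" where floating point is not:

Code:
// A toy exact-rational type: because numbers are kept as integer numerator and
// denominator, no rounding ever happens, so 1/10 + 2/10 really equals 3/10,
// while the floating-point equivalent 0.1 + 0.2 == 0.3 does not hold.
#include <cstdint>
#include <cstdio>
#include <numeric>   // std::gcd (C++17)

struct Rational {
    int64_t num, den;                     // den assumed positive in this sketch
    Rational(int64_t n, int64_t d) {
        int64_t g = std::gcd(n < 0 ? -n : n, d);
        num = n / g;
        den = d / g;
    }
};

Rational operator+(Rational a, Rational b) {
    return Rational(a.num * b.den + b.num * a.den, a.den * b.den);
}
bool operator==(Rational a, Rational b) { return a.num == b.num && a.den == b.den; }

int main() {
    std::printf("doubles  : %s\n", (0.1 + 0.2 == 0.3) ? "equal" : "not equal");

    Rational r = Rational(1, 10) + Rational(2, 10);
    std::printf("rationals: %s\n", (r == Rational(3, 10)) ? "equal" : "not equal");
    return 0;
}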
 
From my understanding this thread is obviously aimed more at the precision department than anything else, but I'd like to make another small parenthesis here regarding anti-aliasing in general. Since most recent accelerators are optimized for multisampling, and thus for polygon edge/intersection AA, I don't necessarily see a higher sample density being an absolute priority over more advanced filtering algorithms and fewer filtering-related optimizations in drivers.

As long as the amount of polygon interior data is vastly higher than the amount of polygon edge/intersection data, I don't need 8x, 16x, 32x or more MSAA (albeit naturally always nice to have); first and above all I'd want to see far better texture filtering. With the resolutions we can use today, MSAA is the smallest concern. Of course there could be supersampling-based alternative methods in the future, but I'd still like to know where we'd get both the framebuffer and bandwidth resources from, especially with constantly increasing shader amounts and/or float HDR.

As for floating point operations in general, I guess that GPUs and their pipelines need to get as flexible as they were with fixed-function ops; while it was/is possible to use a high number of fixed-function calls in parallel and still add AA/AF to the mix, today's GPUs are still limited to one or maybe two floating point ops per clock, and adding AA/AF (wherever possible) does make the going a lot tougher.

All IMHLO of course as always ;)
 
As long as hardware is still based on the triangle primitive and it still uses images it will never be accurate. Reality doesn't cut corners (though human vision might), and everything would have to be represented at the atomic level, with vertices and light equations at incredibly ridiculous levels of precision, to recreate surfaces and "textures", and even then it would never be accurate.
 
Mordenkainen said:
As long as hardware is still based on the triangle primitive and it still uses images it will never be accurate. Reality doesn't cut corners (though human vision might), and everything would have to be represented at the atomic level, with vertices and light equations at incredibly ridiculous levels of precision, to recreate surfaces and "textures", and even then it would never be accurate.
I don't think it would ever need to go to such a level of detail to convince somebody that what they're looking at is real - after all, we do it every time we look at a nature program on a very pixellated television. You can get by with all kinds of short-cuts to fool the brain into thinking it's looking at something it isn't; take an mp3 file, for example - it's missing various parts of the original sound waveforms, but (at the appropriate bitrate, of course) one doesn't notice this.
 
Neeyik said:
I don't think it would ever need to go to such a level of detail to convince somebody that what they're looking at is real - after all, we do it every time we look at a nature program on a very pixellated television. You can get by with all kinds of short-cuts to fool the brain into thinking it's looking at something it isn't; take an mp3 file, for example - it's missing various parts of the original sound waveforms, but (at the appropriate bitrate, of course) one doesn't notice this.

That's exactly my point though I could have made it clearer; that's where my "human vision cuts corners" comes from.
 
Mordenkainen said:
As long as hardware is still based on the triangle primitive and it still uses images it will never be accurate. Reality doesn't cut corners (though human vision might), and everything would have to be represented at the atomic level, with vertices and light equations at incredibly ridiculous levels of precision, to recreate surfaces and "textures", and even then it would never be accurate.
Er, I wasn't talking about approaching realism. I'm talking about getting an output that I expect.
 
Reverend said:
Er, I wasn't talking about approaching realism. I'm talking about getting an output that I expect.

Sorry, I misunderstood you then. OTOH, any particular reason you have in mind why the hardware/driver should override what the developer wants? It's bad enough when I have to debug my own code, let alone trying to find out if there is an API/driver discrepancy. Even in cases of "brilinear", or of using lower precision because the output is the same, the driver doesn't need to do anything behind the devs' backs: that's why IHVs have ISV relations teams. Development is when these issues need to be brought up, and that is where IHVs should be "teachers". More than "altruistic", the hardware/driver platform needs to be predictable.

Older games that are no longer supported by the ISVs are a different story but these are usually already fast enough that any driver tinkering should concern itself with fixing bugs anyway.
 