How accurate is/should 3D hardware be?

Well - as I was suggesting, we first need to eliminate rounding and precision errors. If I have sqrt(2), I want it to still be sqrt(2) after various math operations, not an approximation of that value.

As I said, different approaches in hardware and low-level software are necessary to get the accuracy we need. I would love to have a fast solution where I don't have to think about precision losses every time.

Until then we will see many workarounds and a lot of programmers fighting rounding errors. HDR and floating-point textures are nice for now, but they don't solve the essential problem.

Probably I'm wrong and it will all be good and shiny with (what?) 1024-bit internal precision!?
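A quick illustration (Python here just for convenience) of the kind of rounding error I mean - even trivial decimal values can't be represented exactly in binary floating point, and the errors accumulate:

```python
# 0.1 has no finite binary representation, so each addition rounds a little.
total = 0.0
for _ in range(10):
    total += 0.1

print(total)         # not exactly 1.0
print(total == 1.0)  # False
```

Ten additions of 0.1 already miss 1.0; in a long shader or physics pipeline, thousands of such operations compound.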

Just a few words about triangles being used in today's cg:
Triangles are _SO_ old, and I can't believe that no one has come up with a better (faster, more precise, more natural, easier to use, ...) idea until now. I know NVIDIA tried to use quads as primitives (what a bad idea!), but there aren't any totally different approaches. No - don't bring up voxels - they are still too expensive and haven't been shown to be more realistic than polygons and textures. Besides that, the real problem with voxels IMHO is displaying them on a flat surface, i.e. a monitor.
 
Mordenkainen said:
Sorry, I misunderstood you then. OTOH, any particular reason you have in mind on why the hardware/driver should override what the developer wants?
Hmm, it looks like you misunderstood my post.

I want the opposite.
 
How accurate would your results be with FP math on a Mac using GCC compared to an Intel platform using a Microsoft compiler?
 
nelg said:
How accurate would your results be with FP math on a Mac using GCC compared to an Intel platform using a Microsoft compiler?

Depends on the language, code, and compiler. For example, "long double" on GCC maps to the 80-bit Intel FP format, and may map to PPC's 128-bit FP format. On some versions of Microsoft's compiler it maps to 80-bit Intel, and on others it maps to 64-bit.

In Java, FP arithmetic gets you 80-bit registers on Intel and 128 bits on PowerPC, unless you use the "strictfp" qualifier on class/method declarations, in which case you get standard IEEE FP32/64 semantics for float/double calculations.

However, strictfp in Java dramatically reduces the performance of the generated code, so obviously the relaxed FP rules allow the optimizer more choices on Intel.

But it just goes to show how internal precision can cause different results and how requiring identical intermediate precision can sabotage hardware implementation flexibility.
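As a rough sketch of that effect (using Python's decimal module to stand in for "internal precision", since its context precision is adjustable, unlike hardware float registers):

```python
from decimal import Decimal, getcontext

def one_third_times_three(digits):
    # Same expression, evaluated with different intermediate precision.
    getcontext().prec = digits
    x = Decimal(1) / Decimal(3)
    return x * 3

low = one_third_times_three(7)    # 0.9999999
high = one_third_times_three(28)  # 0.9999999999999999999999999999
print(low, high)                  # identical source code, different results
```

The same expression yields different answers purely because the intermediate precision differs - exactly the Intel-80-bit vs strict-IEEE-64-bit situation described above.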
 
ohNe22 said:
Well - as I was suggesting, we first need to eliminate rounding and precision errors. If I have sqrt(2), I want it to still be sqrt(2) after various math operations, not an approximation of that value.
You can get sqrt(2) +/- about 0.5 units in the last place (ulp) - that's the best we can give you. Since sqrt(2) is irrational, no fixed-point or floating-point format will ever give you exactly sqrt(2). As such, you can also never rely on sqrt(2)*sqrt(2) == 2 in floating-point arithmetic, regardless of precision.
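This is easy to check; for instance, in Python (IEEE double precision), squaring the best possible sqrt(2) misses 2 by about one ulp:

```python
import math

s = math.sqrt(2.0)   # correctly rounded: within 0.5 ulp of the true sqrt(2)
sq = s * s

print(sq)            # slightly above 2.0
print(sq == 2.0)     # False
print(sq - 2.0, math.ulp(2.0))  # the error is on the order of one ulp of 2.0
```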

Also, the more accurate you want your answer to be, the more transistors, power, and clock cycles need to be thrown at the problem. For example, doing 1/x exactly according to the IEEE 754 spec (single or double precision) is about 2-4x slower and more expensive in hardware than an approximation that sometimes gives errors of +/- 1 ulp.

Rather than GPUs specifying exact operation, there is actually a trend where CPUs allow you to give up exactness for speed - the SSE instruction set defines instructions such as RCPPS and RSQRTPS, which do ultra-fast 1/x and 1/sqrt(x) calculations whose results aren't exactly defined, only guaranteed to lie within a relative error bound of about 1 in 3000.
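The same speed-for-exactness trade shows up in software, too. As an illustration (a sketch of the well-known "fast inverse square root" bit trick, not the SSE instruction itself), here is an approximate 1/sqrt(x) on 32-bit floats with a small, bounded relative error:

```python
import struct

def approx_rsqrt(x):
    """Approximate 1/sqrt(x) for x > 0 via an integer bit trick,
    refined by one Newton-Raphson step (relative error well under 1%)."""
    i = struct.unpack('<I', struct.pack('<f', x))[0]  # float bits as int
    i = 0x5f3759df - (i >> 1)                         # magic initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]  # int bits as float
    return y * (1.5 - 0.5 * x * y * y)                # one Newton step

print(approx_rsqrt(4.0))  # close to 0.5, but not exact
```

One multiply-heavy refinement step buys roughly three decimal digits; getting a correctly rounded result would take several more, which is exactly the cost trade-off described above.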
 
@arjan de lumes: That's not the point!
I know that in floating-point arithmetic I cannot get better approximations without spending more time / more transistors. That's why I was talking about numeric approaches where sqrt(2)*sqrt(2) == 2 is always true - even if the math is more complex and/or consists of a lot more operations.
Such solutions won't give you precision errors (well - at least not in theory), because the operations are carried out exactly before a result is printed / used. So you only get a floating-point approximation at the very end of the whole process.
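To make concrete what I mean (a toy sketch of my own, not an existing library): if you keep numbers in a symbolic form a + b*sqrt(2) with exact rational a and b, then sqrt(2)*sqrt(2) == 2 holds exactly, and rounding only happens in the final conversion to a float:

```python
from fractions import Fraction

class QSqrt2:
    """Exact numbers of the form a + b*sqrt(2) with rational a, b."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, other):
        # (a + b*sqrt(2)) * (c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)
        return QSqrt2(self.a * other.a + 2 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        if not isinstance(other, QSqrt2):
            other = QSqrt2(other)
        return self.a == other.a and self.b == other.b

    def __float__(self):
        # The only place an approximation enters: the final conversion.
        return float(self.a) + float(self.b) * 2.0 ** 0.5

SQRT2 = QSqrt2(0, 1)
print(SQRT2 * SQRT2 == 2)  # True, exactly
```

Of course each "cheap" multiply has become several exact rational operations - the extra complexity I mentioned.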
 
ohNe22 said:
Just a few words about triangles being used in today's cg:
Triangles are _SO_ old, and I can't believe that no one has come up with a better (faster, more precise, more natural, easier to use, ...) idea until now. I know NVIDIA tried to use quads as primitives (what a bad idea!), but there aren't any totally different approaches. No - don't bring up voxels - they are still too expensive and haven't been shown to be more realistic than polygons and textures. Besides that, the real problem with voxels IMHO is displaying them on a flat surface, i.e. a monitor.
Curved surfaces, why not?
Destructable Environments...The next Level? #25
 
ohNe22 said:
... I was talking about numeric approaches where sqrt(2)*sqrt(2) == 2 is always true ...
IMO, an algebraic number cannot really be represented numerically. It can be represented by the integer coefficients of a polynomial *, but then you have to deal with variable-length arrays, greatest-common-divisor computations (to "normalize" the coefficients), etc. So it's not a real solution. :)

EDIT: * and an additional number to select the root, if the polynomial has multiple roots
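The normalization step mentioned above can be sketched like this (a hypothetical helper of my own: divide the integer coefficients by their greatest common divisor and fix the leading sign, so equal algebraic numbers get one canonical polynomial):

```python
from math import gcd
from functools import reduce

def normalize(coeffs):
    """Divide integer polynomial coefficients by their GCD,
    making the leading coefficient positive."""
    g = reduce(gcd, (abs(c) for c in coeffs), 0)
    if g == 0:
        return list(coeffs)
    sign = -1 if coeffs[0] < 0 else 1
    return [sign * c // g for c in coeffs]

# 2x^2 - 4 and x^2 - 2 both have sqrt(2) as a root;
# after normalization they compare equal.
print(normalize([2, 0, -4]) == normalize([1, 0, -2]))  # True
```

Even this simple helper already needs variable-length lists and repeated GCDs - in hardware, far from free.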
 
ohNe22 said:
@arjan de lumes: That's not the point!
I know that in floating-point arithmetic I cannot get better approximations without spending more time / more transistors. That's why I was talking about numeric approaches where sqrt(2)*sqrt(2) == 2 is always true - even if the math is more complex and/or consists of a lot more operations.
Are you sure you really mean "numeric" and not "algebraic"? You aren't going to get "Maple" or "Mathematica" implemented in silicon (and certainly not in graphics chips!) for the foreseeable future.
 
Simon F said:
Are you sure you really mean "numeric" and not "algebraic"? You aren't going to get "Maple" or "Mathematica" implemented in silicon (and certainly not in graphics chips!) for the foreseeable future.

Maybe you're right ... I'm German, so English isn't my native language ;-) and I'm probably not using the correct word.
But I think it is clear that I was talking about the representation of numbers in general. And floating point in today's CPUs/GPUs isn't the best (or last) way to go.

/Edit
One word about voxlab: IIRC they use voxels internally but send triangles to the GPU, where everything is rendered - so I wouldn't call it a "true"/"complete" voxel engine. "True" voxel-based rendering (and physics) can only be done in software today. There are different algorithms that do a good job, but they can't be called fast.

The "higher order surfaces" thread was a nice read, but it shows the same problem as with voxels. Internally you can use all this amazing and cool stuff, but when it comes down to the rendering/rasterization process, everything is tessellated to triangles or polygons.
 
ohNe22 said:
The "higher order surfaces" thread was a nice read, but it shows the same problem as with voxels. Internally you can use all this amazing and cool stuff, but when it comes down to the rendering/rasterization process, everything is tessellated to triangles or polygons.
That doesn't matter. You can always tessellate to an appropriate level so that the surface looks smooth.

As I've said in the past, even if you stayed with the polynomial representation and solved for the pixel/ray-surface intersections, you'd have to use something like the Newton-Raphson method, in which case you approximate the curve by small planar sections anyway!
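A minimal Newton-Raphson sketch of that point (the function and starting guess are purely illustrative): every iteration replaces the curve by its tangent - a linear approximation - exactly as described.

```python
def newton(f, df, t0, tol=1e-12, max_iter=50):
    """Newton-Raphson root finding: repeatedly follow the tangent line of f."""
    t = t0
    for _ in range(max_iter):
        step = f(t) / df(t)
        t -= step
        if abs(step) < tol:
            break
    return t

# Illustrative intersection: where does the curve y = t^2 reach height 2?
# Solve f(t) = t^2 - 2 = 0, i.e. approximate sqrt(2) numerically.
t_hit = newton(lambda t: t * t - 2.0, lambda t: 2.0 * t, 1.0)
print(t_hit)  # converges to sqrt(2) in a handful of tangent steps
```

So even "exact" curved-surface rendering bottoms out in a sequence of linear approximations.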
 
ohNe22 said:
... One word about voxlab: IIRC they use voxels internally but send triangles to the GPU, where everything is rendered - so I wouldn't call it a "true"/"complete" voxel engine. "True" voxel-based rendering (and physics) can only be done in software today. ...

BTW it's called Voxlap, and it's a pure SW voxel renderer. So, I think, you should take a look at it again. :)
 
Mate Kovacs said:
BTW it's called Voxlap, and it's a pure SW voxel renderer. So, I think, you should take a look at it again. :)

Right - I had in mind that Voxlap (bad "b" ;-) ) used DirectX, and I assumed they used Direct3D for the rendering part. But they only use DirectDraw, DirectSound and DirectInput, if I understand it correctly.
So I have to agree that this thing runs very nicely. I saw it a year or so ago running smoothly on a midrange computer - I will try it at home today (I "only" have a Mac here at work). Perhaps I will play around with the code if I have enough spare time :)
 