Two Questions that are bothering me

Entropy said:
Honestly, are there cases where 32-bit floats would cause unnatural behaviour in a game setting?

Entropy

Not entirely certain...but I do know that the most noticeable problem with using 32-bit floats would be with collision detection and deflections.

Even if the path is parabolic, all trajectories in-game are generally computed iteratively, since you need the object's position every frame. It is also almost always possible to center the mathematical errors that result from the iterative method about zero (iterative methods are only 100% accurate if the time step is zero, but since it has to be nonzero for realistic calculations, there will usually be errors). I'm not certain it is possible to center precision errors about zero. I'm reasonably sure that such errors are additive, meaning that if you do 100 iterations, you'll get twice the error of 50 iterations. Centered errors drop off much more quickly, and I'm pretty sure that they go up with the square root of the number of iterations; thus, with centered errors, 100 iterations will have twice the error of 25 iterations.
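The additive-versus-centered distinction can be shown with a toy quantizer. This is a sketch, not real float32 arithmetic: we integrate a falling object with Euler steps and snap the position to a fixed grid after every step, and all constants (grid size, gravity, time step) are illustrative. Truncation always rounds the same direction, so its error grows linearly with the step count; round-to-nearest errors flip sign and largely cancel.

```python
# Toy demonstration (stdlib only) of biased vs. centered rounding error.
# Position is quantized to a fixed grid each step, a crude stand-in for
# limited floating-point precision in an iterative trajectory.
import math

def simulate(rounder, steps=1000, dt=0.001, g=9.8, grid=1e5):
    v, x = 0.0, 0.0
    for _ in range(steps):
        v += g * dt
        x = rounder((x + v * dt) * grid) / grid   # quantize every step
    return x

reference = simulate(lambda u: u)   # no quantization
truncated = simulate(math.floor)    # always rounds down: biased error
centered  = simulate(round)         # round to nearest: centered error

err_trunc = abs(reference - truncated)   # grows linearly with step count
err_round = abs(reference - centered)    # far smaller: signs cancel
```

Running this shows the truncated (biased) result drifting visibly away from the reference while the nearest-rounded (centered) result stays close, which is the effect described above.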

As a side note, it would be really nice if 3D hardware started working to center the errors about zero (errors both add and subtract, with equal probability).

Of course, since I haven't actually done game engine design, I can't be absolutely certain whether or not 32-bit is enough. I'm just going by my experience with rather simple physics simulations I've done in classes so far. One thing I am pretty sure of, though, is that the higher your framerate, the higher-precision physics calculations you'd better be using.
 
Here's a good demo of well-worked-out physics (it's the rigid body demo). It's quite fascinating to play with and very realistic.

One good simulation that's easy to implement is the cart and pendulum simulation, which uses forward and backward momentum combined with an approximation of gravity in action. A good description is here. The maths can be a little heavy, but it's possible to get a good run with just a nudge controller. We had to write a neural network and a fuzzy logic implementation at Uni. Black-box reinforcement learning also works well here.
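A minimal version of that cart-and-pendulum setup can be sketched in a few lines. This uses the widely published cart-pole equations of motion with forward Euler integration and a bang-bang "nudge" controller; the masses, pole length, force magnitude, and function names here are all illustrative assumptions, not from the description linked above.

```python
# Cart-pole with forward Euler and a simple nudge (bang-bang) controller.
import math

GRAVITY, M_CART, M_POLE, L_HALF, DT = 9.8, 1.0, 0.1, 0.5, 0.02

def step(state, force):
    x, x_dot, th, th_dot = state
    total = M_CART + M_POLE
    tmp = (force + M_POLE * L_HALF * th_dot ** 2 * math.sin(th)) / total
    th_acc = (GRAVITY * math.sin(th) - math.cos(th) * tmp) / (
        L_HALF * (4.0 / 3.0 - M_POLE * math.cos(th) ** 2 / total))
    x_acc = tmp - M_POLE * L_HALF * th_acc * math.cos(th) / total
    # forward Euler: the nonzero-time-step iteration discussed earlier
    return (x + DT * x_dot, x_dot + DT * x_acc,
            th + DT * th_dot, th_dot + DT * th_acc)

def run(controlled, steps=400):
    state = (0.0, 0.0, 0.05, 0.0)   # pole starts slightly off vertical
    worst = 0.0
    for _ in range(steps):
        # nudge controller: push the cart toward the direction of lean
        force = 10.0 if state[2] + state[3] > 0 else -10.0
        state = step(state, force if controlled else 0.0)
        worst = max(worst, abs(state[2]))
    return worst

balanced_max = run(True)    # nudges keep the pole near upright
fallen_max = run(False)     # with no controller the pole falls over
```

Even this crude sign-based controller keeps the pole angle small, while the uncontrolled run falls over within a second or so of simulated time.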

Sorry, I seem to have slipped slightly off topic there :)
 
Chalnoth said:
Kristof said:
Developers can help out here by selecting better alpha test reference values; by using filtering you can create a soft-edged effect on punch-through textures. Punch-through is 0 or 1, visible or invisible; using filtering you can generate in-between values that create a soft gradient from visible to invisible. This can act as AA, but requires developer and artist work to avoid weird artefacts.

K~

It's nowhere near that complicated. All you need to do is change from an alpha test to an alpha blend and select, of course, the proper blend function.

Yeah, but what happens when you get close to the object? You can't possibly use textures that are so high res that you never get close enough for this to be a problem, especially when you are doing things like walking through grass. Leaves and grass look much better with a compare function than a fuzzy, transparent border, because they actually look like round polygon edges. Just look at the Nature demo in 3DM2K1.

I suppose you could say that this can be solved by using lots of polygons, but that's very unreasonable. Even if GPUs do get that powerful, the geometry bandwidth will be insane.

It is complicated if you want to do it properly. Using shaders you could switch from a compare to a blend with a steep-slope linear function (one of the GDC presentations talks about this). You could then adjust this slope with distance. But, as Kristof said, it's a lot of work.
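The steep-slope idea can be sketched in a few lines. The reference value, slope numbers, and function names here are illustrative assumptions; in a real engine this mapping would live in a pixel shader, with the slope driven by distance or mip level.

```python
# Sketch of replacing a hard alpha compare with a steep linear ramp.
def sharpened_alpha(alpha, ref=0.5, slope=8.0):
    """Map texture alpha through a steep linear ramp centered on `ref`.

    As slope -> infinity this recovers a hard alpha test (pure 0/1
    punch-through); slope = 1 recovers ordinary soft alpha blending.
    """
    return min(1.0, max(0.0, (alpha - ref) * slope + 0.5))

def slope_for_distance(distance, base=8.0):
    # up close -> steeper slope -> hard, compare-like edge;
    # far away -> gentler slope -> the soft ramp acts as edge AA
    return max(1.0, base / max(distance, 1e-6))
```

With a high slope, mid-range alpha values snap to fully opaque or fully transparent and only a narrow band around the reference blends, which is what keeps leaf and grass edges looking crisp up close.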

With more complicated shaders, you need more complicated solutions.

FAA and MSAA are just interim solutions until hardware gets faster. They will have very short life spans in the grand scheme of things. A few years ago, MSAA implementation was too expensive. A few years from now, SSAA will still reduce framerate a lot, but it will already be high enough that it doesn't matter. Or, in the simple blending cases where MSAA does work, SSAA will give more than sufficient performance.

As for alpha textures, it might be possible, but I don't see the point. It's a lot of work for a technique that apparently isn't going to be used in future games often, if at all.

Are you saying alpha textures aren't going to be used? Why not? People love alpha effects from weapons, explosions, glass, etc., so I don't see them going away. Even JC was talking about how some hardware doesn't target single-texture blending enough, concentrating too much on multitexture performance.
 
Mintmaster said:
Yeah, but what happens when you get close to the object? You can't possibly use textures that are so high res that you never get close enough for this to be a problem, especially when you are doing things like walking through grass. Leaves and grass look much better with a compare function than a fuzzy, transparent border, because they actually look like round polygon edges. Just look at the Nature demo in 3DM2K1.

It doesn't look that bad when you're very close. Since you're doing nothing other than applying bilinear filtering to the image, there is no more blurring there than you're used to seeing when looking closely at a texture on a wall.

And if you doubt that it's possible to have high enough resolution that you won't realistically get too close to start seeing this artifact, you haven't played UT with S3TC textures enabled.

I suppose you could say that this can be solved by using lots of polygons, but that's very unreasonable. Even if GPU's do get that powerful, the geometry bandwidth will be insane.

Yes, it will be a long while before alpha textures are done away with in favor of polys.

FAA and MSAA are just interim solutions until hardware gets faster. They will have very short life spans in the grand scheme of things. A few years ago, MSAA implementation was too expensive. A few years from now, SSAA will still reduce framerate a lot, but it will already be high enough that it doesn't matter. Or, in the simple blending cases where MSAA does work, SSAA will give more than sufficient performance.

No, they aren't short-term solutions, because the quality level will continue to increase. Into the future, we may have a choice between, say, 16x MSAA and 4x SSAA. Regardless of how fast future hardware gets, MSAA will always be able to produce superior edge AA quality at a given performance level, and MSAA + aniso will always produce superior overall quality at a given performance level.

Are you saying alpha textures aren't going to be used? Why not? People love alpha effects from weapons, explosions, glass, etc., so I don't see them going away. Even JC was talking about how some hardware doesn't target single-texture blending enough, concentrating too much on multitexture performance.

Weapons? Not sure what you mean there, but glass, explosions, and the like are all transparent objects where blending should be used anyway.

Also, the JC comment was, I believe, based on his use of shadowing, where improved single-texture fillrates would significantly speed up the shadow calculation times.
 
I don't think we'll ever see a physics engine on 3D cards. Or if it's there, it will be limited to extremely simple cases where the information is of some use to the vertex and pixel shaders (such as generating momentum values for elastic collisions); certain simple collisions in general actually shouldn't be too hard.

However, the full gamut of physics needed is not highly parallel (in fact it's pathologically serial), needs a full host of mathematical functions, and doesn't necessarily follow simple logic functions.

Tensor algebra is often needed (so your vector-space optimizations go out the window). Partial differential equation solvers are needed (which type depends on the problem), plus integrators, special functions, ways to deal with fluid mechanics, etc.

Either way, you end up needing something that looks like a CPU. For instance, in the orbit problem, no one actually writes out the force equations by hand. Instead they use many-body approximations (such as using potentials, sum over histories, etc.). The math involved is considerably different, and not necessarily reducible.

As far as Matlab is concerned, it is fast for what it's designed to do, but if you do an instruction count and multiply it by all the separate states and objects that need to be accounted for in a 3D scene, it will look like nearly a worst-case scenario compared to simply doing it at a lower level of abstraction.
 
An independent physics processor sounds to me like it might help console systems more than regular PCs.
 
Fred said:
Tensor algebra is often needed (so your vector-space optimizations go out the window). Partial differential equation solvers are needed (which type depends on the problem), plus integrators, special functions, ways to deal with fluid mechanics, etc.

One thing: I'm not sure that tensor algebra cannot be done with vector-space calculations. After all, the two most common tensors, first- and second-rank, can be represented as vectors and matrices. While it is true that higher-rank tensors are used in some situations, I doubt games will require things like working with stress-strain relations for non-linear, non-isotropic objects...

That, and most tensor operations that I've seen could possibly be done by building the tensors out of a series of vectors, and using a series of dot products to perform the multiplication.
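That point can be made concrete in a few lines: a second-rank tensor stored as a list of row vectors is applied with nothing but dot products, which is exactly the operation a vector-oriented unit is already good at. The pure-Python helpers and the example stress tensor below are illustrative, not from the discussion.

```python
# A second-rank tensor applied via row-wise dot products.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def apply_tensor(t, v):
    # t . v: one dot product per row of the tensor
    return [dot(row, v) for row in t]

# illustrative: a diagonal stress tensor acting on a direction vector
stress = [[2.0, 0.0, 0.0],
          [0.0, 1.0, 0.0],
          [0.0, 0.0, 0.5]]
traction = apply_tensor(stress, [1.0, 1.0, 1.0])
```

Higher-rank contractions reduce the same way: each contracted index becomes another layer of dot products over slices of the tensor.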

Regardless, I do think that the best thing for Physics processing in-game would probably just be a CPU with exceptional floating-point power. I don't think it's too much to ask for CPUs to handle scene management, AI, physics, and other miscellaneous game processing needs. All that we need is for game developers to increase the min spec required for their games on the CPU side so that they don't hesitate to use up more CPU power.
 
Chalnoth said:
No, they aren't short-term solutions, because the quality level will continue to increase. Into the future, we may have a choice between, say, 16x MSAA and 4x SSAA. Regardless of how fast future hardware gets, MSAA will always be able to produce superior edge AA quality at a given performance level, and MSAA + aniso will always produce superior overall quality at a given performance level.

Just like 16-bit color and 32-bit color? I think the analogy applies.
 
Colourless said:
darkblu, I believe Chalnoth is talking about the impossible, or very hard, case.

That is, at the least it's impossible to sort intersecting triangles without splitting them. If you use alpha keying with alpha blending and depth buffering to produce a cut-out, you will at least get a somewhat correct-looking image if you don't split. All the alpha keying does is prevent the depth buffer being updated with the depth of pixels that are completely transparent.

-Colourless

yes, colourless, i'm aware that when it comes to intersections, the combination of alpha-keying + alpha-blending + depth buffering would produce relatively more visually-correct results when taken in the context of a single frame. but when you consider multiple successive frames, it also requires some frame-by-frame coherence in the drawing order of those intersecting objects for that premise to hold; otherwise across-frames blend-flipping would happen (as in general you have no guarantee that objA will always be drawn prior to objB when fetching them arbitrarily from the database).
anyhow, my sole point was that alpha-keying + alpha-blending + depth buffering does not allow the engine to escape the depth sorting of blendables altogether, as one'd want as correct a blend result as possible from those pixels which passed the alpha test and got blended. but let's drop the topic already, it's boring.
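For reference, the keyed-depth-write combination being debated can be sketched per-pixel. Everything here (function names, the alpha reference, the depth convention of smaller-is-nearer) is a hypothetical illustration, not any particular hardware's behaviour.

```python
# One-pixel model: colour is alpha-blended, but the depth buffer is only
# written when the alpha key passes, so fully transparent texels never
# occlude fragments drawn behind them later.
def draw_pixel(pixel, src_rgb, alpha, depth, alpha_ref=0.01):
    dst_rgb, dst_depth = pixel
    if depth >= dst_depth:                      # depth test: fragment is behind
        return pixel
    # alpha blend: src over dst
    blended = tuple(alpha * s + (1.0 - alpha) * d
                    for s, d in zip(src_rgb, dst_rgb))
    # alpha keying: transparent fragments leave the depth buffer untouched
    new_depth = depth if alpha > alpha_ref else dst_depth
    return (blended, new_depth)

# a fully transparent texel in front does not block a later, farther opaque draw
pixel = ((0.0, 0.0, 0.0), 1.0)                         # black background, far
pixel = draw_pixel(pixel, (0.0, 1.0, 0.0), 0.0, 0.5)   # transparent, near
pixel = draw_pixel(pixel, (1.0, 0.0, 0.0), 1.0, 0.7)   # opaque red, farther
```

Note what the sketch also makes visible: the blend itself is still order-dependent, which is exactly why depth sorting of the blendables doesn't go away.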
 