Two Questions that are bothering me

#1. Regarding mechanical physics implementations in today's games: would it be possible to create a physics engine that runs on its own hardware? If so, I think it would be a fabulous idea to create an OpenPhysics language that could run in software on anyone's computer but get a speed boost if you have the physics hardware. Making the engine free would create a standard everyone could use, and hopefully, once people start using it in software (market saturation), the hardware could be sold to speed things up a little or, as a trade-off, to allow more complicated scenes.

Having a standard for physics could help with development costs and speed. I think this could also be useful for any industry that would like to test out a product without building it.

#2. Alpha textures seem to be a problem with AA. Is there any way that, when the GPU is loading a texture, it could see that alpha bits are in use and maybe just anti-alias that one texture when it is rendered? From what I understand, Matrox could implement this in software, since their card is fill-rate limited right now, and in combination with FAA it would create an even better-looking scene.

OK, I am glad I got those off my mind. I hope they are not too foolish, as I really respect this forum's opinions.
 
As for the physics simulation hardware, I'm not sure. What you'd need is a highly programmable and very powerful floating-point processor. Since most basic physics math deals with vectors, it may be possible to convert a very powerful vertex shader unit for use with basic physics calculations for games.

I seriously doubt it would be economical for there to actually be yet another coprocessor for the task, however, and it may be best to just use double-precision SIMD available in modern processors (P4, and apparently the upcoming Hammer) for such calculations.
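Just to illustrate the sort of vector math involved, here's a tiny, self-contained C++ sketch of a physics step (the Vec3/Body/step names are made up for the example; a SIMD or shader-style implementation would run the same arithmetic across many bodies at once):

```cpp
#include <cstdio>

struct Vec3 { double x, y, z; };

Vec3 add(Vec3 a, Vec3 b)     { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
Vec3 scale(Vec3 v, double s) { return {v.x * s, v.y * s, v.z * s}; }

struct Body { Vec3 pos, vel; double mass; };

// One semi-implicit Euler step: update velocity from the net force,
// then position from the new velocity.
void step(Body& b, Vec3 force, double dt)
{
    Vec3 accel = scale(force, 1.0 / b.mass);   // a = F / m
    b.vel = add(b.vel, scale(accel, dt));      // v += a * dt
    b.pos = add(b.pos, scale(b.vel, dt));      // x += v * dt
}

int main()
{
    Body ball{{0.0, 0.0, 0.0}, {10.0, 10.0, 0.0}, 1.0};
    for (int i = 0; i < 100; ++i)                         // simulate one second
        step(ball, {0.0, -9.81 * ball.mass, 0.0}, 0.01);  // gravity only
    std::printf("after 1s: x = %.2f, y = %.2f\n", ball.pos.x, ball.pos.y);
    return 0;
}
```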

As for alpha textures, it might be possible, but I don't see the point. It's a lot of work for a technique that apparently isn't going to be used in future games often, if at all.
 
LittlePenny said:
#2. Alpha textures seem to be a problem with AA. Is there any way that, when the GPU is loading a texture, it could see that alpha bits are in use and maybe just anti-alias that one texture when it is rendered? From what I understand, Matrox could implement this in software, since their card is fill-rate limited right now, and in combination with FAA it would create an even better-looking scene.

This is only a problem with FAA and MSAA techniques; true SSAA techniques also oversample the texture and will AA the punch-through edges.

Developers can help out here by selecting better alpha test reference values: by using filtering you can create a soft-edged effect on punch-through textures. Punch-through is 0 or 1, visible or invisible; using filtering you can generate in-between values that create a soft gradient from visible to invisible. This can act as AA, but it requires developer and artist work to avoid weird artefacts.
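As a rough illustration of the filtering idea (a minimal sketch only; the function name and the 3x3 box filter are just assumptions for the example, not an exact recipe):

```cpp
#include <vector>

// Box-filter a binary punch-through alpha mask (values 0.0 or 1.0) so the
// edges pick up in-between values, giving a soft visible-to-invisible
// gradient. 'w' and 'h' are the texture dimensions.
std::vector<float> softenAlpha(const std::vector<float>& alpha, int w, int h)
{
    std::vector<float> out(alpha.size());
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            float sum = 0.0f;
            int count = 0;
            for (int dy = -1; dy <= 1; ++dy) {          // 3x3 neighbourhood
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h) {
                        sum += alpha[ny * w + nx];
                        ++count;
                    }
                }
            }
            out[y * w + x] = sum / count;               // 0..1 gradient at edges
        }
    }
    return out;
}
```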

K~
 
Kristof said:
Developers can help out here by selecting better alpha test reference values: by using filtering you can create a soft-edged effect on punch-through textures. Punch-through is 0 or 1, visible or invisible; using filtering you can generate in-between values that create a soft gradient from visible to invisible. This can act as AA, but it requires developer and artist work to avoid weird artefacts.

K~

It's nowhere near that complicated. All you need to do is change from an alpha test to an alpha blend and select, of course, the proper blend function.

For some situations, this basically requires changing two lines of code (alpha func + alpha test enable) to two other lines (blend func + alpha blend enable).
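In legacy fixed-function OpenGL terms, a minimal sketch of that switch might look like this (illustrative only, not any particular engine's code):

```cpp
#include <GL/gl.h>

// Punch-through: alpha test only, texels either pass or are discarded.
void usePunchThrough()
{
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);   // keep texels with alpha > 0.5
}

// Soft edges: alpha blend instead, texels are mixed with the framebuffer.
void useAlphaBlend()
{
    glDisable(GL_ALPHA_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
}
```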

Unfortunately, there is another stipulation, and that has to do with ordering. For alpha-blended textures to be displayed correctly, they must be drawn in back-to-front order. Fortunately, since alpha textures aren't used on highly complex geometry, this isn't hard to do in most cases (in fact, UT displayed alpha textures perfectly with no extra coding...). Of course, you could possibly do both an alpha test and an alpha blend to make improperly sorted textures look "okay" instead of "downright wrong."
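The ordering requirement itself boils down to sorting the transparent surfaces by camera distance before drawing them; a hypothetical sketch:

```cpp
#include <algorithm>
#include <vector>

// Sort transparent surfaces farthest-first relative to the camera so they
// can be blended back to front.
struct Surface { float x, y, z; /* ...whatever else the renderer needs... */ };

void sortBackToFront(std::vector<Surface>& surfaces,
                     float camX, float camY, float camZ)
{
    auto distSq = [&](const Surface& s) {
        float dx = s.x - camX, dy = s.y - camY, dz = s.z - camZ;
        return dx * dx + dy * dy + dz * dz;
    };
    std::sort(surfaces.begin(), surfaces.end(),
              [&](const Surface& a, const Surface& b) {
                  return distSq(a) > distSq(b);          // farthest drawn first
              });
}
```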
 
Chalnoth said:
(in fact, UT displayed alpha textures perfectly with no extra coding...). Of course, you could possibly do both an alpha test and an alpha blend to make improperly sorted textures look "okay" instead of "downright wrong."

you can't display semi-transparent geometry/textures "perfectly with no extra coding" on an IMR unless said geometry is handled in the bsp tree. unfortunately, when a room is full of gunfire (as a rule implemented with various alpha effects) the bsp has little help to offer there - sorting of the projectile objects is unavoidable.

ps: how can combining alpha-blending with alpha-keying escape the need for proper ordering of transparent objects?
 
darkblu said:
ps: how can combining alpha-blending with alpha-keying escape the need for proper ordering of transparent objects?

Simple. Add in the alpha test and you'd get at least somewhat z-correct alpha-blended textures. This way you'd be able to see at least some of the objects that lie behind the alpha-blended texture that were drawn afterwards. It wouldn't look correct at the edge of the alpha texture where the blending was taking place, but those differences might not be noticeable if the texture pixels are always fairly small with respect to the screen.
 
A hardware physics engine would be a little expensive, but not impossible. I would think you'd take a 128-bit RISC chip and add a vector processor or two (that would be the expensive bit) and a very large amount of on-chip cache. MathWorks' MATLAB contains all the bits and pieces you'd need to produce a damn fine physics model, and it's fast. I seem to remember that somebody once produced a simple physics chip, either on a GFX card or in a console, based on a Hitachi vector processor.
 
Just one little thing:

I don't think there'd be any point to using 128-bit floats (if that's what you meant by a 128-bit RISC chip, of course) for physics calcs in games. I don't doubt in the least that 128-bit floats could be useful in real-world physics simulations.
 
Chalnoth said:
Simple. Add in the alpha test and you'd get at least somewhat z-correct alpha-blended textures. This way you'd be able to see at least some of the objects that lie behind the alpha-blended texture that were drawn afterwards. It wouldn't look correct at the edge of the alpha texture where the blending was taking place, but those differences might not be noticeable if the texture pixels are always fairly small with respect to the screen.

the alpha-test would just discard some portions of the alpha-textured geometry. how would that solve the problem with those portions which pass the test and need to be blended? (as you deliberately specified combining testing w/ blending)

alpha-keying would solve the issue IFF the texture is not intended to be blended. i.e. when anything which passes the alpha-keying test is treated as opaque (no blending)
 
Chalnoth said:
Just one little thing:

I don't think there'd be any point to using 128-bit floats (if that's what you meant by a 128-bit RISC chip, of course) for physics calcs in games. I don't doubt in the least that 128-bit floats could be useful in real-world physics simulations.

Um, yeah, I did mean 128-bit floats. I just have a fixation on doing things right, and I forget that game worlds only need simple physics models. Though it would be nice to find out what level of precision the D3 physics engine runs at internally, if their world is as interactive and responsive as they say.
 
darkblu said:
the alpha-test would just discard some portions of the alpha-textured geometry. how would that solve the problem with those portions which pass the test and need to be blended? (as you deliberately specified combining testing w/ blending)

alpha-keying would solve the issue IFF the texture is not intended to be blended. i.e. when anything which passes the alpha-keying test is treated as opaque (no blending)

You can set the alpha test threshold to any alpha value you want. Thus, not drawing any pixel with an alpha of, say, less than 0.1 would still leave most of the blend.
 
BoardBonobo said:
Um, yeah, I did mean 128-bit floats. I just have a fixation on doing things right, and I forget that game worlds only need simple physics models. Though it would be nice to find out what level of precision the D3 physics engine runs at internally, if their world is as interactive and responsive as they say.

Since x86 processors don't natively support 128-bit floats, I seriously doubt that DOOM3 can use above 64-bit. From my own work, I don't think they could use 32-bit and have it still be remotely accurate.
 
Chalnoth said:
BoardBonobo said:
Um, yeah, I did mean 128-bit floats. I just have a fixation on doing things right, and I forget that game worlds only need simple physics models. Though it would be nice to find out what level of precision the D3 physics engine runs at internally, if their world is as interactive and responsive as they say.

Since x86 processors don't natively support 128-bit floats, I seriously doubt that DOOM3 can use above 64-bit. From my own work, I don't think they could use 32-bit and have it still be remotely accurate.

You can make your floating-point numbers precise to any number of bits in software, regardless of what the hardware supports; it just incurs some overhead. So you could compute at 128-bit precision and then round the result down to the accuracy you will ultimately use.
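As a small taste of that idea (not 128-bit arithmetic, just a sketch showing software recovering precision the hardware would otherwise drop), compensated (Kahan) summation:

```cpp
// Compensated (Kahan) summation: a second variable carries the low-order
// bits that each ordinary add would throw away, at the cost of a few extra
// operations per step.
double kahanSum(const double* values, int n)
{
    double sum = 0.0;
    double comp = 0.0;                   // running compensation
    for (int i = 0; i < n; ++i) {
        double y = values[i] - comp;
        double t = sum + y;              // low bits of y are lost here...
        comp = (t - sum) - y;            // ...and recovered here
        sum = t;
    }
    return sum;
}
```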

Though I guess in games you would use the fastest option available, which would be whatever size the hardware supports.
 
BoardBonobo said:
Um, yeah, I did mean 128-bit floats. I just have a fixation on doing things right, and I forget that game worlds only need simple physics models. Though it would be nice to find out what level of precision the D3 physics engine runs at internally, if their world is as interactive and responsive as they say.

I don't think that kind of precision would have any meaning. Our models of the world just aren't anywhere near precise enough to justify it. You are talking about over 30 decimal digits of precision.
Example - throw a ball through the air. A simple parabolic function doesn't even cut it beyond the first digit; then you have to take air resistance into account. But a simple model for that isn't good for more than another one or two digits; then you have to take the surface into account, which we simply don't have a unified model for (a tennis ball is different from a golf ball is different from a baseball). Of course, we've still neglected wind, atmospheric pressure.....
32 bits should suffice just fine, unless of course you use algorithms which aren't stable, or are even divergent, but I can't really see that being an issue as long as we're dealing with in-game classical physics. Does the gun bounce 2.11923 meters or 2.119234571223 meters? I don't know of ANY physics model that is accurate beyond the 15+ significant digits that 64 bits offer.

Not my area of expertise though, so please correct me if I'm wrong. Some algorithms can of course chew up precision real fast.

Though I guess in games you would use the fastest option available, which would be whatever size the hardware supports.

Sounds like a reasonable assumption. :)

Entropy

PS. In my field I've seen tons of completely useless research data being published by people who use calculational programs without understanding the underlying assumptions/precision in the model the calculational program uses. And it passes through peer review. The problem has grown as the calculational programs have become more accessible. Pet peeve, 'n all. ;)
 
Chalnoth said:
You can set the alpha test threshold to any alpha value you want. Thus, not drawing any pixel with an alpha of, say, less than 0.1 would still leave most of the blend.

and exactly that blending is the problem - you need to have sorted the geometry in order to get correct blending. what's so hard to understand here?

i repeat: alpha-keying in no way allows you to skip the sorting of semi-transparent geometry, unless you don't want to blend that geometry at all, i.e. when you treat the pixels which have passed the alpha-keying test as opaque.
 
darkblu, I believe Chalnoth is talking about the impossible, or at least very hard, case.

That is, at the very least it's impossible to sort intersecting triangles without splitting them. If you use alpha keying together with alpha blending and depth buffering to produce a cut-out, you will at least get a somewhat correct-looking image even if you don't split. All the alpha keying does is prevent the depth buffer from being updated with the depth of pixels that are completely transparent.
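A minimal sketch of that compromise in legacy OpenGL state calls (the actual draw call is omitted; this is illustrative, not production code):

```cpp
#include <GL/gl.h>

// Blend for soft edges, but alpha-test away the (nearly) invisible texels so
// they don't write depth or occlude things drawn later.
void setCutoutBlendState()
{
    glEnable(GL_DEPTH_TEST);                            // normal depth testing

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // standard alpha blend

    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.1f);                      // discard alpha <= 0.1
}
```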

-Colourless
 
Entropy said:
I don't think that kind of precision would have any meaning. Our models of the world just aren't anywhere near precise enough to justify it. You are talking about over 30 decimal digits of precision.

That kind of precision is required whenever large numbers of iterations are used (iterative process example: take a state, increase time by a tiny amount and recalculate; repeat with the new state). When you calculate on the same data over and over and over again, it becomes quite important to make sure that you aren't introducing errors into the calculations from limited math precision.

One example of a good simulation that needs highly precise math would be an orbital simulation. If you're going to try to accurately simulate dozens, or even hundreds, of objects all attracting each other, you need all of the calculations to be absolutely accurate, or you're going to have a very hard time coming remotely close to the right answer.

From my experience with doing pure math calculations, it doesn't take much for the errors in 32-bit calcs to creep to around four or so decimal places. Four decimal places just isn't enough for most of the stuff I've worked on, but I suppose it might be okay for games.
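A quick toy demonstration of that kind of creep (nothing physical, just accumulating a small step many times in float vs. double):

```cpp
#include <cstdio>

// Accumulate a 0.001 time step ten million times in float and in double.
// The double total stays at 10000 to many digits; the float total drifts
// visibly away from it.
int main()
{
    float  tf = 0.0f;
    double td = 0.0;
    for (int i = 0; i < 10000000; ++i) {
        tf += 0.001f;
        td += 0.001;
    }
    std::printf("float:  %f\ndouble: %f\n", tf, td);
    return 0;
}
```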

Still, in the physics department at my school, 64-bit floats are used commonly.
 
Chalnoth said:
Entropy said:
I don't think that kind of precision would have any meaning. Our models of the world just aren't anywhere near precise enough to justify it. You are talking about over 30 decimal digits of precision.

That kind of precision is required whenever large numbers of iterations are used

I know, those problems were what I referred to as "algorithms" above. I simply didn't consider that calculating trajectories would become an iteration even though transfer of momentum and the like wouldn't. Brain lag. ;) Still, the error propagation should be benign, as far as I can see.

Honestly, are there cases where 32-bit floats would cause unnatural behaviour in a game setting?

Entropy
 
There are already prebuilt physics libraries available to game developers if they need them (such as Havok and MathEngine). They aren't open standards, but I don't think an open standard is really necessary.

Due to the way most physics algorithms work, a stand-alone physics coprocessor would have to be nearly as complex as the host processor (integrating over variable-sized arrays of forces, etc.) in order to do a halfway decent job, so it doesn't seem like there is a huge gain to be had by trying to create custom physics hardware.
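For what it's worth, here's a toy C++ sketch of that "variable-sized array of forces" pattern (all names hypothetical); the data-dependent inner loop is exactly the sort of thing fixed-function hardware handles poorly:

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

struct Body {
    Vec3 pos{0, 0, 0}, vel{0, 0, 0};
    double mass = 1.0;
    std::vector<Vec3> forces;            // length varies per body, per frame
};

// Sum whatever forces were gathered this frame, then integrate one step.
void integrate(std::vector<Body>& bodies, double dt)
{
    for (Body& b : bodies) {
        Vec3 total{0, 0, 0};
        for (const Vec3& f : b.forces) { // data-dependent loop length
            total.x += f.x;
            total.y += f.y;
            total.z += f.z;
        }
        b.vel.x += total.x / b.mass * dt;
        b.vel.y += total.y / b.mass * dt;
        b.vel.z += total.z / b.mass * dt;
        b.pos.x += b.vel.x * dt;
        b.pos.y += b.vel.y * dt;
        b.pos.z += b.vel.z * dt;
        b.forces.clear();                // forces are re-gathered next frame
    }
}
```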
 