Programming the next gen

Frank

As with GPUs, you might want to take a totally different approach to programming the next generation of games. Like taking a data-centric view and just making a lot of individual "transformations" for all that data, so it can be streamed.

Like first walking through the data to find objects that have scripts or other actions attached, and throwing them into a stream (to the next process) that sorts them according to the actions that have to be taken, such as moving them around and firing off new actions and objects, like bullets. And having the output of those streams sorted (by another process) for transformations, changes in the meshes or sub-objects, and having all of them come together to be pre-processed, like calculating textures and skinning.

And the same would go for the individual stages, like moving the objects (collision detection and physics), modifying the meshes (fluids, volumes and physics again), and building and assembling objects by calculating and matching their parts.

Like a tree, consisting of streams, which are all handled by separate threads and swapped on demand. Or like a database that is sorted repeatedly and throws out objects to the stream that executes the change that is next in line for that object.
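To make that concrete, here is a minimal C++ sketch of the idea (Object, ActionKind and the per-stream work are all made up for illustration): one pass walks the data and buckets objects into per-action streams, and each stream is then processed as one big homogeneous batch on its own thread.

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

enum class ActionKind { Move, Fire, Skin, Count };

struct Object {
    ActionKind pendingAction;
    float x, y, z;
};

using Stream = std::vector<Object*>;
using StreamSet = std::array<Stream, static_cast<std::size_t>(ActionKind::Count)>;

// Pass 1: walk all objects once and bucket them by the action they need next.
StreamSet sortIntoStreams(std::vector<Object>& objects) {
    StreamSet streams;
    for (Object& obj : objects)
        streams[static_cast<std::size_t>(obj.pendingAction)].push_back(&obj);
    return streams;
}

// Pass 2: each stream is one batch of identical work, so it can run on its own
// thread and walk its data linearly instead of hopping between systems.
void processStream(ActionKind kind, Stream& stream) {
    for (Object* obj : stream) {
        switch (kind) {
            case ActionKind::Move: obj->x += 1.0f; break;  // placeholder "move" transform
            case ActionKind::Fire: /* emit new objects (bullets) into a later stream */ break;
            case ActionKind::Skin: /* pre-process: skinning, texture calculations */ break;
            default: break;
        }
    }
}

void runFrame(std::vector<Object>& objects) {
    StreamSet streams = sortIntoStreams(objects);
    std::vector<std::thread> workers;
    for (std::size_t i = 0; i < streams.size(); ++i)
        workers.emplace_back(processStream, static_cast<ActionKind>(i), std::ref(streams[i]));
    for (std::thread& w : workers) w.join();
}
```

The "tree" part would come from streams feeding each other, e.g. the Fire stream emitting new bullet objects that get sorted into later passes, rather than everything living in one flat array.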

Does that sound like how we all discussed it might have to be done?
 
corysama

Your description is a bit more dramatic than I had in mind, but it is clear to me that we need to shift to a style of batching up large jobs rather than doing everything whenever it is convenient.

A big example of this is collision detection and response. In earlier games I've worked on, each AI would make multiple queries into the collision system in a single update with movement and decision-making code mixed together. This style is very convenient to write but it has two serious detriments on modern systems:

1) Because you are bouncing around between the AI, the movement, the collision detection and who-knows-what other systems, you are thrashing your code and data cache like mad.

2) Because the collision detection methods are being called so frequently they have to be very low latency. Because you are doing tests one at a time, the system cannot afford to do any expensive setup that would make doing a large number of tests faster. You are getting low latency at the cost of a very low throughput.
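For concreteness, here is a sketch of that scattered style with made-up types (not anyone's actual engine code): decision-making, movement and immediate collision queries are interleaved in one update, so every call has to answer right away and execution keeps hopping between systems.

```cpp
struct Vec3 { float x, y, z; };

struct CollisionWorld {
    // Each call must answer immediately, so the system cannot amortize any
    // expensive setup (sorting, broad-phase rebuilds) across a batch of tests.
    bool raycast(const Vec3& origin, const Vec3& dir, float maxDist) const {
        (void)origin; (void)dir; (void)maxDist;
        return false;  // placeholder; a real world would test its geometry here
    }
};

struct AIAgent {
    Vec3 position{}, forward{0, 0, 1}, velocity{};

    void update(const CollisionWorld& world, float dt) {
        // Decision-making, movement and collision queries all mixed together:
        // each raycast() jumps into different code and data, thrashing caches.
        if (world.raycast(position, forward, 10.0f))
            velocity.x = -velocity.x;                   // steer away from something ahead
        if (!world.raycast(position, forward, 1.0f)) {  // "is the next step clear?"
            position.x += velocity.x * dt;
            position.y += velocity.y * dt;
            position.z += velocity.z * dt;
        }
    }
};
```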

If instead it was understood that at a certain point in the frame the collision detection system was going to take over the machine until it did all of the collision detection work necessary for that frame, it would be able to get that work done in much less time than the aggregate of the scattered requests. However, that means the AI can't get immediate turnaround on queries. It can only get results once per frame.
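A minimal sketch of what such a batched interface could look like, with hypothetical names: gameplay code submits queries at any time and gets a handle back, the collision system does all the work in one flush() at a fixed point in the frame, and results are read after that.

```cpp
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };

struct RayQuery  { Vec3 origin, direction; float maxDist; };
struct RayResult { bool hit; float distance; };

class CollisionSystem {
public:
    // Called from AI/gameplay code at any time: just records the query.
    std::size_t submitRay(const RayQuery& q) {
        queries_.push_back(q);
        return queries_.size() - 1;   // handle used to look up the result later
    }

    // Called once per frame at a fixed point: the system now "owns the machine"
    // and can do expensive setup (sorting, building/refitting a broad phase)
    // that pays for itself across the whole batch of tests.
    void flush() {
        results_.resize(queries_.size());
        // ... build acceleration structures once for the whole batch here ...
        for (std::size_t i = 0; i < queries_.size(); ++i)
            results_[i] = castRay(queries_[i]);
        queries_.clear();
    }

    // Read by the AI after flush(); results arrive once per frame,
    // not immediately at the call site.
    const RayResult& result(std::size_t handle) const { return results_[handle]; }

private:
    RayResult castRay(const RayQuery& q) const {
        (void)q;
        return RayResult{false, 0.0f};  // placeholder for the actual narrow-phase test
    }

    std::vector<RayQuery>  queries_;
    std::vector<RayResult> results_;
};
```

The AI then typically acts on the previous frame's results, which is exactly the once-per-frame turnaround trade-off described above.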

This style shift applies to many other areas as well.
 
Interesting, thanks for the info. :)
 