On a wider gamedev level, it strikes me that this spatial representation, and the larger game world representation, is a crux. Games are effectively a big arse database, with queries based on position for 'physical' interactions, queries of properties to enact data changes such as 'health', and queries of objects to add/remove. To see which objects have collided with a given object, that object is tested against all the others. You end up hammering RAM for everything, just to check whether a property holds a given value, and we develop acceleration structures to work around this. Either you do lots of random access, or you structure your workloads as linear jobs accessing streamed data structures.

I feel pretty confident in saying the primary problems for general game workloads right now are on the BVH building/streaming/LOD side, but I don't think there's a silver bullet.
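To make the database framing concrete, here's that brute-force collision query as a toy C++ sketch (the struct and names are made up for illustration, not taken from any engine):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical object record, illustration only.
struct GameObject {
    float x, y, z;   // position
    float radius;    // bounding-sphere radius
    int   health;    // one of the 'property' columns
};

// The naive "database query": test the given object against every other.
// O(n) per query, O(n^2) for all pairs -- the RAM-hammering pattern that
// acceleration structures exist to avoid.
std::vector<std::size_t> collisionsWith(const std::vector<GameObject>& objects,
                                        std::size_t given)
{
    std::vector<std::size_t> hits;
    const GameObject& a = objects[given];
    for (std::size_t i = 0; i < objects.size(); ++i) {
        if (i == given) continue;
        const GameObject& b = objects[i];
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        float r  = a.radius + b.radius;
        if (dx * dx + dy * dy + dz * dz <= r * r)
            hits.push_back(i);
    }
    return hits;
}
```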
RT is just more of the same. You have to test which objects lie on a trajectory, so you have to test all the objects. BVHs reduce the search and test requirements for RT, but it's still really a question of a database query and optimising for that. How do we find which 'objects' satisfy certain criteria out of all the objects? BVHs are a workaround for our database modelling. If it were hypothetically possible to store every object's relationship with every other, we wouldn't have to worry about BVH generation and could just select from our database. That requires more resources than we have, so we look for workarounds, but I wonder if those workarounds are missing a trick somewhere?
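For concreteness, a minimal BVH traversal sketch: the 'which objects lie on this trajectory' query, with the tree pruning whole subtrees the ray misses. The node layout is a toy one (real implementations flatten and pack this heavily), and the names are mine:

```cpp
#include <cfloat>
#include <vector>

struct AABB { float min[3], max[3]; };

// Toy BVH node; -1 child indices mark a leaf holding one object.
struct BVHNode {
    AABB box;
    int  left   = -1;
    int  right  = -1;
    int  object = -1;
};

// Standard slab test: does the ray (origin o, precomputed 1/direction)
// cross this node's bounding box?
bool rayHitsBox(const float o[3], const float invDir[3], const AABB& b)
{
    float tmin = 0.0f, tmax = FLT_MAX;
    for (int i = 0; i < 3; ++i) {
        float t0 = (b.min[i] - o[i]) * invDir[i];
        float t1 = (b.max[i] - o[i]) * invDir[i];
        if (t0 > t1) { float t = t0; t0 = t1; t1 = t; }
        if (t0 > tmin) tmin = t0;
        if (t1 < tmax) tmax = t1;
        if (tmin > tmax) return false;
    }
    return true;
}

// The "select objects along a trajectory" query: skip every subtree
// whose box the ray misses -- the index doing the work of the WHERE clause.
void queryRay(const std::vector<BVHNode>& nodes, int node,
              const float o[3], const float invDir[3],
              std::vector<int>& candidates)
{
    const BVHNode& n = nodes[node];
    if (!rayHitsBox(o, invDir, n.box)) return;
    if (n.object >= 0) { candidates.push_back(n.object); return; }
    queryRay(nodes, n.left,  o, invDir, candidates);
    queryRay(nodes, n.right, o, invDir, candidates);
}
```

Seen this way, the BVH is just an index over the position column, and building/refitting it is the index-maintenance cost.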
I'm starting to think the whole computer topology and our working models are far from ideal. Object-oriented development seems a dreadful fit for game optimisation; everything should be stored in RAM in a way that maximises query efficiency rather than makes it easy for devs to visualise what's going on. This is where we have data-oriented engines, but we aren't extending that to data-oriented hardware. Hardware is still driven, at the conceptual level, by the concepts laid down in creating general-purpose random-access compute machines, with the focus on calculation, as opposed to data-manipulation machines designed for finding and modifying data in massive datasets.
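The data-oriented point as a sketch, assuming a simple structure-of-arrays layout (field names made up):

```cpp
#include <cstddef>
#include <vector>

// Object-oriented layout: each object is a bag of fields, so a query
// that only needs 'health' still drags positions through the cache.
struct ObjectAoS {
    float x, y, z;
    float vx, vy, vz;
    int   health;
};

// Data-oriented layout: one contiguous array per field.
struct WorldSoA {
    std::vector<float> x, y, z;
    std::vector<int>   health;
};

// Linear job over streamed data: a 'where health < threshold' scan
// touches only the bytes the query actually needs.
std::vector<std::size_t> dyingObjects(const WorldSoA& w, int threshold)
{
    std::vector<std::size_t> out;
    for (std::size_t i = 0; i < w.health.size(); ++i)
        if (w.health[i] < threshold)
            out.push_back(i);
    return out;
}
```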
Conceptually, maybe the ultimate game machine has a relatively small amount of calculation capability but much stronger, more capable machinery for dealing with data? Instead of spending time computing acceleration structures to find data, could we have a more direct way of finding that data?
/morning pondering
Edit: In short, the solution for fast ray tracing should also be leveraged for fast physics queries etc. That is, rather than treating RT as one problem, physics as another, and AI as another, perhaps a homogenised view could simplify these workloads into a single architectural problem that would benefit from a different hardware philosophy?
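To sketch what that homogenised view might look like at the interface level (entirely hypothetical; no engine or API exposes exactly this):

```cpp
#include <vector>

// Hypothetical unified spatial-query interface: RT, physics, and AI
// all expressed as selections against one spatial index. Names and
// shape are made up to illustrate the idea, not taken from any engine.
struct Vec3 { float x, y, z; };

struct SpatialIndex {
    // RT: which objects lie along this trajectory?
    virtual std::vector<int> alongRay(Vec3 origin, Vec3 dir) const = 0;
    // Physics: which objects overlap this region?
    virtual std::vector<int> inSphere(Vec3 centre, float radius) const = 0;
    // AI: which k objects are nearest to this point?
    virtual std::vector<int> nearest(Vec3 point, int k) const = 0;
    virtual ~SpatialIndex() = default;
};
```

Hardware built to accelerate one primitive like that, rather than ray traversal specifically, is roughly the shape of machine the above is pondering.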