sebbbi, graphically, your game is pretty amazing. I like the compromises you guys made. There are some nitpicks (the bloom and aliasing on the rider need to be addressed imo; lesser issues being the explosion fire and the pixelated checkpoints). But that is being pretty picky--you guys made a lot of good choices, and the end result is really cohesive and looks very nice.
Yes, our artists have complained a lot about my quickly done bloom filter and tonemapping implementation. It was our first Xbox 360 game, and I had to code many algorithms twice to find the best-performing one. At first the engine was based on LiDR (light indexed deferred rendering), then it was a forward renderer, and after that a more traditional deferred renderer... At the start we had stencil shadows for some months (a direct port from the Trials 2 SE PC game), but due to all the stencil shadow problems, we quickly switched to ESM shadow maps. I didn't have time to try a VSM/ESM hybrid, and I didn't have time to finish my parallel memexport based SSAO filter. During the project I got a volume texture based SSAO field renderer ready, but it cost too much performance, and 30 fps was not something we wanted.

For the next engine version (the next game) I think we will have lots of new stuff coming up. Trials HD was a good first try on Xbox 360, but we still have to move forward to match the graphics quality of the latest best-looking AAA titles. 60 fps is our goal again, and I have designed our next technology around it. I try to offload as much processing from the frame as possible. Not all things have to be fully updated every frame. Prediction and real-time surface baking (we now have a really fast GPU DXT compressor as well) are the keys to reducing the frame workload in our next technology.
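For those wondering why ESM was attractive after the stencil shadow problems: this is just the textbook ESM formulation, not our actual shader code, but the appeal is that the shadow map stores exp(c * depth), which can be blurred directly in texture space, so the receiver-side test collapses into a single multiply:

```cpp
// Standard ESM (exponential shadow map) visibility test, shown as plain C++
// for clarity. This is the generic technique, not our exact Trials HD shader.
#include <algorithm>
#include <cmath>

// 'c' controls the sharpness of the exponential falloff; larger values give
// harder shadow edges but run into precision/light-leak issues sooner.
float esmVisibility(float filteredExpOccluderDepth, // pre-blurred exp(c * occluder depth) from the shadow map
                    float receiverDepth,            // depth of the pixel being shaded, in shadow map space
                    float c)
{
    // visibility ~ E[exp(c * z_occluder)] * exp(-c * z_receiver)
    // ~ 1 when the receiver is unoccluded, falls toward 0 when it is behind an occluder.
    float v = filteredExpOccluderDepth * std::exp(-c * receiverDepth);
    return std::clamp(v, 0.0f, 1.0f);
}
```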
One of the most interesting technologies I developed during the Trials HD production was definitely the projected surface cache texturing system. We had two screen-resolution surface cache textures that we projected onto the geometry. One texture was rendered every 4th frame at the predicted world state (the physics engine integrator solved the moving object positions) four frames in the future. The second texture (the "past surface cache") was simply the last predicted frame. Both of these textures had the object ID stored in the alpha channel. The renderer rendered the whole visible geometry sampling only these two projected textures (lighting, shadowing, etc. were already baked into the texture), and selected the pixel from the texture that was visible in the current object/camera view (an object ID comparison per pixel). If both surfaces resulted in a cache miss (incorrect object ID), the last frame's color was used (*); if both were visible, they were blended together (free 2xSSAA inside the polygon surfaces that were visible in both projected surface textures). This system tripled our frame rate, but it made the moving shadows update only every 4th frame, and we had some graphics artifacts (see *). In the end we didn't use this technology in Trials HD.
(*): The correct way to handle a cache miss would be to render the pixel again. However, this is very expensive, as the cache miss areas are just a few pixels each and are scattered around the screen. Using the last frame's pixel causes some artifacts if, for example, the foreground is moving fast and the background is moving in a different direction.
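To make the per-pixel selection logic above a bit more concrete, here is a rough sketch written as plain C++ with made-up names (in the real engine this was of course a pixel shader sampling the two projected screen-space cache textures):

```cpp
// Rough sketch of the surface cache resolve described above. Struct and
// function names are invented for illustration only.
#include <cstdint>

struct CacheSample
{
    float   r, g, b;   // fully lit/shadowed surface color stored in the cache
    uint8_t objectId;  // object ID stored in the alpha channel
};

struct Color { float r, g, b; };

Color resolveSurfaceCache(const CacheSample& future,   // cache rendered every 4th frame, predicted 4 frames ahead
                          const CacheSample& past,     // the previous predicted cache
                          uint8_t currentObjectId,     // ID of the object being rasterized right now
                          const Color& lastFrameColor) // fallback for a double cache miss (*)
{
    bool futureHit = (future.objectId == currentObjectId);
    bool pastHit   = (past.objectId   == currentObjectId);

    if (futureHit && pastHit)
    {
        // Both caches see this surface: blend them (the "free 2xSSAA" case).
        return { (future.r + past.r) * 0.5f,
                 (future.g + past.g) * 0.5f,
                 (future.b + past.b) * 0.5f };
    }
    if (futureHit)
        return { future.r, future.g, future.b };
    if (pastHit)
        return { past.r, past.g, past.b };

    // Double cache miss: reuse the last frame's color, the cheap but
    // artifact-prone fallback discussed in the footnote above.
    return lastFrameColor;
}
```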