Trials: Evolution [XBLA]

sebbbi,

After seeing Epic's Unreal Engine 4 demo I thought of you. They are moving in a direction RedLynx's engines have already reached in shipping products for years: engines with dynamic effects that can be adjusted in real time, and robust tools that allow fast game design as well as instantaneous, iterative gameplay tweaking. Kudos for being an industry trailblazer! :D

The obvious caveat is graphics: Unreal Engine 4 is on high end PC hardware which affords it the luxury of a host of advanced rendering techniques while the Trials engine is on the Xbox 360. To think what RL would have done with that PC setup ;)

My first question is: Was there anything shown in UE4 that surprised you, good or bad? e.g. What did you think of their use of voxel cone tracing in sparse voxel octrees for GI? I didn't read anything about virtualized geometry (like Lionhead were promoting) or virtual texturing (although the mountain scene was probably a hint at that?)

Thinking forward, about 3-4 years to your next engine update on a next-gen console(s), what types of changes do you envision for your renderer and engine? How dependent are these advancements on the hardware released? Is getting 2 GB, 4 GB, or 8 GB of system memory going to have a big impact, or be a deal breaker? (e.g. I hear the voxel tech Epic showed is very RAM intensive.) How about the difference between a 1 TFLOPS and a 2 TFLOPS GPU when making a 720p game at 60Hz? Are there techniques you would really like to use where the difference in memory or GPU performance is a deal breaker?

For the sake of our discussion, let's assume your engine is streamlined for *but not limited to* a downloadable title, but could, of course, be distributed on a Blu-ray disc. The target system would have about 1.8 TFLOPS of GPU performance, 4 x86 cores with standard x86 SIMD units, 8 GB of RAM, and an HDD as standard.

What highlights would your engine have on this hardware?

What improvements to your current engine would you have? I can think of some of the more obvious low-hanging fruit:

* Some of the grass clumps are flat sprites; also when your shadow overlaps any of the blades in the clump the entire clump gets shadowed. A next gen engine would probably have geometry based grass that casts/receives shadows.

* Objects behind particles are very pixelated (e.g. if you have smoke around a tire the tire looks very blocky). A next gen engine would either have a better filter for cut down buffers or you wouldn't need this hack to begin with.

* The edges of shadow silhouettes are lighter where they overlap another shadow. This is simply a performance issue on the 360, as PC games aiming for 30Hz solved this years ago.

* Returning to the start of the map results in texture loading delays. I would think a platform with more memory could allow for the game to keep some textures in cache at certain check points to totally avoid this issue.

Are there other low-hanging eyesores that stick out to you that you would like to address, and that would make a difference?

But bigger picture, what direction do you see yourself going in terms of technology? You guys seem pretty forward-thinking.

Finally, (a) would a GPU dropping from 2 TFLOPS down to 1 TFLOPS be a big deal in your opinion? Is it a deal breaker for some important techniques you, as a developer, would really like to implement?

And (b) on otherwise identical systems, where one has 2 GB of memory and the other 8 GB, would the disparity be a big enough difference that certain techniques that are a worthy investment on the 8 GB system just aren't plausible on the 2 GB one? Do you see good workarounds for this, or would the difference just require avoiding research and development in that area?
 
Gerry, what is the time to beat this week?

I just realized a Plat is 52 seconds and my current time is 1:05. That is a lot of time to shave.
 
My first question is: Was there anything shown in UE4 that surprised you, good or bad? e.g. What did you think of their use of voxel cone tracing in sparse voxel octrees for GI? I didn't read anything about virtualized geometry (like Lionhead were promoting) or virtual texturing (although the mountain scene was probably a hint at that?)
Real time lighting and real time GI are obviously good improvements, and allow much more dynamic worlds. The previous Unreal Engine was best suited for static environments. They have improved their tools drastically to allow a faster test/development/debug cycle. This is something all developers really must spend time and effort on, as content creation costs are rising all the time. UE4 looks really good indeed.
Thinking forward, about 3-4 years to your next engine update on a next-gen console(s), what types of changes do you envision for your renderer and engine? How dependent are these advancements on the hardware released? Is getting 2 GB, 4 GB, or 8 GB of system memory going to have a big impact, or be a deal breaker? (e.g. I hear the voxel tech Epic showed is very RAM intensive.) How about the difference between a 1 TFLOPS and a 2 TFLOPS GPU when making a 720p game at 60Hz? Are there techniques you would really like to use where the difference in memory or GPU performance is a deal breaker?
You can only access a limited amount of RAM during a single frame (16.6 ms). Even with current technology and the current memory sizes (512 MB), you cannot access all the memory in a single frame (the memory bandwidth isn't nearly enough). The ratio of computation performance to memory bandwidth has always been widening. Huge "lookup tables" are only usable with certain styles of access (accessing a very small part of them every frame). Of course, if the accesses are predictable, you can stream the data on demand (and small caches are enough). There are of course some algorithms that require lots of RAM (a limited number of highly unpredictable accesses around a huge structure).
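To put rough numbers on that (a back-of-envelope sketch of my own, not sebbbi's figures): the 360's main memory peaks at about 22.4 GB/s, so even a perfectly streamed workload can touch only around 356 MB in one 60 fps frame, and random access patterns reach a small fraction of that.

C++:
#include <cstdio>

int main() {
    // Back-of-envelope numbers: Xbox 360 main memory peaks at roughly
    // 22.4 GB/s; one 60 fps frame is 1/60 s (~16.6 ms).
    const double bytesPerSecond = 22.4e9;
    const double frameTime      = 1.0 / 60.0;

    // Theoretical best case: perfectly streamed accesses, with nothing
    // else (GPU, other CPU cores) touching the bus.
    const double bytesPerFrame = bytesPerSecond * frameTime;
    std::printf("peak bytes per frame: %.0f MB\n",
                bytesPerFrame / (1024.0 * 1024.0));
    // Prints ~356 MB -- less than the console's 512 MB even at peak.
    return 0;
}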

I personally prefer smaller faster memory to larger and slower memory. EDRAM in Xbox 360 is one good example of this (it's very fast, but small, and still provides huge benefits).
* Some of the grass clumps are flat sprites; also when your shadow overlaps any of the blades in the clump the entire clump gets shadowed. A next gen engine would probably have geometry based grass that casts/receives shadows.
The terrain vegetation system used in Trials Evolution is actually fully geometry based (no sprites). However, we calculate the lighting for each bush using only a single point (an adjusted center point). This is a pretty good approximation as long as the lighting doesn't change radically around the bush. At shadow boundaries this obviously doesn't work well, and it looks distracting when the whole bush is either fully lit or fully shadowed. We actually tried many different techniques for rendering vegetation, but this was the only one with acceptable performance (60 fps) and acceptable quality. We even tried half-resolution vegetation (like in the Motorstorm series, for example), but our artists didn't approve it (it looks awful).
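A minimal sketch of that single-point idea (hypothetical types and names; the real engine's shading is of course more involved):

C++:
struct Vec3 { float x, y, z; };
struct Clump { Vec3 center; /* blade geometry lives elsewhere */ };

// Stub: a real engine would sample the shadow map here.
float sampleShadow(const Vec3& /*worldPos*/) { return 1.0f; }

// Evaluate sun shadowing once at the clump's adjusted center point;
// every blade vertex then reuses the result. Cheap, but the whole bush
// flips between fully lit and fully shadowed at shadow boundaries.
Vec3 clumpLighting(const Clump& clump, const Vec3& sun, const Vec3& ambient) {
    float shadow = sampleShadow(clump.center);   // 0 = shadowed, 1 = lit
    return { ambient.x + sun.x * shadow,
             ambient.y + sun.y * shadow,
             ambient.z + sun.z * shadow };
}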
* Objects behind particles are very pixelated (e.g. if you have smoke around a tire the tire looks very blocky). A next gen engine would either have a better filter for cut down buffers or you wouldn't need this hack to begin with.
Basically a problem only in current 60 fps games. There's not enough time to upsample/repair the half-resolution edges properly (and there's no performance to render particles at full resolution). This same issue can be seen in Gran Turismo 5 when you are driving in the rain (they seem to have 4x4 lower resolution particles, so the issue is more pronounced).
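sebbbi doesn't describe their filter, but a common fix when there is performance headroom is nearest-depth upsampling: when compositing the half-resolution particle buffer, pick the low-res sample whose depth best matches the full-resolution pixel, so particle color doesn't bleed across object silhouettes. A rough sketch with hypothetical buffers:

C++:
#include <algorithm>
#include <cmath>

// Hypothetical buffers, filled by the renderer: full-res scene depth,
// plus the half-res particle color and the depth it was rendered against.
static float fullDepth[720][1280];
static float halfDepth[360][640];
struct Color { float r, g, b, a; };
static Color halfColor[360][640];

Color upsampleParticle(int x, int y) {
    // Compare the full-res pixel's depth against its 2x2 half-res
    // neighborhood and keep the sample with the closest depth.
    float d = fullDepth[y][x];
    int hx = x / 2, hy = y / 2;
    float bestDiff = 1e30f;
    Color best = halfColor[hy][hx];
    for (int j = 0; j < 2; ++j)
        for (int i = 0; i < 2; ++i) {
            int sx = std::min(hx + i, 639), sy = std::min(hy + j, 359);
            float diff = std::fabs(halfDepth[sy][sx] - d);
            if (diff < bestDiff) { bestDiff = diff; best = halfColor[sy][sx]; }
        }
    return best;
}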
* The edges of Shadow Silouttes are light when they overlap another shadow. This is simply a performance issue on the 360 as PC games aiming for 30Hz have solved this for years.
This is easy to fix. We just need filterable 32 bit float texture formats. EVSM does this pretty much automatically, but requires more than 16 bits to work properly.
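For reference, the EVSM (exponential variance shadow map) resolve sebbbi mentions stores moments of an exponentially warped depth and converts them to visibility with a Chebyshev bound; filtering those moments without artifacts is what demands float32 formats. A sketch (the warp constant c is a tuning value):

C++:
#include <algorithm>
#include <cmath>

// EVSM resolve sketch. The shadow map stores filtered moments of the
// warped depth w = exp(c * depth); fp16 overflows/quantizes the warp,
// which is why filterable 32 bit float formats are needed.
float evsmVisibility(float m1, float m2, float receiverDepth, float c = 40.0f) {
    float w = std::exp(c * receiverDepth);     // warp the receiver depth
    if (w <= m1) return 1.0f;                  // in front of the occluders
    float variance = std::max(m2 - m1 * m1, 1e-6f);
    float d = w - m1;
    return variance / (variance + d * d);      // Chebyshev upper bound
}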
* Returning to the start of the map results in texture loading delays. I would think a platform with more memory could allow for the game to keep some textures in cache at certain check points to totally avoid this issue.
We do not have any game logic specific texture caching. Everything is streamed based on the visibility response from our virtual texture system. More performance would allow better compression technologies, and more memory would allow more prediction based streaming. We also need to even out the loading spikes over a longer time period (to keep 60 fps intact), and of course more performance headroom would allow us to load/process more virtual texture pages per frame. Of course a bigger cache would help as well, but do we really want to use more than maybe 200 MB or so to hold the textures? We can already have as much texture resolution as we want. The limiting factor is not the system memory, but the storage space (= downloadable game size). Too big a game, and we limit our customer base a lot (not everyone has a 10+ Mbit/s Internet connection).
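The broad shape of such a feedback-driven streamer looks something like this (a sketch under assumptions, not RedLynx's code): each frame the renderer reports which pages it wanted, and the loader services the misses under a per-frame budget so the cost is spread out:

C++:
#include <cstdint>
#include <unordered_set>
#include <vector>

struct PageCache {
    std::unordered_set<uint32_t> resident;   // pages currently in the cache
    bool contains(uint32_t id) const { return resident.count(id) != 0; }
    void insert(uint32_t id) { resident.insert(id); /* evict LRU if full */ }
};

// Called once per frame with the page IDs the GPU feedback pass wanted.
void updateVirtualTexture(const std::vector<uint32_t>& feedbackPageIds,
                          PageCache& cache, int maxLoadsPerFrame) {
    int loads = 0;
    for (uint32_t id : feedbackPageIds) {
        if (cache.contains(id)) continue;        // already resident
        if (loads++ >= maxLoadsPerFrame) break;  // spread spikes over frames
        // loadAndTranscodePage(id);             // hypothetical IO/decode step
        cache.insert(id);
    }
}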
Are there other low-hanging eyesores that stick out to you that you would like to address, and that would make a difference?
Our ambient lighting looks horrible. We need at least SSAO/SSDO to make it look better. Real time GI of course should be the main goal for everyone in the long run.
But bigger picture, what direction do you see yourself going in terms of technology? You guys seem pretty forward-thinking.
Fully dynamic game worlds are of course what we want to be doing, and user created content is important too. I would personally love to explore radically new rendering techniques (like voxels), but there's only so much extra time to spend on research that doesn't benefit the current needs (and might never be viable). Being able to freely carve the terrain, freely deform the objects, etc. would be a really good thing to have (especially for user created content). It opens up so many different options.
(next gen speculation)
I am sorry, I can't contribute to the next gen speculation.
 
I personally prefer smaller faster memory to larger and slower memory. EDRAM in Xbox 360 is one good example of this (it's very fast, but small, and still provides huge benefits).
Don't settle, you can have both! That's what cache hierarchies are for, after all ;) The 360 is a bit of a weird beast in that it's like a cache hierarchy with several levels missing. Most notably, both of the current consoles are missing several levels between fast DRAM and... optical ;) It has swung current console development into a bit of a weird world of spending far too much time optimizing exact memory footprints rather than increasing coherence and locality in general.
 
Thanks sebbbi, I am sure we all look forward to when you can comment more. More specifically I look forward to you guys showing off some killer tech at a future GDC. I am sure a lot of developers could learn valuable lessons from your gameplay design.
 
First Come, First Serve:

First poster choosing a track will be the track for this week's competition.

I am sure some would prefer a shorter, faster, easier track, so if you have one in mind, shout it out before Robert lays claim to "Inferno 3" ;)
 
Results are in:

Trials Challenge Week 7 (Sun Jun 10th - Sat Jun 16th): Oil Rig Run (Cutting Edge Category, Hard)

PHP:
1. Acert93     00 faults   0:50.7
2. Gerry       00 faults   0:54.5
3. Robert      00 faults   0:59.0
4. Onkl        02 faults   1:11.3
5. Andrew      03 faults   1:21.8
6. Scott_Arm   05 faults   1:48.6
7. Rotmm       16 faults   2:37.2
 
I'm slowly finding my way through the extreme tracks. Beat The Wreck last night with less than 250 crashes, I think. Good job, me. 90% of those crashes were on two obstacles. The rest I got through without too much problem. Tonight I tackle Way Of The Ninja. Playing the extreme tracks definitely makes you learn the bike physics. I'm learning to control the acceleration a lot better, instead of just gunning it.

I also discovered the reason why the dark tracks in Trials were giving me so many problems. Way back at Christmas, I took my xbox to my parents' place and switched the video output setting to "standard" reference level. When I got home, I forgot to switch it to "expanded" and I'm gaming on a computer monitor that expects the full RGB range. That means over the last six months I played Mass Effect 3, Max Payne 3, Trials HD and others all with the incorrect display settings. Well, at least I can see what's going on in those shadows now.
 
Hey, where's me (punkUser)? :) I only played a few rounds so I don't think I even got a no faults run :( but I did at least give it a shot this week IIRC!


Alex = Andrew :oops: I updated... I have been really busy and tired, sorry about that. I think the last couple are like that... for that, PLEASE choose this week's track! :!:
 
Alex = Andrew :oops: I updated... I have been really busy and tired, sorry about that. I think the last couple are like that... for that, PLEASE choose this week's track! :!:
Oh I see now - haha ok, no issues :) Now I just need to get sorted out to play the tracks at the right time instead of getting my "first time through just to pass it" scores posted :D

For this week's track... how about Archipelago? Have we done that one yet? It's pretty :)
 
Archipelago it is! How about we make the end time the 30th of June since we are starting this one late--it will give all the dads playing at 2am during a diaper change a chance to play :p
 
Archipelago it is! How about we make the end time the 30th of June since we are starting this one late--it will give all the dads playing at 2am during a diaper change a chance to play :p

Laid down a decent marker for this one - around 43.3s. I reckon someone is going to have to get down to 42.5s to win, perhaps even lower, and that'll be getting well into the top 1K times.
 
Sebbbi

I was wondering about the way ghosts are implemented in the game. You've mentioned that replays basically take the (saved) controller state and run it through the physics engine. I take it ghosts can't work that way, since it would be prohibitive. Do you pre-calculate the path of the marker in memory or something, and use that to display the "ghost"? I was just wondering whether the fact you use a simple marker rather than a true ghost is a reflection of whatever process you use.

And a follow-on from that: I've noticed that the number of ghosts you can play against is pretty variable; sometimes it's just 1, sometimes 2, and I've definitely had 3 displayed at some point. Is this limit somehow determined by how much memory is available, or is there something else going on?
 
And a follow-on from that: I've noticed that the number of ghosts you can play against is pretty variable; sometimes it's just 1, sometimes 2, and I've definitely had 3 displayed at some point. Is this limit somehow determined by how much memory is available, or is there something else going on?
By default the game shows you the ghosts of your friends that are ahead of you in the leaderboard. If you beat one of the ghosts, you know that you have improved your placement in the friend leaderboard. We never show more than 5 ghosts, because it would clutter your view and make the game harder. There are no technical limitations; technically we could show all of your friends (up to 100 ghosts) simultaneously.
I was wondering about the way ghosts are implemented in the game. You've mentioned that replays basically take the (saved) controller state and run it through the physics engine. I take it ghosts can't work that way, since it would be prohibitive. Do you pre-calculate the path of the marker in memory or something, and use that to display the "ghost"? I was just wondering whether the fact you use a simple marker rather than a true ghost is a reflection of whatever process you use.
Replays are recorded controller states. We bit pack them tightly and compress them (losslessly). The result is around 2 kB of data per replay (half an hour maximum). Replays can only be stored for a limited number of players, because we have a limited server storage quota. And since replay playback uses the whole game engine (it needs full physics simulation and the full game world to simulate), we cannot show a replay simultaneously while you play the game (it would double the physics processing cost and double the memory usage for the game world).
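As an illustration only (sebbbi doesn't give the actual layout), a Trials-style input state needs very few bits per physics tick, and long runs of unchanged ticks compress extremely well afterwards:

C++:
#include <cstdint>
#include <vector>

struct InputState {
    uint8_t throttle;   // quantized 0..15  (4 bits)
    uint8_t brake;      // quantized 0..15  (4 bits)
    int8_t  lean;       // quantized -7..7  (4-bit two's complement nibble)
};

// Pack one physics tick into 12 bits (stored here as 2 bytes for
// simplicity); a generic lossless pass (e.g. LZ) afterwards collapses
// the long runs of identical ticks, which is how roughly half an hour
// of input can end up around 2 kB.
void packTick(const InputState& s, std::vector<uint8_t>& out) {
    out.push_back(uint8_t((s.throttle << 4) | (s.brake & 0x0F)));
    out.push_back(uint8_t(s.lean & 0x0F));
}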

Ghost data, on the other hand, is stored directly in the leaderboard database row (utilizing several 64 bit integer database fields). It must fit in less than 200 bytes. The ghost data itself is a mathematical curve that we fit to the recorded bike position data points. We optimize the curve (minimizing control points while keeping the error as low as possible), and we bit pack and compress it (using our own compression method designed just for this purpose). I am actually quite surprised that we got the ghost quality as high as it is with just 200 bytes of data. Showing ghosts is pretty much free (calculate a position based on time and one curve segment), and because the ghosts are saved directly into the leaderboard we can have ghosts for all players.
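Playback from such a curve really is nearly free. A sketch assuming piecewise cubic segments in 2D (the actual curve type, dimensionality, and packing are RedLynx's own):

C++:
#include <vector>

struct Vec2 { float x, y; };

// One fitted segment: p(u) = a + b*u + c*u^2 + d*u^3 for u in [0, 1].
struct GhostSegment {
    float t0, t1;       // time range this segment covers
    Vec2 a, b, c, d;    // cubic coefficients per axis
};

Vec2 evalGhost(const std::vector<GhostSegment>& curve, float t) {
    // Few segments per run, so a linear scan is fine; evaluation itself
    // is a handful of multiply-adds (Horner form).
    for (const GhostSegment& s : curve) {
        if (t >= s.t0 && t <= s.t1) {
            float u = (t - s.t0) / (s.t1 - s.t0);
            return { ((s.d.x * u + s.c.x) * u + s.b.x) * u + s.a.x,
                     ((s.d.y * u + s.c.y) * u + s.b.y) * u + s.a.y };
        }
    }
    return { 0.0f, 0.0f };   // time outside the recorded run
}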
 
Thanks sebbbi. Great answer as always. I'm amazed you can get the ghost data packed into 200 bytes. To be honest I'm amazed you can fit the replay data into 2KB as well.
 
Yeah, it's a great optimisation. I like the ghost system for Motorstorm RC a lot too (trace of the exact recorded position on the track), but unfortunately more often than not I don't get any ghosts at all, probably because the game can't load them from the online database fast enough.
 