Crackdown tech: "shader simulates raytracing"

# Ambient occlusion textures - As part of the lighting setup, we run a radiosity simulation over the entire city, which works by firing 16 million million photons (that is, 16x10^12 photons) into the geometry. Their effect is then measured at approximately 32,000 million points. We save out the results as textures and apply these to the environment. We wrote our own custom tools to achieve this which, as well as being of much higher quality than those from our 3-D package, also gave a 100X reduction in computation time!

# Surface textures - To get all the detail, blending, and radiosity effects in the environment requires no less than six texture maps - four for colour, and two for lighting. Also, the blends between all those maps vary across the surface to provide more variation, which is a gigantic amount of data to set up on each surface. Again, our need for full precision control meant we produced custom tools for the job.

# Outlines on everything – This is a fundamental part of the game style and we wrote three versions before finding a method that was antialiased and had a reasonable render cost. We're achieving the effect by leveraging the Xenon's MSAA hardware in a new and unusual way.

# Fake window interiors – The technology we're using for windows is the next generation beyond parallax mapping. For every pixel of every window, the shader simulates raytracing through the blinds, through the interior window box, and finally into the room until it hits the floor or ceiling. And after that, the full lighting equation is run.

# Procedural Sky – The whole sky including the clouds is completely procedurally generated. The sky is blue and sunsets are red because of Rayleigh scattering - the effect of blue wavelengths of light being more likely to bounce off air molecules than red ones are. We run an approximation to the double integral needed to compute Rayleigh scattering - and we do it for every pixel of sky, for every frame. In addition, the shape of the clouds is procedurally generated by the GPU. Like the sky, all the lighting on the clouds is run by the GPU for every single pixel of sky, every single frame.
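
The Rayleigh item above is concrete enough to sketch. Below is a rough, CPU-side Python illustration (not Realtime Worlds' shader) of the kind of single-scattering approximation a per-pixel sky model typically runs: the outer integral along the view ray becomes a short fixed-step sum, and the inner integral (optical depth toward the sun) another one. The constants, the exponential atmosphere, and all function names here are generic assumptions for illustration only.

```python
import math

# Illustrative constants (not from the game): wavelength-dependent Rayleigh
# scattering coefficients for R, G, B at sea level, and an 8 km scale height.
BETA_R = (5.8e-6, 13.5e-6, 33.1e-6)   # 1/m, blue scatters the most
SCALE_HEIGHT = 8000.0                  # m
EARTH_R = 6_371_000.0                  # m
ATMOS_R = 6_471_000.0                  # m (top of atmosphere)

def density(height):
    """Relative air density, exponential falloff with altitude."""
    return math.exp(-max(height, 0.0) / SCALE_HEIGHT)

def ray_sphere_exit(origin, direction, radius):
    """Distance along the ray to where it leaves a sphere centred on the planet core."""
    b = sum(o * d for o, d in zip(origin, direction))
    c = sum(o * o for o in origin) - radius * radius
    disc = b * b - c
    return -b + math.sqrt(max(disc, 0.0))

def optical_depth(point, sun_dir, steps=8):
    """Inner integral: how much air the sunlight crosses to reach `point`."""
    length = ray_sphere_exit(point, sun_dir, ATMOS_R)
    dt = length / steps
    depth = 0.0
    for i in range(steps):
        p = [point[k] + sun_dir[k] * (i + 0.5) * dt for k in range(3)]
        h = math.sqrt(sum(x * x for x in p)) - EARTH_R
        depth += density(h) * dt
    return depth

def sky_colour(view_dir, sun_dir, steps=16):
    """Outer integral: accumulate in-scattered sunlight along the view ray."""
    eye = (0.0, EARTH_R + 2.0, 0.0)            # camera ~2 m above the ground
    length = ray_sphere_exit(eye, view_dir, ATMOS_R)
    dt = length / steps
    cos_theta = sum(v * s for v, s in zip(view_dir, sun_dir))
    phase = 3.0 / (16.0 * math.pi) * (1.0 + cos_theta * cos_theta)  # Rayleigh phase
    colour = [0.0, 0.0, 0.0]
    view_depth = 0.0
    for i in range(steps):
        p = [eye[k] + view_dir[k] * (i + 0.5) * dt for k in range(3)]
        h = math.sqrt(sum(x * x for x in p)) - EARTH_R
        d = density(h) * dt
        view_depth += d
        sun_depth = optical_depth(p, sun_dir)
        for c in range(3):
            transmittance = math.exp(-BETA_R[c] * (view_depth + sun_depth))
            colour[c] += BETA_R[c] * phase * d * transmittance
    return colour  # a real renderer would multiply by sun intensity and tone-map

# Overhead view at midday is dominated by the blue channel; a horizontal view
# at a low sun angle shifts toward red as the blue light is scattered away.
print(sky_colour((0.0, 1.0, 0.0), (0.0, 1.0, 0.0)))
```

Running something of this shape per pixel, per frame is plausible on a GPU because the loops are short and fully parallel across pixels.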


Our decision to go with our impressive deferred lighting technique, allowing thousands of lights to be visible across the world at any given time, was the making of the night-time vista, and this was the icing on the cake that we think helped people to truly appreciate the decisions we'd made. This can only be hinted at in a screenshot, but to see the entire city with thousands of streetlights off into the distance, thousands of characters walking around, and hundreds of moving vehicles with headlights really is an impressive sight.
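
Since the paragraph above leans on deferred lighting for its "thousands of lights" claim, here is a minimal sketch of why that scales: geometry is written once into a G-buffer, and each light then only shades the pixels it can actually reach, so cost grows with light coverage rather than with lights times geometry. The G-buffer layout and falloff model below are generic assumptions, not Crackdown's actual renderer.

```python
from dataclasses import dataclass

@dataclass
class GBufferPixel:
    # One pass over the scene geometry writes these per pixel; the lighting
    # pass never needs the original meshes again.
    position: tuple   # world-space position
    normal: tuple     # world-space normal
    albedo: tuple     # surface colour

@dataclass
class PointLight:
    position: tuple
    colour: tuple
    radius: float     # beyond this the light contributes nothing

def shade(gbuffer, lights):
    """Second pass: accumulate every light into every pixel it can reach."""
    frame = [[0.0, 0.0, 0.0] for _ in gbuffer]
    for light in lights:
        for i, px in enumerate(gbuffer):
            to_light = [l - p for l, p in zip(light.position, px.position)]
            dist = sum(d * d for d in to_light) ** 0.5
            if dist > light.radius:
                continue  # a real renderer skips these pixels via screen-space light volumes
            n_dot_l = max(0.0, sum(n * d / dist for n, d in zip(px.normal, to_light)))
            falloff = 1.0 - dist / light.radius
            for c in range(3):
                frame[i][c] += px.albedo[c] * light.colour[c] * n_dot_l * falloff
    return frame

# Two pixels, two streetlights: cost grows with the pixels each light covers,
# not with (lights x scene geometry), which is what makes thousands of small
# lights affordable.
gbuffer = [
    GBufferPixel((0, 0, 0), (0, 1, 0), (0.8, 0.8, 0.8)),
    GBufferPixel((50, 0, 0), (0, 1, 0), (0.8, 0.8, 0.8)),
]
lights = [
    PointLight((0, 5, 0), (1.0, 0.9, 0.6), 20.0),
    PointLight((50, 5, 0), (1.0, 0.9, 0.6), 20.0),
]
print(shade(gbuffer, lights))
```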

Well-designed lighting was going to be the key to the success of the Crackdown look, but after some months experimenting with the basic system, it was evident that it wasn't going to cut it. The exaggerated palette was very sensitive to even the slightest lighting changes, and on occasion this unfortunately meant we submitted screens from builds with some pretty garish colouring (our orange and purple extravaganza at X05 was not one of our finest moments).

This is very interesting... a new use of the Xenon MSAA hardware? They are talking about the eDRAM here, I suppose. And what about the other techs? They seem impressive; how they can do a simulation of 16x10^12 photons is beyond me.
Kudos to the devs.
 
This is very interesting... a new use of the Xenon MSAA hardware? They are talking about the eDRAM here, I suppose. And what about the other techs? They seem impressive; how they can do a simulation of 16x10^12 photons is beyond me.
Kudos to the devs.

I may be mistaken, but from the text it looks like that simulation is done offline to generate precalculated occlusion maps. In terms of real-time stuff, the FSAA outlines and the window interior stuff are probably a more impressive display of tech.
 
Uh, the radiosity solution he's talking about is clearly a pre-process done at dev time and stored on the disc. This isn't exactly something new; games have done this since Quake 2 (Quake 1 didn't use radiosity in its lightmap precomputation, just raytracing).
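
For what it's worth, the offline bake the posts above are describing usually boils down to something like the sketch below: for each lightmap or occlusion sample point, fire a bunch of rays (or photons, in the radiosity case) and store how much light gets through, then write the result into a texture. This toy Python version uses simple hemisphere rays against sphere occluders; it is only meant to show the shape of the pre-process, not the actual Crackdown tool.

```python
import math
import random

def bake_ambient_occlusion(point, normal, occluders, samples=256, max_dist=10.0):
    """Offline AO estimate at one sample point: fraction of hemisphere rays
    that escape without hitting an occluder (1.0 = fully open, 0.0 = buried).

    occluders -- list of (centre, radius) spheres, a stand-in for real scene geometry
    """
    unoccluded = 0
    for _ in range(samples):
        # Pick a random direction on the hemisphere around the normal.
        d = [random.gauss(0.0, 1.0) for _ in range(3)]
        length = math.sqrt(sum(x * x for x in d))
        d = [x / length for x in d]
        if sum(a * b for a, b in zip(d, normal)) < 0.0:
            d = [-x for x in d]
        if not any(ray_hits_sphere(point, d, c, r, max_dist) for c, r in occluders):
            unoccluded += 1
    return unoccluded / samples

def ray_hits_sphere(origin, direction, centre, radius, max_dist):
    """True if the ray hits the sphere within max_dist."""
    oc = [o - c for o, c in zip(origin, centre)]
    b = sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return False
    t = -b - math.sqrt(disc)
    return 0.0 < t < max_dist

# A ground point next to a big sphere comes out noticeably darker than one
# out in the open; the results would be saved into a texture and shipped.
wall = [((0.0, 1.5, 1.0), 1.5)]
print(bake_ambient_occlusion((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), wall))
print(bake_ambient_occlusion((20.0, 0.0, 0.0), (0.0, 1.0, 0.0), wall))
```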
 
I may be mistaken, but from the text it looks like that simulation is done offline to generate precalculated occlusion maps. In terms of real-time stuff, the FSAA outlines and the window interior stuff are probably a more impressive display of tech.

Agreed.
The realtime part is "the shader simulates raytracing through the blinds, through the interior window box, and finally into the room until it hits the floor or ceiling. And after that, the full lighting equation is run."
I'm very curious about the unusual way of using the "Xenos MSAA hardware"...
 
The realtime part is "the shader simulates raytracing through the blinds, through the interior window box, and finally into the room until it hits the floor or ceiling. And after that, the full lighting equation is run."

I believe this part probably refers to a parallax mapping (or virtual displacement mapping) technique like steep parallax mapping.
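
For anyone who hasn't seen it, steep parallax mapping (the technique suggested above) marches the view ray through a heightfield in small fixed steps until it dips below the surface, which is what produces the "raytraced" look for shallow relief. The sketch below is a generic CPU-side illustration of that march, assuming a single-channel height function; it is not taken from the game.

```python
def steep_parallax_uv(u, v, view_dir, height_map, depth_scale=0.1, steps=32):
    """March a view ray through a heightfield and return the UV it first hits.

    view_dir    -- (x, y, z) in tangent space, z pointing out of the surface
    height_map  -- callable (u, v) -> height in [0, 1], 1 = surface, 0 = deepest
    depth_scale -- how deep the relief is in texture-space units (assumed value)
    """
    # Each step moves a fixed fraction deeper into the layer...
    layer_step = 1.0 / steps
    # ...and shifts the UV by the matching amount of the projected view vector.
    du = -view_dir[0] / view_dir[2] * depth_scale / steps
    dv = -view_dir[1] / view_dir[2] * depth_scale / steps

    ray_depth = 0.0
    while ray_depth < 1.0:
        surface_depth = 1.0 - height_map(u, v)   # 0 at the top of the relief
        if ray_depth >= surface_depth:
            return u, v        # the ray has gone below the heightfield: hit
        u, v = u + du, v + dv
        ray_depth += layer_step
    return u, v                # ray left the layer without hitting anything

# Toy heightfield: a bump in the middle of the texture.
bump = lambda u, v: max(0.0, 1.0 - 8.0 * ((u - 0.5) ** 2 + (v - 0.5) ** 2))
# A glancing view shifts the sampled texel, which is the parallax effect;
# a head-on view (0, 0, 1) leaves the UV unchanged.
print(steep_parallax_uv(0.45, 0.5, (0.6, 0.0, 0.8), bump))
print(steep_parallax_uv(0.45, 0.5, (0.0, 0.0, 1.0), bump))
```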
 
This is just about a dev finding a suitable MSAA solution for their project (for the outline effect), because in their previous tries ("...until it had a reasonable render cost") the performance was too low, so they had to use a different way to cope with the Xenon MSAA hardware.

Nothing special at all.
 
I agree with Jesus; the proper tools for tiling probably just came too late in the development process to be used.
So I guess they've found some performance-friendly technique to achieve some kind of AA.

Anyway, I hope this thread will be interesting!
 
This is just about a dev finding a suitable MSAA solution for their project (for the outline effect), because in their previous tries ("...until it had a reasonable render cost") the performance was too low, so they had to use a different way to cope with the Xenon MSAA hardware.

Nothing special at all.

I suggest you read it again (multiple times if necessary), because your interpretation is flat out wrong (either fanboi goggles, or English is not your native language). The effect is CREATED by leveraging the MSAA hardware, as they explicitly state, not just antialiased. Furthermore, they weren't looking for an MSAA solution (hardware versus... what, exactly, Jesus?) for the outline; they were looking for an outline solution, and one of the prerequisites was antialiasing. The other was a reasonable rendering cost (it's going to be on every object, so spending an order of magnitude more on it than necessary isn't very reasonable, right?).

I think you have some misconceptions about MSAA in general (MSAA "solutions"? Coping with MSAA hardware?). I think the term you're looking for is tiling (tiling solutions, tiling hardware), not MSAA.

Otherwise, Jesus and liolio, I remember nothing about tiling, performance hits related to getting tiling into the engine, finding an unusual AA method, etc. Not in this particular posting.
 
Hmmm... the MSAA thing vaguely makes sense; after all, MSAA does treat edge and non-edge pixels differently...

As for the "raytraced room internals", I don't think it has anything to do with so-called parallax mapping techniques. I would guess there's a cubemap associated with each window; the ray hitting the window pixel is "traced" inside the cubemap to fetch a texel. Of course, you'd use the same cubemap for all the windows on a building, maybe for many buildings - you'd still get that "something is moving behind the glass" parallax effect.
 
As for the "raytraced room internals", I don't think it has anything to do with so-called parallax mapping techniques. I would guess there's a cubemap associated with each window; the ray hitting the window pixel is "traced" inside the cubemap to fetch a texel. Of course, you'd use the same cubemap for all the windows on a building, maybe for many buildings - you'd still get that "something is moving behind the glass" parallax effect.
Yeah, from the way they word things and the fact that they are deferring lighting, I wouldn't be surprised if they're doing something like creating a deferred renderer's "giant buffer" in the form of a cubemap (which can be precalculated offline) and putting that on windows, so that instead of rendering the actual interior, it just renders the window using this cubemap, and you can get any dynamic lighting info you want thrown in without having to render all the geometry that exists inside. The view direction is simply used to look up into the cubemap (maybe with some refraction as well).

Cute idea now that I think about it, but I'd only consider it if geometry was a major limiting factor.
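
Both of the guesses above amount to the same trick (sometimes called interior mapping): treat the space behind the window as a unit box, intersect the view ray with that box, and use the hit position to look up a pre-rendered cubemap instead of drawing real interior geometry. Here is a minimal sketch of just the ray/box intersection and the lookup direction; the room-space layout and names are assumptions, and the cubemap fetch itself is stubbed out.

```python
def interior_lookup(entry_point, view_dir):
    """Intersect a view ray with a unit room box behind the window and return
    the direction to use for a cubemap fetch.

    entry_point -- (x, y, z) where the ray crosses the window plane, in a room
                   space where the box spans [0, 1] on every axis
    view_dir    -- normalised ray direction heading into the room
    """
    # Slab test: for each axis, distance until the ray hits the wall it is heading toward.
    t_exit = float("inf")
    for axis in range(3):
        d = view_dir[axis]
        if abs(d) < 1e-8:
            continue                       # ray parallel to these two walls
        wall = 1.0 if d > 0.0 else 0.0
        t = (wall - entry_point[axis]) / d
        t_exit = min(t_exit, t)

    hit = [entry_point[i] + view_dir[i] * t_exit for i in range(3)]

    # Direction from the room centre to the hit point; a shader would feed this
    # straight into a cubemap sample (and could then run its lighting on the texel).
    centre = (0.5, 0.5, 0.5)
    return [h - c for h, c in zip(hit, centre)]

# Looking straight in through the middle of the window hits the back wall;
# a glancing ray hits a side wall instead, which is where the parallax comes from.
print(interior_lookup((0.5, 0.5, 0.0), (0.0, 0.0, 1.0)))
print(interior_lookup((0.9, 0.5, 0.0), (0.707, 0.0, 0.707)))
```

The same cubemap can be shared across all the windows of a building, as suggested above, since the lookup only depends on where the ray enters and where it would exit a generic room.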
 
To get all the detail, blending, and radiosity effects in the environment requires no less than six texture maps

The whole sky including the clouds is completely procedurally generated

as well as being of much higher quality than those from our 3-D package, also gave a 100X reduction in computation time!

For every pixel of every window, the shader simulates raytracing through the blinds

Great stuff! On the texture maps, did he mean 6 maps before being packaged for the final game, or are those required to remain separate (i.e. dynamic) for the final game? How many texture layers are the latest games pushing now?

My only complaint is that, personally, I find the art direction bleh, but I see where they are going with it. Well... two complaints... framerate = :cry:

Hopefully they iron out the frame rate before it ships.


Cute idea now that I think about it, but I'd only consider it if geometry was a major limiting factor.
Isn't that the point though? If it is using a texture-based replacement then geometry is a non-factor, and they can add detail that would otherwise not be possible alongside everything else in the frame/world. Personally, unless the interior is important, I'd say save the memory for increased world detail elsewhere, but like you said it is interesting tech and will likely find good use down the road too.
 
Isn't that the point though?
Well, sure, but I was just saying that geometry isn't always your biggest problem. It's also one of those things where I get the idea that there are going to be cases where it really isn't worth it.

Games like Company of Heroes have 8 textures on some objects. Certainly 6 is doable.
I'm quite used to seeing 6-8 on most of our meshes, though we have a few objects where we go up to 14.
 
I'm quite used to seeing 6-8 on most of our meshes, though we have a few objects where we go up to 14.

:oops:

14. Live. Layers!... What system are you developing for? And why so many texture layers? Doesn't it kill render time with the bandwidth requirements, not to mention RAM storage?
 
And why so many texture layers? Doesn't it kill render time with the bandwidth requirements, not to mention RAM storage?

We recently did tests to measure the effect of more textures on performance on middle-to-high-end PC cards (6600-7900GT), precisely to decide if we can go wild with textures. If you keep the bandwidth constant (that is, you use e.g. 1 texture at 1024x1024 vs. 2 textures at 1024x512 vs. 4 textures at 512x512), you get only a small degradation of performance up to 8 textures, and a graceful degradation from there to 16 - definitely something you can live with if the effect is worth it. We tested with very simple shaders; with longer shaders you'd generally be able to use more textures, as the calculations will hide the hit from the texture fetches.

Raising the bandwidth kills you instantly.

What textures might you need, you ask? One or two diffuse maps, a normal map, gloss exponent/mask maps, self-illumination masks, ambient occlusion maps, player colorization masks and, of course, an envmap and one or two shadowmap textures on top of it. That's before you get to any fancy stuff like PRT.
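
The "keep the bandwidth constant" comparison above is easy to make concrete: the three configurations fetch exactly the same number of texels, so the extra cost of more textures in that test is in the additional fetch instructions and cache behaviour rather than raw bandwidth. A quick back-of-the-envelope check (assuming uncompressed 4-bytes-per-texel textures and ignoring mipmaps):

```python
# Rough texel/byte budget for the three configurations mentioned above,
# assuming uncompressed 32-bit textures (4 bytes per texel) and no mipmaps.
BYTES_PER_TEXEL = 4

configs = {
    "1 x 1024x1024": [(1024, 1024)],
    "2 x 1024x512":  [(1024, 512)] * 2,
    "4 x 512x512":   [(512, 512)] * 4,
}

for name, textures in configs.items():
    texels = sum(w * h for w, h in textures)
    mib = texels * BYTES_PER_TEXEL / 2**20
    print(f"{name}: {texels:,} texels, {mib:.1f} MiB")

# All three print 1,048,576 texels (4.0 MiB): same bandwidth, so going from
# one map to four in this test mostly costs extra fetch instructions.
```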
 
Are opacity maps used in games as well, to blend two or more diffuse maps? Or is there another method in realtime rendering? And if you have normal maps, do you have additional bump maps as well?
 
Yeah, from the way they word things and the fact that they are deferring lighting, I wouldn't be surprised if they're doing something like creating a deferred renderer's "giant buffer" in the form of a cubemap (which can be precalculated offline) and putting that on windows, so that instead of rendering the actual interior, it just renders the window using this cubemap, and you can get any dynamic lighting info you want thrown in without having to render all the geometry that exists inside. The view direction is simply used to look up into the cubemap (maybe with some refraction as well).

Cute idea now that I think about it, but I'd only consider it if geometry was a major limiting factor.

Maybe they went this way because they had some processing power left, but didn't have enough time, resources or budget to actually model the interiors for each window in each building...

(I think it's hard to believe it's geometry that's holding this up, because the buildings are fairly detailed and there are a lot of them on screen at once... along with lots of people and cars on the streets...)
 