Killzone 2 technology discussion thread (renamed)


Terarrim

Newcomer
Here is the post I created on psforums:

Posted by doc evil in the games thread:


Quote:
The Develop Conference will be taking place from July 24th to the 26th in Brighton, UK. Even though this event will be taking place after E3, we'll get a chance to hear about some of the development techniques behind Killzone for the PS3.

Deferred rendering in Killzone 2
Michal Valient, Guerrilla-Games

Next generation gaming brought high resolutions, very complex environments and large textures to our living rooms. With virtually every asset being inflated, it's hard to use traditional forward rendering and hope for rich, dynamic environments with extensive dynamic lighting. Deferred rendering, on the other hand, has traditionally been described as a nice technique for rendering scenes with many dynamic lights that unfortunately suffers from fill-rate problems and a lack of anti-aliasing, and very few games that use it have been published.

In this talk, we will discuss our approach to facing this challenge and how we designed a deferred rendering engine that uses multi-sampled anti-aliasing (MSAA). We will give an in-depth description of each individual stage of our real-time rendering pipeline and the main ingredients of our lighting, post-processing and data management. We'll show how we utilize PS3's SPUs for fast rendering of a large set of primitives, parallel processing of geometry and computation of indirect lighting. We will also describe our optimizations of the lighting and our parallel split (cascaded) shadow map algorithm for faster and stable MSAA output.

Take Away
The session will provide a detailed overview and optimizations of a modern rendering engine and parallel processing. Many of the topics are applicable to various gaming platforms.
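
(For anyone who, like me, hadn't come across the technique before: a deferred renderer first writes surface attributes such as colour, normals and depth out to a set of buffers, the G-buffer, and then does all the lighting in later screen-space passes. A minimal sketch of what such a geometry pass might look like, with a made-up buffer layout and names that have nothing to do with Guerrilla's actual engine:)

Code:
// Minimal sketch of a deferred geometry pass writing a G-buffer via MRTs.
// Hypothetical layout and names; not Guerrilla's actual attribute buffers.
struct GBufferOut
{
    float4 albedo : SV_Target0;   // diffuse colour
    float4 normal : SV_Target1;   // world-space normal packed into [0,1]
    float4 params : SV_Target2;   // spare channels, e.g. specular power / material ID
};

Texture2D    gDiffuseMap;
SamplerState gLinearSampler;

GBufferOut GeometryPS(float3 worldNormal : TEXCOORD0,
                      float2 uv          : TEXCOORD1)
{
    GBufferOut o;
    o.albedo = gDiffuseMap.Sample(gLinearSampler, uv);         // store material colour
    o.normal = float4(normalize(worldNormal) * 0.5 + 0.5, 0);  // pack normal into [0,1]
    o.params = float4(0.25, 0, 0, 0);                          // placeholder material data
    return o;
}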

OK, why am I re-posting this, you may ask.

Answer: I did some digging on deferred rendering, as I had never heard of this technique.

Basically, it looks like until now there hasn't been enough power, or the GPUs have been too restricted, to do deferred rendering with MSAA.

Here is a quote on this:


Quote:
Don't have any screenshots; I abandoned my deferred renderer a couple of years ago.
But I recall that the artifact that bothered me most was that sometimes it looked like there was no AA, and most of the time the "edges" were enhanced, like a Sobel edge detection filter, albeit more subtle.
I.e. the jaggy edges were enhanced, which defeats the purpose of AA!

Looking further down I stumbled on this:


Quote:
So if I use only a GeForce 8800, I can program the resolve function, is that it? Then I can have an antialiased resolve function for the color buffer and no anti-aliasing for the positions and normals... Is that correct?

Can I do it right now? What function do I call (on XP/OpenGL)?

Or do you mean that I have to render to a 4x buffer (for 4x AA) and then code the AA myself?

The answer was:


Quote:
Custom resolves are supported on G80 and R600 (they are required for D3D10). In D3D10 you can load the different elements in an MSAAed buffer using the "Load" shader function with an extra integer parameter for which color index. Note that *all* of your attribute buffers should technically be multisampled (this is also supported on G80/R600).

Also note that MRT's are a totally separate issue here... even if you render one RT at a time, you will still have a problem with MSAA + deferred shading. MRT's are *just a performance optimization* - always remember that.

I assume the G80 OpenGL extensions expose something similar, but I'd have to check them out.

What you need to do for deferred shading is to load all of the different indices, resolve the lighting function, then average the results for all samples. So for 4x MSAA something like (in HLSL):
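
(The HLSL that followed in that post didn't survive the copy and paste, so here's a rough reconstruction of what such a per-sample resolve might look like; the buffer names and the single directional light are made up for illustration, and this is not the quoted poster's actual code.)

Code:
// Per-sample deferred lighting resolve for 4x MSAA, in D3D10-style HLSL.
// Hypothetical G-buffer names; a single directional light for brevity.
Texture2DMS<float4, 4> gAlbedoMS;   // multisampled albedo buffer
Texture2DMS<float4, 4> gNormalMS;   // multisampled packed world-space normals
float3 gLightDir;                   // direction towards the light
float3 gLightColor;

float4 ResolvePS(float4 svPos : SV_Position) : SV_Target
{
    int2   pos = int2(svPos.xy);
    float3 sum = 0;

    [unroll]
    for (int i = 0; i < 4; ++i)                        // one iteration per MSAA sample
    {
        float3 albedo = gAlbedoMS.Load(pos, i).rgb;    // Load() takes the sample index
        float3 normal = normalize(gNormalMS.Load(pos, i).xyz * 2 - 1);
        sum += albedo * gLightColor * saturate(dot(normal, gLightDir));
    }
    return float4(sum * 0.25, 1);                      // average the shaded samples
}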

So if we go back to the original Killzone 2 dev post, we can speculate that either:

the RSX has some of the Nvidia G80's components,

or

the Cell is able to emulate DX10 functionality (or do something completely new) for this type of thing to work.

http://www.gamedev.net/community/for...opic_id=447959

I thought this deserved a new thread. Anyone agree/disagree, or do you think I'm on to something?
 
It's interesting, but RSX isn't G80; GG are just very talented developers who have found a way to do it on DX9 hardware, maybe or maybe not with Cell's help. What about HDR? If they are doing MSAA, doesn't that mean they can't be using HDR, or at least FP HDR? Or maybe they're using a completely custom colour space and renderer that allows them to have deferred rendering + AA + HDR?
 
We’ll show how we utilize PS3’s SPUs for fast rendering of a large set of primitives, parallel processing of geometry and computation of indirect lighting

Would the above lighting refer to normal lighting or HDR?

Also, does anyone have any comments on the SPEs being used for geometry (in parallel) and rendering a large set of primitives?
 
IIRC, from the Killzone thread in the general console forum, it was suggested that using MSAA with deferred shading is likely enabled by the absence of API restrictions that might exist with DX9 in the PC space, rather than anything on a hardware level. I'm working off memory, though, so I'm open to correction on that.

Would the above lighting refer to normal lighting or HDR?

Doesn't indicate one over the other. Indirect lighting accounts for the contribution of light reflected/bounced by surfaces, and not just the direct contribution of light sources..but whether your dynamic range is high or not is a separate matter.


Also, does anyone have any comments on the SPEs being used for geometry (in parallel) and rendering a large set of primitives?

It sounds like the stuff presented in the Edge tools (in fact, IIRC the GDC Killzone demo was at least partially pitched as a visual 'face' of Edge, so that would make sense). It sounds like they may also be doing some vertex shading on the CPU, beyond culling etc.

Most of this is being discussed in the other Killzone thread, really...
 
Doesn't indicate one over the other. Indirect lighting accounts for the contribution of light reflected/bounced by surfaces, and not just the direct contribution of light sources..but whether your dynamic range is high or not is a separate matter.
They ought to be calculating indirect illumination from HDR values, as that's a principal factor in indirect illumination. Walls have to be reflecting very bright light to illuminate other surfaces by a detectable amount. If the engine is lighting in HDR, indirect illumination is bound to be HDR also. TBH I can't imagine an indirect lighting solver that isn't HDR! Does such a thing exist?
 
If you could point me in the right direction of the Killzone 2 discussion on this matter I would much appreciate it. Most of what I have seen in other threads didn't seem to have this information.

I wonder whether the Cell is doing all the geometry (it says parallel processing, so that would suggest at least two SPEs working on it) and using the RSX for the paint job. Although the mind boggles at the thought of the Cell doing primitives and all the geometry on top of everything else it needs to do. (It must be a big physics engine if what we heard about the destructible environments is still in!)
 
They ought to be calculating indirect illumination from HDR values, as that's a principal factor in indirect illumination. Walls have to be reflecting very bright light to illuminate other surfaces by a detectable amount. If the engine is lighting in HDR, indirect illumination is bound to be HDR also. TBH I can't imagine an indirect lighting solver that isn't HDR! Does such a thing exist?


Is that an issue of computational precision versus preserving dynamic range in your rendered image (via higher precision surfaces or other means)? We had higher precision in shader computation before, or separately from, having higher-precision surfaces, so there may be a distinction to be made there. I'm just guessing really, though...
 
If you could point me in the right direction of the Killzone 2 discussion on this matter I would much appreciate it. Most of what I have seen in other threads didn't seem to have this information.

I wonder whether the Cell is doing all the geometry (it says parallel processing, so that would suggest at least two SPEs working on it) and using the RSX for the paint job. Although the mind boggles at the thought of the Cell doing primitives and all the geometry on top of everything else it needs to do. (It must be a big physics engine if what we heard about the destructible environments is still in!)

Using Cell for vertex shading and only RSX's pixel shaders? That would get you really far!
 
I'm sure they're not idling RSX's vertex shaders. But they may be supplementing them, and more likely still, doing geometry manipulation that would be difficult or impossible to do on the vertex shaders. Some of the descriptions of the video shown at GDC might suggest where Cell could be playing a role in geometry manipulation. e.g.

as well as another sequence that showed shafts of light coming in through holes being blown in a structure.

If that's truly dynamic/physically modelled, it might suggest that Cell is altering model geometry..it possibly makes more sense to do that on a CPU if there is creation and destruction of geometry involved.

(As an aside, it sounds like a demo I was considering trying to implement a while ago, whereby you'd have a box and could riddle it with bullet holes that were actually holes rather than decals, and that would update the lighting model inside the box - all dynamically, nothing prebaked. I'm not sure if anyone's implemented something like that in a game yet at the level I was thinking of, but I always thought it would be cool for something like an in-vehicle sequence where it's fairly dark inside, and you're suddenly under attack, and can see your vehicle's armour deforming and opening up holes, with the outside light streaming in. Come to think of it, if you treat stuff like bullet holes as new light sources - which you may not, but it was one way I was considering - a deferred renderer might come in handy, since the number of light sources you have to handle could scale dramatically in that sort of context)

edit - could also make good use of a nice, dynamic indirect lighting engine. A 'bullet hole lightsource' would have the effect of a spotlight, directly illuminating only a small portion of interior (to the box, vehicle, whatever) surfaces..but for realism you'd want that light to bounce around the rest of the interior.
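
(To illustrate why a deferred renderer helps with that kind of scaling: once the G-buffer is laid down, each additional light is just another screen-space pass blended additively into the light buffer, rather than another light looped over in every material shader. A hypothetical point-light pass, with invented names, might look roughly like this:)

Code:
// One additive point-light pass over the G-buffer (hypothetical names).
// Adding another light means drawing another light volume/quad, not
// touching any of the scene's material shaders.
Texture2D    gAlbedo;
Texture2D    gNormal;
Texture2D    gWorldPos;     // assumes world position is stored (or reconstructed from depth)
SamplerState gPointSampler;

float3 gLightPos;
float3 gLightColor;
float  gLightRadius;

float4 PointLightPS(float2 uv : TEXCOORD0) : SV_Target
{
    float3 albedo = gAlbedo.Sample(gPointSampler, uv).rgb;
    float3 normal = normalize(gNormal.Sample(gPointSampler, uv).xyz * 2 - 1);
    float3 pos    = gWorldPos.Sample(gPointSampler, uv).xyz;

    float3 toLight = gLightPos - pos;
    float  dist    = length(toLight);
    float  atten   = saturate(1 - dist / gLightRadius);               // simple falloff
    float  ndotl   = saturate(dot(normal, toLight / max(dist, 1e-4)));

    // Output is blended additively into the light accumulation buffer.
    return float4(albedo * gLightColor * ndotl * atten * atten, 1);
}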
 
Is that an issue of computational precision versus preserving dynamic range in your rendered image (via higher precision surfaces or other means)?
Not sure what you mean. If I give an example, you can work out which of your things I'm talking about! If there's a room with a floor rendered at mid-grey with direct illumination, and a light source hitting the back wall, then for the reflected light from the wall to affect the floor by making it perhaps 10% brighter, the actual light reaching the back wall might have to be 100x brighter than the light directly illuminating the floor (depending on attenuation, surface colour, etc.). The subtle lighting effects of indirect lighting need to be calculated from very high relative source values being reflected. A light source of 10x the brightness of a directly illuminated surface isn't going to have much noticeable reflectance. This is what I know from photography anyhow. I don't know if these can be hacked down to a narrow dynamic range and remain convincing. Still, when you've got HDR tone mapping being applied to the direct lighting, it makes no sense (to me anyway) to render the indirect illumination in a lower dynamic range.
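
(A back-of-the-envelope version of that reasoning, with purely illustrative numbers: treating the wall as a diffuse reflector with albedo $\rho$ that subtends a solid angle $\Omega$ at the floor, and ignoring the geometry terms, the bounce contribution is roughly

$$E_{\text{bounce}} \approx \rho \, E_{\text{wall}} \, \frac{\Omega}{\pi}.$$

With $\rho = 0.5$ and $\Omega/\pi \approx 0.06$, getting a bounce worth 10% of a direct floor level of 1 unit already needs $E_{\text{wall}} \approx 3$ units, and with darker surfaces or a smaller subtended angle that climbs quickly towards the 100x figure above. Either way, the source values feeding the bounce sit well outside an 8-bit [0,1] range.)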
 
Not sure what you mean. If I give an example, you can work out which of your things I'm talking about! If there's a room with a floor rendered at mid-grey with direct illumination, and a light source hitting the back wall, then for the reflected light from the wall to affect the floor by making it perhaps 10% brighter, the actual light reaching the back wall might have to be 100x brighter than the light directly illuminating the floor (depending on attenuation, surface colour, etc.). The subtle lighting effects of indirect lighting need to be calculated from very high relative source values being reflected. A light source of 10x the brightness of a directly illuminated surface isn't going to have much noticeable reflectance. This is what I know from photography anyhow. I don't know if these can be hacked down to a narrow dynamic range and remain convincing. Still, when you've got HDR tone mapping being applied to the direct lighting, it makes no sense (to me anyway) to render the indirect illumination in a lower dynamic range.


I guess what I was wondering was whether render surface precision matters here versus the precision of the internal shader computation. Light source values aren't being rendered out, after all; they're just used in the computation of our pixel values. I don't know enough here to say... perhaps someone will clarify.
 
I guess what I was wondering was whether render surface precision matters here versus the precision of the internal shader computation. Light source values aren't being rendered out, after all; they're just used in the computation of our pixel values. I don't know enough here to say... perhaps someone will clarify.

I've got some more information for you and Shifty about the lighting engine the Killzone 2 devs are using:

Taken from the PS3 forums, posted by Epix:

FYI:

GG has been working with Second Intention on its indirect lighting solution called "World Light" for Killzone 2.

http://www.secondintention.com/a10
http://secondintention.com/portfolio/worldlight/

Basically, looking at those websites you have the following info:

New client: Guerrilla

We have recently completed a research contract for Guerrilla, the developers of Killzone.

Following the success of the initial undertaking, we are now working on a longer-term graphics research and development effort for the studio. We're delighted to be working on such an interesting and challenging project.

and regarding Worldlight:

One of our more recent research projects is a highly realistic lighting and shadowing solution for complex outdoor environments.

Worldlight uses a diffuse PRT solution to provide soft light and shadows resulting from direct and indirect illumination by an HDR skydome. On top of this, it adds a directional sunlight term which is correctly shadowed by the world geometry.

Objects in the world are lit using similar techniques, with their diffuse and specular lighting environments being created on the fly from their surroundings. This anchors the objects in the world and ensures that their lighting is always an accurate match for the local environment.

The resulting HDR image is then tone-mapped for display using one of several tone-mapping algorithms in real-time.

Second Intention developed Worldlight to investigate the state of the art in realtime rendering and bring together several techniques which have been presented to the real-time community over the last year but not thoroughly explored or integrated. If you are interested in Worldlight, how it works, and where it could go next, please contact us for more information.

Efficient PRT mesh format using 12 bytes per vertex - only as much data as a floating-point normal!
CPCA compression allowing arbitrary order of PRT simulation with the same per-vertex size and runtime cost. Higher orders cost more CPU time and CPCA basis table storage, but the GPU cost remains constant.
Multiresolution shadow maps giving crisp shadows both close-up on small details and in the distance on large buildings. Shadow map selection avoids use of the stencil buffer completely.
Depth guided screen-space shadow filtering for soft shadows without depth artifacts.
Local cubemaps for both specular lighting and PRT Spherical Harmonic diffuse lightsource generation.
Real-time implementation of Geigel and Musgrave tonemapping. This simulates the nonlinear behaviour of different negative and print film emulsions, allowing the user to input film characteristic curve data and see the real-time world rendered as though photographed with the film they have selected.
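
(For a sense of how cheap the runtime side of a diffuse PRT solution like that can be, here's a hedged sketch of evaluating a 2nd-order, 9-coefficient spherical harmonic transfer vector against an SH-projected light environment. The data layout and names are invented, and this uncompressed form ignores the CPCA compression mentioned above, which is how Worldlight gets the per-vertex cost down to 12 bytes.)

Code:
// Sketch: per-vertex diffuse PRT is just a dot product of the vertex's SH
// transfer coefficients with the SH-projected lighting environment.
// Invented layout; Worldlight's actual compressed format is not public.
float4 gLightSH[9];   // lighting environment projected into 9 SH coefficients (rgb in xyz)

struct VSIn
{
    float4 position  : POSITION;
    float4 transfer0 : TEXCOORD0;   // vertex SH transfer coefficients 0..3
    float4 transfer1 : TEXCOORD1;   // coefficients 4..7
    float  transfer2 : TEXCOORD2;   // coefficient 8
};

float3 EvaluatePRTDiffuse(VSIn v)
{
    // Dot the 9 transfer coefficients against the 9 light coefficients.
    return v.transfer0.x * gLightSH[0].rgb + v.transfer0.y * gLightSH[1].rgb
         + v.transfer0.z * gLightSH[2].rgb + v.transfer0.w * gLightSH[3].rgb
         + v.transfer1.x * gLightSH[4].rgb + v.transfer1.y * gLightSH[5].rgb
         + v.transfer1.z * gLightSH[6].rgb + v.transfer1.w * gLightSH[7].rgb
         + v.transfer2   * gLightSH[8].rgb;
}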


Obviously we don't know exactly what work they did for the Killzone 2 devs, but maybe this has something to do with the way the lighting is rendered, with destructible walls letting light through, etc.

Looking at the above features, it looks like this middleware would work a treat on the Cell (please see the bolded sections).
 
I hope it will be something special, since it's 2.5 hours. If the game doesn't prove itself to be the mind-blowing work we are expecting and still takes that long to present, it will be a 2.5-hour presentation of disappointment.
 
I hope it will be something special, since it's 2.5 hours. If the game doesn't prove itself to be the mind-blowing work we are expecting and still takes that long to present, it will be a 2.5-hour presentation of disappointment.

Of course,
does that even need to be mentioned?
 
Real-time implementation of Geigel and Musgrave tonemapping. This simulates the nonlinear behaviour of different negative and print film emulsions, allowing the user to input film characteristic curve data and see the real-time world rendered as though photographed with the film they have selected.
I think this is something that will be part of the Killzone implementation. I remember reading an interview with someone from Evolution Studios (probably Scott Kirkland) who briefly mentioned that Guerrilla Games were adding some movie-like qualities to Killzone, and he was obviously impressed by what he knew.
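
(As a rough idea of what "input film characteristic curve data" means in practice: a tone mapper in that family works in log-exposure space and pushes each channel through an S-shaped, density-style response. A hedged sketch with invented parameters; this is not Second Intention's actual Geigel and Musgrave implementation.)

Code:
// Rough sketch of a film-characteristic-curve style tone map: work in log
// exposure and apply an S-shaped response. Invented parameters; not the
// Geigel/Musgrave implementation used by Worldlight.
Texture2D    gHDRScene;
SamplerState gPointSampler;
float gExposure;   // linear exposure multiplier
float gSlope;      // steepness of the film's characteristic curve
float gMidpoint;   // log exposure that maps to mid grey

float3 FilmCurve(float3 logE)
{
    // Logistic S-curve over log exposure, loosely mimicking a D-logE curve.
    return 1.0 / (1.0 + exp(-gSlope * (logE - gMidpoint)));
}

float4 TonemapPS(float2 uv : TEXCOORD0) : SV_Target
{
    float3 hdr  = gHDRScene.Sample(gPointSampler, uv).rgb * gExposure;
    float3 logE = log10(max(hdr, 1e-6));   // avoid log of zero
    return float4(FilmCurve(logE), 1);
}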
 
I hope it will be something special, since it's 2.5 hours. If the game doesn't prove itself to be the mind-blowing work we are expecting and still takes that long to present, it will be a 2.5-hour presentation of disappointment.

According to Tempy at GAF, who works at Guerrilla, the people invited will be split into groups and shown the game separately... it's not one big 2.5-hour demonstration for everyone.

Also, I'd mind your expectations :p I think a lot of people are expecting it to fall flat on its face, to be honest.
 