Killzone 2 technology discussion thread (renamed)

From another thread:


From page 43 of Guerrilla's KZ2 presentation:

They don't use SPUs for e.g. G-buffer creation, so there's no extensive use of SPUs for deferred rendering?
From what I gather, not extensively, but in some ways indirectly (what I said probably makes little sense, lol!). I guess ultimately it's RSX that renders what we see on the screen, but the Cell BE calculates several of these effects beforehand (e.g. indirect lighting, shadows).
 
https://www.cmpevents.com/GD09/a.asp?option=C&V=11&SessID=8873


The Rendering Technology of KILLZONE 2
Speaker: Michiel van der Leeuw (Lead Programmer, Guerrilla), Michal Valient (Senior technology programmer, Guerrilla)
Date/Time: TBD
Track: Programming
Format: 60-minute Lecture
Experience Level: Intermediate

Session Description
This session presents an overview of the rendering techniques used in Killzone 2. We put the main focus on the lighting and shadowing techniques of our deferred shading engine and how we made them play nicely with anti-aliasing.

In the second part of the presentation we take a look at our engine from the point of view of the CPU and we describe our GPU-driven block based memory allocation system that allows us to generate resources for the GPU out-of-order on multiple SPUs.

Takeaway
Attendees will gain information about rendering technology used in KILLZONE 2 and they will learn about different optimization possibilities the PlayStation 3 offers.

Intended Audience and Prerequisites

This lecture is intended for graphics programmers. Basic understanding of common rendering techniques is recommended.
https://www.cmpevents.com/GD09/a.asp?option=C&V=11&SessID=8874


The PlayStation 3's SPU's in the Real World - KILLZONE 2 Case Study

Speaker: Michiel van der Leeuw (Lead Programmer, Guerrilla)
Date/Time: TBD
Track: Programming
Format: 60-minute Lecture
Experience Level: Intermediate

Session Description
This session gives an overview of the usage of Cell SPU’s in the engine used to develop KILLZONE 2 for the PlayStation 3. The techniques used for AI, image-based post-processing (motion blur, depth of field, bloom) and other graphical algorithms are discussed. An overview of all of the SPU usage and scheduling in KILLZONE 2 is presented to establish what SPU’s can do in the real world, in a real game. Some thoughts and lessons learned are presented to help attendees in establishing which SPU uses might help them in their next project.

Takeaway
Attendees get an insight into the SPU techniques used in KILLZONE 2's engine - many of which are generic and can be applied to other game engines. Based on an overview of the entire SPU usage of the Killzone 2 engine, its weak and strong spots and advice based on this use-case, they can make better decisions about their own SPU development strategy.

Intended Audience and Prerequisites
This lecture is intended for game, AI and technology programmers who have to write or design SPU code. A high level overview of the functioning of a game engine is required. Specific knowledge about graphics algorithms or PlayStation 3 programming is recommended, but not required.
old?
 
I appear to have accidentally vapourised the other Killzone Engine discussion started by Solaris-somebody, but as someone pointed out, there's new info, so I'll bump this with a post from the KZ2 game thread.
 
:LOL: I thought I was hallucinating. Was going to post the 2 links above.

I'm keen to learn more about the overall software architecture and frameworks. Would someone be able to describe how GG fit all the subsystems together (rendering, animation, physics, audio, AI)? From what I have read in various places, these components are tightly knit in KZ2 even from the user's perspective. I would love to know how the PPU, SPUs and GPU encode and share/exchange data to achieve the convincing game world.

e.g., audio follows player movements closely, AI drives animation and at the same time an elaborate animation library enriches AI, dynamic lights on animated entities cast shadows on other major entities, particles not only linger longer but also bounce around off the environment, destructible world, ...

As an outsider, this is the first step for me in grasping the depth of their work. The deferred rendering is only a part of the system, and it would be nice to know how it fits into the overall "game plan".

GG mentioned they can go further after KZ2. What would be the outstanding/missing features to look out for?
 
Mod: This is the original post by Solarus that accidentally got wiped.

Hey guys, I'm from the Xbox.com forums, but I was browsing other Xbox forums and came across the Team Xbox forums. One of the members seemed to know what he was talking about in regards to the engine. Just wanted to get some of you guys' opinions on it. I think the game looks amazing regardless of its platform, from a visual and technical perspective (but what do I know about tech? lol). Anyway, this guy says it's really not impressive when you break it down.


That's because you completely misunderstand the purpose of the EDRAM and seem to be ignoring the 256GB of bandwidth reading and writing to it.

Even if you wrote to it 6 times you'd still have ~40GB of rendering bandwidth left over. That's ~60MB for the frame buffer compared to KZ2's 40MB.

And, you forget that in the 360 the final frame is stored in main RAM, not the EDRAM. The EDRAM is used only to quickly build the final frame in tiles. This blows everything you just said out of the water because the EDRAM is not the 360's frame buffer.

The 360 uses main memory for that, just like the PS3. The only benefit to getting / compressing the entire frame buffer into the EDRAM is a massive improvement to available rendering bandwidth to do things like approximating global illumination or to make multiple lighting passes.

Halo 3 is a perfect example, where the resolution of the game was modified to allow both low and high dynamic range lighting to be applied to a scene, giving the game a contrast in lighting (range) that we haven't seen even in the PC space. Not that it can't be done on the PC, just that it hasn't thus far.


More ignorance. KZ2 lighting is impressive? lol
Here's a clue, it's all 2D lighting. Everything is a 2D alpha blended light. Those 100-200 lights are just 2D.

It's just post processing 2D semi transparent "lights" on the screen. Mind you it's per pixel and they are using ray casting in order to accurately produce shadows from those 2D lights. So the result looks good.

One major criticism: it's going a roundabout way to accomplish something that existing engines already do in established ways. Doing this on PS3 is just a really great way to take advantage of CELL to make up for what would otherwise be a lopsided design.

For example, all other modern games don't need so many lights because they use big dynamic lights to light and shadow the entire scene, so they use a handful of lights to get the same results.

Modern games have been doing this for a long time. A good example would be Riddick on the original Xbox. It was the first console game to use 1 major and a few minor lights to light and shadow the entire scene with volumetric shadows over normal mapped textures. This is why that game looks so awesome and was before its time.

KZ2 uses normal mapping as well, but instead of using a small number of big lights it uses 100-200 small 2D lights and composites them in post processing to effectively deliver the same result. Throw in some lens flare! and Wala!!! lol
http://forum.teamxbox.com/showthread...=608061&page=5

If it were just as easy to do with a traditional method (I assume a forward renderer), would that have saved GG time and money? Or does a deferred renderer play to the strengths of the PS3 better?
Remember this is the tech forum, and there's plenty of scope to discuss what Guerrilla Games are doing and how it compares to other games out there.
 
He might have been; I didn't go through their entire thread, it was too long to read and most of it really wasn't what I was looking for. Had I seen this forum before the other, I wouldn't have even browsed there, haha. I was just looking for information on the engine itself and how they were able to do the lighting. (I know forward renderers are limited to 8 hardware lights, right?) I became interested after I saw a video of Sev(?) running away from something and I noticed the blue lights on his person actually gave off... light.

In that regard, if forward renderers can do it just as easily, why don't more games do that? (Specifically talking about small objects like that giving off light.)

Regardless of the method used to achieve the visual look, I always looked at engine programming like graphic design: no one truly cares about the method you took as long as the end result looks good.
 
(I know forward renderers are limited to 8 hardware lights, right?) I became interested after I saw a video of Sev(?) running away from something and I noticed the blue lights on his person actually gave off... light.

Not really, it's more up to the target platform's capabilities. But there's a large difference between shadow-casting light sources and non-shadow-casting light sources (both still light up the surroundings). For example, in the Crysis editor I could pack 40 non-shadow-casting wide-area light sources into a moderately sized room and it would affect performance little. But if I made them all cast shadows in real time, it introduced a large performance drop. Just an example, since I doubt the Crysis lighting engine is deferred.
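
That gap is easy to see if you count the work per light: a non-shadowing light only adds shading math, while a shadow-casting one also has to render a shadow map each frame. A toy cost model; every number here is invented purely for illustration:

```cpp
#include <cstdio>

int main() {
    // Invented per-light costs, in milliseconds.
    const float shadePerLight = 0.1f; // lighting math for one light
    const float shadowMapPass = 1.5f; // rendering one shadow map

    for (int lights = 0; lights <= 40; lights += 10) {
        float noShadows   = lights * shadePerLight;
        float withShadows = lights * (shadePerLight + shadowMapPass);
        printf("%2d lights: %5.1f ms unshadowed, %5.1f ms shadowed\n",
               lights, noShadows, withShadows);
    }
}
```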

In that regard, if forward renderers can do it just as easily, why don't more games do that? (Specifically talking about small objects like that giving off light.)

All in all they wouldn't, especially not if the light sources are to cast shadows. But something close, perhaps? I noticed that in KZ2 many small lights don't cast any shadows.
 
(I know forward renderers are limited to 8 hardware lights, right?)
No, that limit is for per-vertex (fixed-function) lighting, and even then the number is effectively unlimited because you can use blending; anyway, (I think) no game actually uses hardware lights. You can also have an unlimited number of lights with forward rendering (see the sketch below). I was actually testing my game today with over 100 (largish) lights on screen and was getting ~30fps at >720p (with hardware similar to an RSX), though 100 lights on screen is a bit of a mess, and oddly it looked worse than just a few large ones. I don't know why; too much info for the mind to deal with, perhaps.
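
To unpack the blending point: with additive blending, each extra light is just one more pass summed into the framebuffer, so there's no hard cap, only a fill-rate cost. A minimal CPU-side sketch of the idea; the names and the inverse-square falloff here are made up for illustration, not taken from any real engine:

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

struct Light {
    Vec3 pos;
    float intensity;
};

// One "pass" worth of shading for a single light at a single surface
// point: a toy inverse-square falloff.
float lightContribution(const Vec3& surf, const Light& l) {
    float dx = surf.x - l.pos.x, dy = surf.y - l.pos.y, dz = surf.z - l.pos.z;
    float distSq = dx * dx + dy * dy + dz * dz + 1.0f; // +1 avoids the singularity
    return l.intensity / distSq;
}

int main() {
    // A tiny 4x4 "framebuffer" of surface points on the z=0 plane.
    std::vector<Vec3> surface;
    for (int y = 0; y < 4; ++y)
        for (int x = 0; x < 4; ++x)
            surface.push_back({float(x), float(y), 0.0f});

    // 100 lights: no cap, because each one is just another additive pass.
    std::vector<Light> lights;
    for (int i = 0; i < 100; ++i)
        lights.push_back({{float(i % 10), float(i / 10), 1.0f}, 0.5f});

    std::vector<float> framebuffer(surface.size(), 0.0f);

    // The GPU equivalent: redraw the lit geometry once per light with
    // additive blending (glBlendFunc(GL_ONE, GL_ONE)-style).
    for (const Light& l : lights)
        for (std::size_t p = 0; p < surface.size(); ++p)
            framebuffer[p] += lightContribution(surface[p], l);

    printf("pixel 0 accumulated: %f\n", framebuffer[0]);
}
```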

In that regard, if forward renderers can do it just as easily, why don't more games do that? (Specifically talking about small objects like that giving off light.)
A lot of games struggle to achieve 30fps @ 720p (many have to drop the resolution), and you want them to add even more lights!!!
Well, they can, but expect them to run at Wii resolutions.

Regardless of the method used to achieve the visual look, I always looked at engine programming like graphic design: no one truly cares about the method you took as long as the end result looks good.
Visually, if done right, both forward and deferred rendering will give the same result, i.e. the same answer, just different methods of arriving at it (a quick sketch of why is below).
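
A toy illustration of that: additive lighting is a double sum over pixels and lights, and forward vs. deferred just swap the loop order. The names here are mine, and a real G-buffer stores normals, albedo and so on, not just a position:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct Light { Vec3 pos; float intensity; };

float contribution(const Vec3& surf, const Light& l) {
    float dx = surf.x - l.pos.x, dy = surf.y - l.pos.y, dz = surf.z - l.pos.z;
    return l.intensity / (dx * dx + dy * dy + dz * dz + 1.0f);
}

int main() {
    std::vector<Vec3> pixels = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}};
    std::vector<Light> lights = {{{0, 0, 1}, 1.0f}, {{2, 2, 1}, 2.0f}};

    // Forward: while rasterizing each fragment, sum all lights on the spot.
    std::vector<float> forward(pixels.size(), 0.0f);
    for (std::size_t p = 0; p < pixels.size(); ++p)
        for (const Light& l : lights)
            forward[p] += contribution(pixels[p], l);

    // Deferred: first lay down per-pixel attributes (the "G-buffer"),
    // then loop over lights and accumulate into the frame.
    std::vector<Vec3> gbuffer = pixels; // geometry pass: store attributes
    std::vector<float> deferred(pixels.size(), 0.0f);
    for (const Light& l : lights)      // one lighting pass per light
        for (std::size_t p = 0; p < gbuffer.size(); ++p)
            deferred[p] += contribution(gbuffer[p], l);

    // Same terms, same sums: the image is identical either way.
    for (std::size_t p = 0; p < pixels.size(); ++p)
        assert(std::fabs(forward[p] - deferred[p]) < 1e-6f);
}
```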
 
I want to know what sort of stuff that guy who writes on the Team Xbox forums smokes.
 
nAo, if you don't mind...

without violating any NDAs, what in his post leads you to this conclusion?
Sorry, I'm not very tech savvy, although I'm taking a course to rectify some of that :)
 
Greetings.

Saw the term G-buffer used, but not a clear explanation, well, at least not to me :) Anyway, is it something along the lines of a memory space that contains the scene's state before being rasterized?

Another question: the so-called "particles" in computer graphics, what exactly are they? Micro-polygons?

Thanks to those that take the time to answer.
 
No, that limit is for per-vertex (fixed-function) lighting, and even then the number is effectively unlimited because you can use blending; anyway, (I think) no game actually uses hardware lights. You can also have an unlimited number of lights with forward rendering. I was actually testing my game today with over 100 (largish) lights on screen and was getting ~30fps at >720p (with hardware similar to an RSX), though 100 lights on screen is a bit of a mess, and oddly it looked worse than just a few large ones. I don't know why; too much info for the mind to deal with, perhaps.

Thanks for the correction on hardware lights. (In one of our 3D modeling classes (3DS Max) we talked about hardware, and I remember the professor added light sources to a flat plane; after the 8th one, no others would light up. I thought that was a hardware-based thing, as she put it anyway.)

A lot of games struggle to achieve 30fps @ 720p (many have to drop the resolution), and you want them to add even more lights!!!
Well, they can, but expect them to run at Wii resolutions.


Well, I don't mean packing in lights just for the sake of saying you have a lot of lights in your scene. But a few lights here and there would be nice. For instance, without turning this into an X vs Y conversation: in Gears of War, the blue lights on their shoulders, I always thought it would have been nice if they actually illuminated and cast light on Marcus and crew.
 
Have a read of this, from 5 years ago:
http://developer.nvidia.com/object/6800_leagues_deferred_shading.html

Here's a clue, it's all 2D lighting. Everything is a 2D alpha blended light. Those 100-200 lights are just 2D
Actually, I'd assume GG are smart enough not to use e.g. 2D quads to add the lighting, as there would be a lot of extra pixel shading.

Mind you it's per pixel and they are using ray casting in order to accurately produce shadows from those 2D lights.
Now that would be impressive if they were doing it, because even Pixar's CGI movies don't do shadows that way!!! Unfortunately, I don't think this will be happening this generation :)
 
nAo, if you don't mind...

without violating any NDAs, what in his post leads you to this conclusion?
Sorry, I'm not very tech savvy, although I'm taking a course to rectify some of that :)

There's no NDA to violate, most of the stuff he wrote just doesn't make any sense.

2D lights? LOL! Halo 3 and its superior HDR tech? Double LOL!
 
Thanks for the correction on hardware lights. (In one of our 3D modeling classes (3DS Max) we talked about hardware, and I remember the professor added light sources to a flat plane; after the 8th one, no others would light up. I thought that was a hardware-based thing, as she put it anyway.)
I don't remember for sure, but older (fixed-function) versions of OpenGL had this limitation: GL_MAX_LIGHTS is only required to be at least 8, and implementations commonly expose exactly 8.
 
Greetings.

Saw the term G-buffer used, but not a clear explanation, well, at least not to me :) Anyway, is it something along the lines of a memory space that contains the scene's state before being rasterized?
No. It contains the "pixel info" before pixel shading.

Traditional renderers tend to render individual polygons separately, but if your pixel shading is computationally complex you may not want to suffer from overdraw. One way is to render a buffer (the G-buffer) that contains most of the info you need for pixel shading (like texture color, depth, normal, or some other geometric or light-related information) without doing the final calculation.

As a result, once the G-buffer is ready, you only shade visible pixels, and the whole pixel shading becomes something like a post-process done on the G-buffer.

I'd imagine it's particularly suitable for the PS3 for that reason.
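
To make that concrete, here's a rough sketch of what one G-buffer texel might hold and how the lighting pass consumes it. The layout and the simple N·L diffuse model are illustrative only, not Guerrilla's actual format (real engines pack these attributes tightly across several render targets):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// One G-buffer "texel": everything the lighting pass needs, written
// once during the geometry pass.
struct GBufferTexel {
    float position[3]; // often reconstructed from depth rather than stored
    float normal[3];   // surface normal
    float albedo[3];   // diffuse texture color
};

struct PointLight { float pos[3]; float color[3]; };

// Lighting pass: evaluated per visible pixel per affecting light, never
// on occluded fragments; that's the overdraw win.
void shadeTexel(const GBufferTexel& g, const PointLight& l, float out[3]) {
    float d[3] = {l.pos[0] - g.position[0],
                  l.pos[1] - g.position[1],
                  l.pos[2] - g.position[2]};
    float len = std::sqrt(d[0] * d[0] + d[1] * d[1] + d[2] * d[2]) + 1e-6f;
    float ndotl = 0.0f;
    for (int i = 0; i < 3; ++i) ndotl += g.normal[i] * (d[i] / len);
    ndotl = std::max(ndotl, 0.0f);
    for (int i = 0; i < 3; ++i) // diffuse with inverse-square falloff
        out[i] += g.albedo[i] * l.color[i] * ndotl / (len * len);
}

int main() {
    GBufferTexel g = {{0, 0, 0}, {0, 0, 1}, {0.8f, 0.8f, 0.8f}};
    PointLight   l = {{0, 0, 2}, {1, 1, 1}};
    float rgb[3] = {0, 0, 0};
    shadeTexel(g, l, rgb);
    printf("shaded: %.3f %.3f %.3f\n", rgb[0], rgb[1], rgb[2]);
}
```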
 
Well, I don't mean packing in lights just for the sake of saying you have a lot of lights in your scene. But a few lights here and there would be nice. For instance, without turning this into an X vs Y conversation: in Gears of War, the blue lights on their shoulders, I always thought it would have been nice if they actually illuminated and cast light on Marcus and crew.

Mind you, Gears of War 2 doesn't have dynamic lighting. It looks impressive with outdated technology, though.
They'll have a session on how they use static lighting on character models at the upcoming GDC09.

Session Description
In our lecture, we'll discuss some of the advancements in rendering techniques used in GEARS OF WAR 2. First, we will present our approach to character lighting in environments with complex static lighting using a spherical harmonic basis to approximate the incident light from the environment. A number of local lights are extracted from this environment based on a trade-off between quality and performance, and then used to light the character. We will also reveal several approaches used to aggressively optimize Screen Space Ambient Occlusion, which is necessary for console hardware. And finally, we will discuss gore and blood rendering techniques.

Takeaway
The audience will gain information about the rendering technology behind some of the new effects in GEARS OF WAR 2.

Intended Audience and Prerequisites
Intermediate Graphics Programmers and Technical Artists, with some knowledge of current console hardware.

https://www.cmpevents.com/GD09/a.asp?option=C&V=11&SessID=8638
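
For anyone unfamiliar with the SH approach that session describes: incident light from the environment is projected onto a few spherical harmonic coefficients, and diffuse irradiance for any normal then costs a handful of multiply-adds. A minimal 2-band sketch using the standard cosine-convolution constants from Ramamoorthi and Hanrahan's irradiance work; the coefficient values in main are made up:

```cpp
#include <cstdio>

// Diffuse irradiance from 2-band (4-coefficient) spherical harmonics.
// L[0..3] holds the SH coefficients of the incident radiance:
// L00, L1-1, L10, L11.
float shIrradiance(const float L[4], float nx, float ny, float nz) {
    const float PI  = 3.14159265f;
    const float Y00 = 0.282095f;        // band-0 basis constant
    const float Y1  = 0.488603f;        // band-1 basis constant
    const float A0  = PI;               // cosine-lobe convolution, band 0
    const float A1  = 2.0f * PI / 3.0f; // cosine-lobe convolution, band 1
    return A0 * L[0] * Y00
         + A1 * Y1 * (L[1] * ny + L[2] * nz + L[3] * nx);
}

int main() {
    // Made-up environment: some ambient plus light arriving from +z.
    float L[4] = {0.5f, 0.0f, 0.3f, 0.0f};
    printf("normal facing +z: %.3f\n", shIrradiance(L, 0, 0, 1));  // brightest
    printf("normal facing -z: %.3f\n", shIrradiance(L, 0, 0, -1)); // darkest
}
```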
 
There's no NDA to violate, most of the stuff he wrote just doesn't make any sense.

2D lights? LOL! Halo 3 and its superior HDR tech? Double LOL!

Yes, I think his extended use of "lol" and general forum-speak was a giveaway.
 