Killzone 2 technology discussion thread (renamed)

Status
Not open for further replies.
Overall, it didn't seem worth it to me: dropping HDR, dropping 4x MSAA for 2x, using only 12 (6) taps on 512x512 shadow maps for the main directional light, dropping specular color for materials, dropping directional lightmaps, dropping shadows and per-pixel lighting on particles, and using only one lighting model for the entire world, all for the sake of more lights (or rather, for the sake of lighting performance that depends only on the fragments lit and not on the geometry, which would be desirable).

Well, considering the reactions on the E3 trailer, it looks like it was totally worth it, no? (sorry, too tempting ;) )

HDR != floating point render targets. Even your eyes have a lower dynamic range than the world around them, and therefore do exposure control to tone it down. To do this in a game, you don't need floating-point textures; you just need to adapt your rendering based on the brightness of the visible scene. Therefore any higher-range source of scene luminance is enough. Why do the expensive thing when you can get the same result cheaper and save memory and bandwidth?
Of course doing real-time HDR reflections is cool and probably needs floating point textures to have nice filtering, but Killzone is not a racing game, no :D
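To make the exposure-control point concrete, here's a minimal sketch of eye-style adaptation plus a simple Reinhard-type tonemap. The function names and constants are mine, not anything from the talk; the point is only that any luminance estimate works, no floating-point target required:

```python
def adapt_exposure(prev_exposure, avg_luminance, target=0.18, speed=0.05):
    """Slide the exposure toward the value that maps the scene's average
    luminance onto a mid-grey target, a little each frame (eye adaptation)."""
    desired = target / max(avg_luminance, 1e-4)
    # exponential smoothing so brightness changes are gradual, like the eye
    return prev_exposure + (desired - prev_exposure) * speed

def tonemap(luminance, exposure):
    """Simple Reinhard-style operator: compresses any higher-range
    luminance into 0..1 for display."""
    l = luminance * exposure
    return l / (1.0 + l)
```

Run per frame with whatever cheap average-luminance estimate you have (downsampled scene, histogram, etc.) and the result converges to a stable exposure.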

Anyway, I was at that talk too and talked to the GG guys a bit.

The trade-offs of going for a single lighting model and no specular reflections seem totally OK in their context: why would you have all that when 99% of your scene objects use Phong and don't use specular color? I cannot imagine developing a custom in-house engine without considering every single trade-off artists have to make for tech, and vice versa. They also still seem to have custom artist-created shaders for everything (albedo, normals, material parameters) except the direct lighting model; a guy even talked about a custom skin shader with scattering during the Q&A.

They were later talking about more than 100 real-time lights on screen and around 10 of them shadow-casting. I would not like to build a forward rendering engine for such requirements. Most of the trade-offs they made seem reasonable with these numbers; 4x MSAA and directional lightmaps might have been too much extra memory, but who knows why they don't have them.
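To see why numbers like that push you away from forward rendering, here's a toy cost model. The formulas are the point; the quantities you'd plug in are made up for illustration:

```python
def forward_cost(num_objects, avg_lights_per_object, fragments_per_object):
    # forward: each object is effectively shaded once per light touching it,
    # so cost scales with geometry * lights
    return num_objects * avg_lights_per_object * fragments_per_object

def deferred_cost(gbuffer_fragments, fragments_lit_per_light, num_lights):
    # deferred: geometry is laid down once into the G-buffer, then each
    # light only pays for the screen fragments it actually covers --
    # lighting cost no longer depends on scene geometry at all
    return gbuffer_fragments + num_lights * fragments_lit_per_light
```

With a hundred mostly-small lights the deferred sum stays bounded by screen coverage, while the forward product keeps growing with scene complexity.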

Didn't he talk about a 1024x1024 shadow map split into 4 regions for the main directional light? That is much more than just 512x512. Also, doing 12 HW-filtered taps (I think nVidia HW has done bilinear shadow filtering since G3, no?) per pixel per light is 3x more than many other games do :)
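For anyone unsure what "taps" means here: it's percentage-closer filtering, i.e. averaging several depth comparisons around the sample point instead of doing one hard test. A plain-Python sketch standing in for the shader (names and the clamping scheme are mine):

```python
def pcf_shadow(depth_map, u, v, receiver_depth, offsets, bias=0.001):
    """Average several shadow-map depth comparisons around (u, v).
    Returns a soft lit factor in 0..1 instead of a hard 0/1 shadow."""
    lit = 0
    for du, dv in offsets:
        # clamp each tap to the map edges
        x = min(max(int(u + du), 0), len(depth_map[0]) - 1)
        y = min(max(int(v + dv), 0), len(depth_map) - 1)
        if receiver_depth <= depth_map[y][x] + bias:
            lit += 1
    return lit / len(offsets)
```

With 12 offsets per pixel per light you get the filter quality discussed above; HW bilinear comparison effectively gives you extra taps for free.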

Anyway, their IBL technique is really cool, gotta love those SPUs.

I'm still undecided about those particles. If you have IBL for particles, and these are mostly clouds and dust anyway, you get pretty good results; a particle is just flat, so per-vertex should be perfectly fine. But I think he mentioned that per-vertex is optional anyway, and it seems the destruction particles have normal maps in the trailer.

To me it looks like they have quite a lot of places where artists decide what quality they want versus speed, and there are multiple code paths based on this.

Also, I heard from some other guy that they will release the presentation for download, but I think all the presentations will be online after Develop, so we can read it and analyze.
 
The way they use their light-occlusion information to speed up the lighting phase for the directional light is interesting; I wonder if there's a simple way to carry this idea over to a non-fixed time-of-day scenario.
All their work on IBL on the SPUs is very cool, especially the way they time and synchronise it, and the pre-pass before they lay down the G-buffer. Clever stuff.
They have some interesting plans on adding contact-shadows, but, unfortunately, they didn't elaborate much on this.

Actually, that was something that popped up in my head in the first post when you talked about the directional light. So I assume that is a fixed light, while I guess Fable 2 needs a solution for all times of the day and cannot have fixed lights like KZ2. I was going to ask if it is possible to adapt what they are doing to fit Fable 2, but I guess you are working on that now :smile: ...
 
Actually, that was something that popped up in my head in the first post when you talked about the directional light. So I assume that is a fixed light, while I guess Fable 2 needs a solution for all times of the day and cannot have fixed lights like KZ2. I was going to ask if it is possible to adapt what they are doing to fit Fable 2, but I guess you are working on that now :smile: ...
Maybe generating a shadow volume that excludes the penumbra (so it only covers areas in deep shadow) would do. Since daylight does not change that rapidly in general, computing this in the background over several frames should be OK.
 
Fran said:
He said they don't have an HDR solution, but simply store the lightmap term in a 0..2 range to allow overbrightening.
In a fully deferred shader, why would you need HDR storage at all? All you need is to "guesstimate" the right tonemapping coefficient.
As for blooms, they can always be tacked on separately; they're a damn hack regardless of what your HDR storage format is.
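That 0..2 range storage is nothing more than a fixed scale baked into the texel. A minimal sketch, assuming an 8-bit channel (the function names are mine):

```python
def encode_lightmap(value):
    """Bake a 0..2 light term into one 8-bit channel: divide by 2,
    quantize to 0..255."""
    return max(0, min(255, round(value / 2.0 * 255)))

def decode_lightmap(byte):
    """Shader side: the fetched 0..1 texel is multiplied by 2,
    restoring values above 1.0 for overbrightening."""
    return byte / 255.0 * 2.0
```

The cost is halved precision per step, but you get free "brighter than white" lighting in an ordinary LDR texture.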

Anyway, is there some reason you believe stuff like shadow map resolution/taps is a result of the rendering approach? I mean, those are the kind of trade-offs that could very well be there regardless of how they rendered the scene.

bigtabs said:
Well remember we've had bloom in PC games for a lot longer than we've had HDR.
Bloom was in console games before HDR was even 'known' to the gaming media. But that just comes with the territory; it's simply that much more noticeable to the observer than anything else HDR allows.
 
They were later talking about more than 100 on screen real-time lights and around 10 on-screen shadow casting. I would not like to build forward render engine for such requirements.

I quite enjoy doing that though :)

Didn't he talk about a 1024x1024 shadow map split into 4 regions for the main directional light? That is much more than just 512x512. Also, doing 12 HW-filtered taps (I think nVidia HW has done bilinear shadow filtering since G3, no?) per pixel per light is 3x more than many other games do :)

I've seen engines in development with the first shadow map at 2048x2048 with a 25-tap filter, and 1024x1024 for the other shadow maps in the cascade, while still supporting tens of shadow-casting lights. Four 512x512 shadow maps seems like quite a drop in shadow quality to me, to be honest.

But as you said, it's a matter of trade-offs: to go deferred, they need to save that memory from somewhere else.

Anyway, is there some reason you believe stuff like shadow map resolution/taps is a result of the rendering approach? I mean, those are the kind of trade-offs that could very well be there regardless of how they rendered the scene.

Pure speculation on my behalf. It's what I could understand from the talk.
 
Since I haven't been there: how much detail did the talk go into? Any stuff like how much memory they actually lose on attribute buffers, etc.? :p
 
Since I haven't been there: how much detail did the talk go into? Any stuff like how much memory they actually lose on attribute buffers, etc.? :p

Not much detail. If I remember correctly, he mentioned they use three 32-bit MSAA render targets (one depth/stencil and two attribute buffers) and one light accumulation buffer.
But he went into detail on their use of the SPUs, and that was very interesting!
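Rough memory arithmetic for a layout like that. The 1280x720 resolution and 2x MSAA factor are my assumptions for illustration, not figures from the talk, and I'm counting the light accumulation buffer as a fourth 32-bit surface:

```python
def gbuffer_bytes(width, height, msaa, surfaces, bytes_per_pixel=4):
    """Total bytes for `surfaces` 32-bit render targets, each storing
    `msaa` samples per pixel."""
    return width * height * msaa * surfaces * bytes_per_pixel

total = gbuffer_bytes(1280, 720, 2, 4)  # depth/stencil + 2 attribute + light accum
```

Under those assumptions that's close to 30 MB of render targets, which is exactly the kind of budget pressure that would make 4x MSAA or extra lightmap data look expensive.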
 
Maybe generating a shadow volume that excludes the penumbra (so it only covers areas in deep shadow) would do. Since daylight does not change that rapidly in general, computing this in the background over several frames should be OK.
I like the naturalness of your nick. Welcome and thanks for sharing. All very interesting stuff so far. I'm really thankful for your (and Fran's, Fafalada's, etc) input.

I only own one console, since I don't see the need for more than one except for unique games like GT. I am also not rich. Are you working on any game (racing, FPS and sports are my favourite genres) for the X360, like Fran or joker454? Dunno about Faf, nAo, DeanA, DeanoC and ShootMyMonkey, though.
 
While perusing the forums I found the following link:

Ninja-Matic on gametrailers.com:
http://forums.gametrailers.com/showthread.php?t=134998

A huge forum topic dedicated to the lighting in Killzone 2. Would this track with what you were told, Fran?

Besides the hilarity of the first few paragraphs...

Aside from the Muzzle Flash - you will notice yet ANOTHER detail associated with firing a weapon: Muzzle Flare! I believe this is a replacement for the typical LENS FLARE effect we see in many games. This is being used instead because... HEY! We don't run around with cameras in our faces when we're in battle (that's just how we roll)... so why should we get lens flare? Take note in the next picture the purple flare effect you see when firing your weapon:
http://img106.imageshack.us/img106/3178/muzzleflareaw8.jpg

By "replacement for lens flare", does he mean lens flare? Heh.

Take special note that the characters do NOT use any Bump-Mapping for effect. All detail in clothing/armor are achieved through EXTREMELY high amounts of polygon detail: Note the subtleness of the diffusion being applied to both the lighting and shadows on this Mini-Boss and how it helps make his incredible detail simply "pop"!

All polys? God, I'd hate to see their performance on any nV or ATi GPU today!

This is probably my favorite example of how detailed and accurate the lighting engine in this game is. Someone (a hater) said in a post that the Helghast's masks aren't actually lit up - that they are simply textures - because such minute details don't need that much attention. Here's proof that this naysayer was DEAD wrong. Note how the goggles on this trooper's mask light up his SLEEVE and the shadowing that is being created on his face-mask and sleeve because of this light. ALSO take note of this incredibly small detail of INDIRECT LIGHTING: take a look at the front of the helmet... notice the ambient lighting coming from the sleeve of the trooper is affecting his helmet? Remember... his goggles are UNDERNEATH the brim of his helmet. Goggles shine on to sleeve - sleeve then reflects diffused lighting onto the helmet... IMPRESSIVE!
http://img158.imageshack.us/img158/6940/helghasteyesoj8.jpg

The mind is truly an incredible thing. If you want to see something where there isn't...

Of course, this guy should probably stop assuming that the light in the engine is in the same place as the light in the "fiction", if you will.

People should probably be looking for indirect lighting being cast from static pieces of the environment onto dynamic objects--like a guy with his back to a red wall having a red tint to his backside, rather than light being bounced off numerous pieces of his own body in realtime.
 
The mind is truly an incredible thing. If you want to see something where there isn't...

I am jealous of some peoples imagination, wish I had the ability to believe the illusion even in the end when the "dream-bubble" is broken (but then that would make me a fanatic)! :LOL:

People should probably be looking for indirect lighting being cast from static pieces of the environment onto dynamic objects--like a guy with his back to a red wall having a red tint to his backside, rather than light being bounced off numerous pieces of his own body in realtime.

I remember that BF2 has an indirect light 'hack'. For example, on Wake2007 the water color would reflect onto the underside of the heli, and the sand color too, amongst other things.
 
Last edited by a moderator:
Obviously, his article is nothing to take seriously when he says "idiots do exist" or calls people "naysayers" for believing the game looks grey (it's not because of some stormy day but because of the textures; 90% of them are, in fact, grey). So his post is more on the typical humour side of things.

While I think the post sounded interesting at first, he lost any credibility there.

Other than that, it looks like the game is committed to deferred rendering, which is not a good thing if it hurts the game as a whole, and it totally seems to me that's the case.
 
Other than that, it looks like the game is committed to deferred rendering, which is not a good thing if it hurts the game as a whole, and it totally seems to me that's the case.

Have you seen the trailer? If so i suggest you watch it again.
 
Yeah, I'm with you. I only see people praise this game for deferred rendering.

Yeah, the devs must have chosen to make a DR engine because they felt it would be better for what they intend to make. DR engines always seem to push more real-time soft shadows, and on all or almost all objects, compared to other engines. For example, Stalker with all objects casting real-time soft shadows and all light sources destroyable, versus other non-DR PC games.
 