Killzone 2 technology discussion thread (renamed)

I have a question: is Guerrilla's choice of going down the deferred rendering path the primary reason why Killzone 2 looks as good as it does?
I.e. it allows for complex lighting, high levels of geometry, etc.
DR isn't magic. Look at Crysis or Doom 3: neither game uses DR, and both have nice lighting and shadowing.
DR helps you reduce pixel shader work. When you create the G-Buffer, your scene is finished. The next stage (lighting/shading) is a kind of post-processing, and you do it on visible pixels only.
But look at Crysis or Doom 3 (and many other titles): these games use other methods to shade visible pixels only. Crysis creates a very complicated depth buffer, Doom 3 does its Early-Z thing.
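
To make the "post-processing" framing concrete, here is a minimal CPU-side sketch of that second stage (illustration only, nothing to do with GG's or Crytek's actual code; the structs, the falloff and the per-pixel light test are made up for the example):

Code:
// The deferred idea in miniature: pass 1 has already written per-pixel
// surface data (the G-buffer); pass 2 lights only those visible pixels,
// so the cost scales with lit pixels times lights, not with scene geometry.
#include <cmath>
#include <cstddef>
#include <vector>

struct Vec3 { float x, y, z; };
struct GBufferTexel { Vec3 pos; Vec3 normal; Vec3 albedo; };  // what pass 1 stores
struct PointLight  { Vec3 pos; Vec3 color; float radius; };

static float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

void ShadeDeferred(const std::vector<GBufferTexel>& gbuf,
                   const std::vector<PointLight>& lights,
                   std::vector<Vec3>& out)
{
    for (std::size_t i = 0; i < gbuf.size(); ++i) {
        Vec3 c = { 0, 0, 0 };
        for (const PointLight& l : lights) {
            Vec3 toL = { l.pos.x - gbuf[i].pos.x,
                         l.pos.y - gbuf[i].pos.y,
                         l.pos.z - gbuf[i].pos.z };
            float dist = std::sqrt(Dot(toL, toL));
            if (dist > l.radius) continue;                  // cheap light culling
            float ndotl = Dot(gbuf[i].normal,
                              { toL.x / dist, toL.y / dist, toL.z / dist });
            if (ndotl <= 0.0f) continue;
            float att = 1.0f - dist / l.radius;             // simple falloff
            c.x += gbuf[i].albedo.x * l.color.x * ndotl * att;
            c.y += gbuf[i].albedo.y * l.color.y * ndotl * att;
            c.z += gbuf[i].albedo.z * l.color.z * ndotl * att;
        }
        out[i] = c;
    }
}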

You should not forget that Guerrilla has dropped some goodies:
I was at the presentation yesterday. Some interesting ideas. Overall, though, dropping HDR, dropping 4X MSAA for 2X, using only 12 (6) taps on 512x512 shadow maps for the main directional light, dropping specular color for materials, dropping directional lightmaps, dropping shadows and per-pixel lighting on particles, and using a single lighting model for the entire world, all for the sake of more lights (really, for the sake of lighting performance that depends not on geometry but only on the fragments lit, which would be desirable), didn't seem worth it to me.

But I still hold to the idea of deferred rendering; the architecture is so simple and natural that it hurts that it's not a win this generation. I'm sure I can go deferred without sacrificing flexibility for the artists once DX10.x-like hardware is standard on consoles too.

Ah, The Art Of Crysis presentation was absolutely ace, the most interesting I've seen in years, and I was the only coder in it. Those guys know their stuff.

And with regard to 360 games using full deferred rendering: apparently 360 titles have to render into the eDRAM (which contains the render output units), so they can't create a G-buffer in the 512 MB of main RAM and render from there. So basically, while deferred rendering is possible on the 360, there's no advantage in doing so.
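
For a rough sense of scale (my numbers and layout, not anything from a presentation): a 720p G-buffer with, say, four 32-bit color targets plus depth/stencil already outgrows Xenos' 10 MB of eDRAM, so a 360 title would have to tile it, whereas on PS3 the same targets can simply sit in ordinary memory.

Code:
// Back-of-the-envelope G-buffer footprint at 720p (illustrative layout only).
#include <cstdio>

int main()
{
    const int w = 1280, h = 720;
    const int bytesPerPixelPerTarget = 4;   // 32-bit formats
    const int numTargets = 5;               // e.g. 4 color MRTs + depth/stencil
    const double mb = double(w) * h * bytesPerPixelPerTarget * numTargets
                      / (1024.0 * 1024.0);
    std::printf("G-buffer: %.1f MB vs 10 MB of Xenos eDRAM\n", mb); // ~17.6 MB
    return 0;
}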

So does this put the Xenos architecture at a disadvantage? Or can the 360 match it with optimised forward rendering?

http://forum.beyond3d.com/showthread.php?t=48155
 
I was also thinking that its looks are due more to a combination of a long development time, a large budget and some great art.

They did hire some of the most promising young artists I've seen on CG related forums in the past years...
 
Fran said:
I was at the presentation yesterday. Some interesting ideas. Overall, though, dropping HDR, dropping 4X MSAA for 2X, using only 12 (6) taps on 512x512 shadow maps for the main directional light, dropping specular color for materials, dropping directional lightmaps, dropping shadows and per-pixel lighting on particles, and using a single lighting model for the entire world, all for the sake of more lights (really, for the sake of lighting performance that depends not on geometry but only on the fragments lit, which would be desirable), didn't seem worth it to me.

But I still hold to the idea of deferred rendering; the architecture is so simple and natural that it hurts that it's not a win this generation. I'm sure I can go deferred without sacrificing flexibility for the artists once DX10.x-like hardware is standard on consoles too.

Ah, The Art Of Crysis presentation was absolutely ace, the most interesting I've seen in years, and I was the only coder in it. Those guys know their stuff.

Killzone 2 not using HDR?
 
When people talk about deferred rendering they always emphasize the number of light sources. No one seems to talk about the fact that this is on the PS3.

I'm willing to bet that when GG started developing their engine they were hoping to eventually offload some pixel shading work to the SPUs, because SPUs are good at post-processing (in theory at least). Even if that never quite happened, it seems they have already moved a lot of post-processing to the SPUs, which probably benefits from the G-buffer as well.
If memory serves, based on their GDC presentation they weren't doing much post-processing on SPUs, if any at all.

So it's quite likely that deferred rendering makes more sense on the PS3 than on other platforms.
 
I thought that was obvious from the gameplay footage. There's a bloom effect but no tone mapping.

Now that presentation Fran was talking about isn't exactly recent. Did you watch the video I posted? I'm not sure but I thought it mentioned (and showed) tone mapping.
 
Now that presentation Fran was talking about isn't exactly recent. Did you watch the video I posted? I'm not sure but I thought it mentioned (and showed) tone mapping.

From GDC07, their render targets already came to 36MB. I doubt they could go any bigger. Or change the spec that late in the game.

They also already said their lighting intensity, specular power and specular intensity have a dynamic range of [0..2]. So even if it is not technically HDR, we will still see overbright blowout or HDR-type effects, which is what the average person associates with "HDR" anyway.
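
One common way to get a [0..2] range out of plain 8-bit targets (no idea if GG do exactly this; it's only to illustrate the idea) is to halve on write and expand on read:

Code:
// Illustrative only: squeezing a [0..2] light range through an 8-bit channel
// by storing value/2 and expanding on read. Precision drops to steps of ~1/128,
// but anything between 1.0 and 2.0 survives to drive bloom/blow-out later.
#include <algorithm>
#include <cstdint>

inline std::uint8_t EncodeLight(float v)        // v in [0, 2]
{
    float clamped = std::min(std::max(v, 0.0f), 2.0f);
    return static_cast<std::uint8_t>(clamped * 0.5f * 255.0f + 0.5f);
}

inline float DecodeLight(std::uint8_t stored)   // back to [0, 2]
{
    return (stored / 255.0f) * 2.0f;
}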

The video does show a particles-and-effects-only pass that was not mentioned in the GDC presentation. Maybe that is all done by the SPUs in main memory.
 
Arwin said:
From GDC07, their render targets already came to 36MB. I doubt they could go any bigger. Or change the spec that late in the game.

They have been constantly changing the spec throughout development. Even now they are still adding things in. So don't write it off yet.
 
Arwin said:
From GDC07, their render targets already came to 36MB. I doubt they could go any bigger. Or change the spec that late in the game.

They also already said their lighting intensity, specular power and specular intensity have a dynamic range of [0..2]. So even if it is not technically HDR, we will still see overbright blowout or HDR-type effects, which is what the average person associates with "HDR" anyway.

The video does show a particles-and-effects-only pass that was not mentioned in the GDC presentation. Maybe that is all done by the SPUs in main memory.

Fair enough. And I'm guessing putting in HDR would mess with a lot of art, and is not something you can do this late in the game.
 
Now that presentation Fran was talking about isn't exactly recent. Did you watch the video I posted? I'm not sure but I thought it mentioned (and showed) tone mapping.

You can do tone mapping even with an 8/8/8/8 frame buffer if you want (some PS3 games already do this). There isn't enough precision there to qualify it as HDR, but you would still see the exposure changes that people typically equate with HDR.
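
Something as simple as an exposure scale plus a saturating curve, run as a full-screen pass over the 8-bit values, already gives you that look. A made-up minimal sketch, not any shipping game's code:

Code:
// Illustrative full-screen "tone map" over an 8-bit buffer: scale by exposure
// (which can change frame to frame for the eye-adaptation effect), apply a
// saturating curve, then re-quantise. Precision is limited, but the exposure
// changes still read as "HDR" to most viewers.
#include <algorithm>
#include <cstdint>

inline std::uint8_t ToneMapChannel(std::uint8_t in, float exposure)
{
    float c = (in / 255.0f) * exposure;    // exposure may exceed 1.0
    float mapped = c / (1.0f + c);         // simple Reinhard-style curve
    return static_cast<std::uint8_t>(std::min(mapped, 1.0f) * 255.0f + 0.5f);
}

void ToneMapBuffer(std::uint8_t* rgba, int numPixels, float exposure)
{
    for (int i = 0; i < numPixels * 4; i += 4)
        for (int ch = 0; ch < 3; ++ch)     // leave alpha alone
            rgba[i + ch] = ToneMapChannel(rgba[i + ch], exposure);
}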
 
joker454 said:
You can do tone mapping even with an 8/8/8/8 frame buffer if you want
You can also do it without any framebuffer processing at all. And while that's subject to its own tradeoffs, it has no silly precision limitations and would generally play nicer with deferred renderers.

Arwin said:
The video does show a particles-and-effects-only pass that was not mentioned in the GDC presentation
One thing that looks quite different in recent media is the shadows, but just from video it's hard to tell whether they improved the texel distribution or the filtering is actually using more samples.
 
You can also do it without any framebuffer processing at all. And while that's subject to its own tradeoffs, it has no silly precision limitations and would generally play nicer with deferred renderers.

One thing that looks quite different in recent media is the shadows, but just from video it's hard to tell whether they improved the texel distribution or the filtering is actually using more samples.

What do you see regarding SSAO in the videos? In the "console games" forum we have doubts about whether some form of contact shadows has been introduced.
 
I dug up an interesting quote from the tech guy at GG back in 2007 regarding contact shadows and realtime radiosity.
"Originally Posted by motherh
Right guys, I went and talked to Michiel and he sent me some answers:

"Contact shadows are a form of ambient occlusion. These are the sort of shadows you get where object come near each other (hence the "contact" part). Image-based ambient occlusion is a hot topic in the graphics industry off-late. We are researching a number of techniques but we're unsure if we're going to put it into Killzone 2.

We are also researching dynamic radiosity which does indeed mean real-time radiosity lighting. We have got some ideas, but if it works it'll be so late that we can't put it into this game, as we're steaming ahead in full production.

The PlayStation 3 does definitely have the horsepower to do all this though. If it won't be in KZ2 it'll definitely be in other titles from us or other PS3 developers. There's a lot of performance in the PS3 left untapped..."

There you have it. Hope this helps, and please note, we are not certain these techniques will be in Killzone 2, so take this as information, not news.

Seb Downie - QA Manager - Guerrilla Games "

Now that the game is nearly upon release, I wonder if they have indeed implemented the AO; realtime radiosity is out of the question, though.
 
What do you see regarding SSAO in the videos? In the "console games" forum we have doubts about whether some form of contact shadows has been introduced.

SSAO is one thing, contact shadows can be different (at least we use the term for something else: the dark splotches under objects that touch the ground).

SSAO seems to be there on new screenshots.

According to devs there's no SSAO (yet - they're still experimenting) so it's probably baked into the textures.
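
For anyone wondering what SSAO actually involves: the basic idea is to darken a pixel based on how much nearby geometry in the depth buffer sits in front of it. A very crude sketch of that idea follows; real implementations work in view space with normals and a proper sample kernel, and this is only an illustration, not anything GG have described:

Code:
// Crude screen-space ambient occlusion estimate over a depth buffer
// (smaller depth = closer to the camera). Returns 1.0 for fully open pixels.
#include <cstdlib>
#include <vector>

float AccessibilityAt(const std::vector<float>& depth, int w, int h,
                      int x, int y, int radius, float bias)
{
    const float center = depth[y * w + x];
    int occluded = 0, taken = 0;
    for (int i = 0; i < 16; ++i) {                       // 16 random taps
        int sx = x + (std::rand() % (2 * radius + 1)) - radius;
        int sy = y + (std::rand() % (2 * radius + 1)) - radius;
        if (sx < 0 || sy < 0 || sx >= w || sy >= h) continue;
        ++taken;
        if (depth[sy * w + sx] < center - bias)          // neighbour is in front
            ++occluded;
    }
    return taken ? 1.0f - float(occluded) / float(taken) : 1.0f;
}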
 
And seeing as a lot of PS3 titles are using Quincunx AA, does the RSX do it for 'free'?
 
Nope, not free.

Same storage cost as 2x MSAA, but it blurs the picture, which can be perceived as higher-polygon AA (although it's just blur).

People always want to picture QAA as a bad, cheap choice, but for everything alpha-tested it is much better IQ-wise than standard MSAA.
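
For reference, as I understand the published Quincunx pattern (the weights below are from NVIDIA's description of it, not from any RSX documentation), the resolve mixes each pixel's own sample with four corner samples shared with its neighbours, which is exactly where both the extra smoothing and the texture blur come from:

Code:
// Quincunx-style resolve: own sample weighted 1/2, four shared corner
// samples weighted 1/8 each. Neighbouring texels leak in, so edges get
// smoothed but texture detail gets blurred too.
struct Color { float r, g, b; };

Color QuincunxResolve(Color center, Color c0, Color c1, Color c2, Color c3)
{
    Color out;
    out.r = 0.5f * center.r + 0.125f * (c0.r + c1.r + c2.r + c3.r);
    out.g = 0.5f * center.g + 0.125f * (c0.g + c1.g + c2.g + c3.g);
    out.b = 0.5f * center.b + 0.125f * (c0.b + c1.b + c2.b + c3.b);
    return out;
}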
 
ModNotice: The topic is Killzone 2's rendering pipe and related tech, not other games, and certainly *NOT* the 360.
 
From another thread:
But doesn't KZ2 use the SPUs extensively for deferred rendering, i.e. the lighting and particle effects etc.?

From page 43 of Guerrilla's KZ2 presentation:
SPU Usage
‣ We use SPU a lot during rendering
    ‣ Display list generation
        ‣ Main display list
        ‣ Lights and Shadow Maps
        ‣ Forward rendering
    ‣ Scene graph traversal / visibility culling
    ‣ Skinning
    ‣ Triangle trimming
    ‣ IBL generation
    ‣ Particles
They don't use the SPUs for, e.g., G-buffer creation, so there is no extensive use of the SPUs for deferred rendering?
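
To illustrate what "display list generation" on an SPU amounts to in principle, here is a hypothetical sketch with invented names (not GG's code): a job culls the frame's lights and writes out the packets the GPU will consume later, so the GPU only ever sees the finished list.

Code:
// Hypothetical worker-job sketch: cull lights against the view, emit packets.
// Types, names and the crude sphere test are all invented for illustration.
#include <cstdint>
#include <vector>

struct Sphere { float x, y, z, r; };
struct LightPacket { std::uint32_t lightId; Sphere bounds; };

bool Visible(const Sphere& s, const Sphere& viewBounds)  // stand-in for a frustum test
{
    float dx = s.x - viewBounds.x, dy = s.y - viewBounds.y, dz = s.z - viewBounds.z;
    float sum = s.r + viewBounds.r;
    return dx * dx + dy * dy + dz * dz <= sum * sum;
}

void BuildLightList(const std::vector<Sphere>& lightBounds,
                    const Sphere& viewBounds,
                    std::vector<LightPacket>& outDisplayList)
{
    outDisplayList.clear();
    for (std::uint32_t i = 0; i < lightBounds.size(); ++i)
        if (Visible(lightBounds[i], viewBounds))
            outDisplayList.push_back({ i, lightBounds[i] });
}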
 