First Killzone screenshot/details? So says USAToday..

At first, the touch-ups didn't seem like that big of a deal, but when comparing two shots side by side, the touched-up shots make it seem like the lighting is worlds better.
 
[modhat]Let's not go there please. Console War theorems aren't welcome here. It's an edited image, and we accept on faith that people's response here on this board is in response to that idea alone, and not because of any bizarro agendas.[/modhat]

I wasn't speaking specifically about Beyond3D. I was referring to the general notion of the media.
 
why do some of you guys take the little color correction so seriously when you have the realtime video to judge the game by?

exactly. besides, the final code may well be better than all those shots and we should just stop comparing those alpha shots already. either way they still look fantasbulous.
 
just wondering, would adding filters to produce those color corrected screens be a significant burden on the machine?
or would it just be a tiny bit of performance impact that gets lost in the midst of all the other things?
 
just wondering, would adding filters to produce those color corrected screens be a significant burden on the machine?
or would it just be a tiny bit of performance impact that gets lost in the midst of all the other things?

If I recall correctly, one of the major benefits of doing deferred rendering is that it allows you to do post-processing effects cheaply. I could be wrong though.
 
If I recall correctly, one of the major benefits of doing deferred rendering is that it allows you to do post-processing effects cheaply. I could be wrong though.

great, that's the same as what I recalled

I remember that during E3, some media were shown the difference between turning the post-processing on and off, and they noticed a huge difference.
Seems that's the case.

Also, I found that taking the original video, running it through some codec software, and tuning down the gamma by about 10 gives you something really close to the "touched up" screenshot. If it does look as good as people claim (hence the hysteria :D), Guerrilla really should add that filter.


Edit: I ran through the original E3 video with the gamma correction from 1.00=>0.60 and hey! it does look better! :D
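For anyone curious what that 1.00 => 0.60 gamma tweak actually does to each pixel, here's a rough NumPy sketch (my own approximation, not the codec tool's actual code; note gamma conventions vary between tools, this uses out = in^(1/gamma), which darkens midtones when gamma < 1):

```python
import numpy as np

def apply_gamma(frame: np.ndarray, gamma: float) -> np.ndarray:
    """Apply a gamma curve to a frame with channel values in [0, 1].

    With gamma < 1.0 the exponent 1/gamma exceeds 1, so midtones get
    darker and contrast deepens -- roughly the effect of dropping the
    player's gamma setting from 1.00 to 0.60.
    """
    return np.clip(frame, 0.0, 1.0) ** (1.0 / gamma)

# A mid-grey pixel (0.5) gets noticeably darker at gamma 0.60,
# while pure black and pure white stay fixed:
mid_grey = apply_gamma(np.array([0.5]), 0.60)
```

Black (0.0) and white (1.0) are unchanged by any gamma curve, which is why the tweak mostly reads as richer, moodier midtones rather than a plain brightness drop.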
 
So are these new shots really just touch ups for promotional purposes? Or are they an indicator for a slight change in art direction? Some shots seem quite a bit more vibrant to me--a change I'd welcome from the dismal gray that dominates what we've seen so far.

Here's to hoping it's the latter...
 
i think a little bit of both, but more so the latter. the "touch ups" were very minor and the differences are mainly in art direction. so we can expect KZ to look more like the new shots than the old.
 
bad guerrilla games, altering screenshots isn't on in any form.
the changes don't look to be anything major; surely they could have rerun the game with the new filter/textures/shaders/whatever and used grabs from that.
WRT the new screenshots, yes they are an improvement. the world wants more color, washed-out brown/grey is so 2005.
 
I guess new builds don't exist then? So what if they have slightly touched-up shots; at least they have the balls to admit it, unlike loads of other devs. Games improve with time, and what needs to be decided is whether the changes we are seeing are due to new builds or the devs touching things up, or maybe even both.
 
If I recall correctly, one of the major benefits of doing deferred rendering is that it allows you to do post-processing effects cheaply. I could be wrong though.

Not really, at least not in Killzone's case. On some PC implementations where the device z-buffer isn't easy to access, using deferred rendering means that you've rendered a depth buffer that can be re-used for certain post-processing effects (like depth of field or motion blur). For a PS3 game, though, I don't see how DR would make post-processing any cheaper than with a forward renderer; once you're post-processing a color buffer, it doesn't really matter how you actually rendered that buffer.

The main benefit of DR is usually that you can have high numbers of dynamic lights in a scene. This is because DR decouples lighting from geometry, meaning that the cost of a dynamic light doesn't increase as geometry increases. Instead, the cost of a light is based on how much screen space it takes up. You can see Killzone taking advantage of this in a few places, like how the Helghast helmet eyes are actually a dynamic light source.
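The decoupling described above can be sketched in a few lines of NumPy (a toy model of my own, nothing to do with Guerrilla's implementation): the geometry pass writes surface data into a G-buffer once, and the lighting pass then loops over lights and only the screen pixels each light covers, so light cost scales with covered pixels rather than scene geometry.

```python
import numpy as np

H, W = 4, 4  # tiny 4x4 "screen"

# --- Geometry pass: write surface data into a G-buffer, no lighting yet ---
gbuf_albedo = np.full((H, W), 0.8)   # per-pixel surface reflectance
gbuf_pos = np.zeros((H, W, 3))       # per-pixel world-space position
gbuf_pos[..., 0], gbuf_pos[..., 1] = np.meshgrid(np.arange(W), np.arange(H))

# --- Lighting pass: cost is lights x covered pixels, not geometry ---
def shade_point_light(light_pos, radius, intensity, out):
    for y in range(H):
        for x in range(W):
            d = np.linalg.norm(gbuf_pos[y, x] - light_pos)
            if d < radius:  # only pixels inside the light's volume pay anything
                out[y, x] += gbuf_albedo[y, x] * intensity / (1.0 + d * d)

lit = np.zeros((H, W))
shade_point_light(np.array([0.0, 0.0, 1.0]), radius=3.0, intensity=1.0, out=lit)
shade_point_light(np.array([3.0, 3.0, 1.0]), radius=3.0, intensity=1.0, out=lit)
```

Adding a third light here touches only the pixels inside its radius; nothing about the scene's triangle count enters the loop, which is the decoupling the post describes.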
 
Not really, at least not in Killzone's case. On some PC implementations where the device z-buffer isn't easy to access, using deferred rendering means that you've rendered a depth buffer that can be re-used for certain post-processing effects (like depth of field or motion blur). For a PS3 game, though, I don't see how DR would make post-processing any cheaper than with a forward renderer; once you're post-processing a color buffer, it doesn't really matter how you actually rendered that buffer.

The main benefit of DR is usually that you can have high numbers of dynamic lights in a scene. This is because DR decouples lighting from geometry, meaning that the cost of a dynamic light doesn't increase as geometry increases. Instead, the cost of a light is based on how much screen space it takes up. You can see Killzone taking advantage of this in a few places, like how the Helghast helmet eyes are actually a dynamic light source.

I thought I was wrong, thank you for clearing that up. I knew that DR allowed you to do something cheaply, but I couldn't remember what.
 
Not really, at least not in Killzone's case. On some PC implementations where the device z-buffer isn't easy to access, using deferred rendering means that you've rendered a depth buffer that can be re-used for certain post-processing effects (like depth of field or motion blur). For a PS3 game, though, I don't see how DR would make post-processing any cheaper than with a forward renderer; once you're post-processing a color buffer, it doesn't really matter how you actually rendered that buffer.

The main benefit of DR is usually that you can have high numbers of dynamic lights in a scene. This is because DR decouples lighting from geometry, meaning that the cost of a dynamic light doesn't increase as geometry increases. Instead, the cost of a light is based on how much screen space it takes up. You can see Killzone taking advantage of this in a few places, like how the Helghast helmet eyes are actually a dynamic light source.

Doesn't it help somehow with culling, therefore allowing more shaders to be used (because nothing not in view is shaded)?
 
The main benefit of DR is usually that you can have high numbers of dynamic lights in a scene. This is because DR decouples lighting from geometry, meaning that the cost of a dynamic light doesn't increase as geometry increases. Instead, the cost of a light is based on how much screen space it takes up. You can see Killzone taking advantage of this in a few places, like how the Helghast helmet eyes are actually a dynamic light source.

I guess you missed the discussions and video demonstrations of post-processing in the Killzone engine?
 
The main benefit of DR is usually that you can have high numbers of dynamic lights in a scene. This is because DR decouples lighting from geometry, meaning that the cost of a dynamic light doesn't increase as geometry increases.
The first sentence is right, but not the second: with per-pixel lighting, the cost of dynamic lighting doesn't really change with geometric complexity, even in a forward renderer.

The reason DR lets you have lots of dynamic lights is that you can do stencil culling to light only the visible pixels in the range of the light. This could also be done with dynamic branching in forward rendering, but that would perform poorly on RSX, and it's unclear how well it would work with other GPUs. DR also lets you get away with using only one shadow map buffer that can be reused for each light (though the G-buffer memory cost is pretty high in the first place).

Doesn't it help somehow with culling, therefore allowing more shaders to be used (because nothing not in view is shaded)?
Culling savings are low, because rough front to back rendering with FR already culls most pixels very rapidly. The advantage that you do get is that all pixel pipes in a quad are used for the lighting part.

The disadvantage is that fast MSAA is tricky and costs gobs of RAM. Right now KZ2 deals with 2xAA by doing the lighting math twice for each pixel (which a forward renderer would never do), so the net efficiency gains are debatable.
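The "lighting math twice per pixel" cost can be shown with a trivial NumPy sketch (a toy illustration under my own assumptions, not KZ2's resolve code): with 2xAA each pixel carries two G-buffer sub-samples, and a deferred renderer must run the lighting function on both before averaging them down.

```python
import numpy as np

def shade(n_dot_l):
    """Stand-in lighting function: each call is one unit of lighting work."""
    return np.maximum(n_dot_l, 0.0)

def resolve_2x(sample_a, sample_b):
    """2xAA deferred resolve: the lighting runs once per sub-sample,
    i.e. twice per pixel, before the two results are averaged.
    A forward renderer would shade once and let MSAA resolve coverage."""
    return 0.5 * (shade(sample_a) + shade(sample_b))

# One pixel whose two sub-samples straddle a geometry edge:
pixel = resolve_2x(np.array([0.8]), np.array([0.6]))
```

The shading work is exactly doubled regardless of whether the two sub-samples actually differ, which is why the net efficiency of deferred 2xAA is debatable.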
 
I guess you missed the discussions and video demonstrations of post-processing in the Killzone engine?

Rage has pretty much the same post processing effects. Call of Duty 4 too. Assassin's Creed? Source engine and Day of Defeat?

I think it's very simple to implement too; we've seen sepia-tone and edge-detect filters running on Radeon 9800-level hardware, IIRC. There's nothing revolutionary or deferred-rendering-related in post-processing, IMHO. It's more about someone realizing how important this part is in offline rendering, and getting trained artists to make good use of it...
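A sepia-tone filter really is just a per-pixel matrix multiply, which is why 9800-era hardware handled it easily. Here's a NumPy version using the commonly cited sepia weights (the specific constants are an assumption on my part; the driver-level demos may have used different ones):

```python
import numpy as np

# Commonly used sepia weighting matrix (assumed constants, see lead-in).
SEPIA = np.array([
    [0.393, 0.769, 0.189],   # output red from input R, G, B
    [0.349, 0.686, 0.168],   # output green
    [0.272, 0.534, 0.131],   # output blue
])

def sepia_tone(frame):
    """frame: (..., 3) RGB values in [0, 1]; returns the sepia-tinted frame.

    Each output channel is a weighted sum of the input channels, so the
    whole effect is one small matrix multiply per pixel.
    """
    return np.clip(frame @ SEPIA.T, 0.0, 1.0)

tinted = sepia_tone(np.array([1.0, 1.0, 1.0]))  # pure white picks up a warm tint
```

Since every pixel is independent and the math is a tiny dot product, this maps trivially onto even early pixel shaders, supporting the point that post-processing per se isn't deferred-rendering-specific.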
 
I remember the sepia-tone shader for the 9800 Pro. Quite cool that it could be enabled for all games through the drivers (although not all games worked with it).
 