Killzone 2 technology discussion thread (renamed)

Status
Not open for further replies.
The trailer clearly shows HDR-like effects. So similar to Heavenly Sword, I'm pretty sure he just means they aren't using floating-point pixel formats for performance reasons, not that they don't have an HDR solution at all.

[Image: kzhdrqm9.jpg]

Interesting. I haven't watched the trailer in that much technical detail; I only know it looks really, really good, hence my remark that even if that is the case, it hardly matters :cool:
 
The trailer clearly shows HDR-like effects. So similar to Heavenly Sword, I'm pretty sure he just means they aren't using floating-point pixel formats for performance reasons, not that they don't have an HDR solution at all.

He said they don't have an HDR solution, but simply store the lightmap term in a 0..2 range to allow overbrighting.
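A minimal sketch of that kind of encoding (my own illustration, not Guerrilla's actual code): halve the 0..2 light term so it fits an ordinary 8-bit channel on store, and double it on fetch, so decoded values above 1.0 brighten the surface past its albedo.

```python
def encode_light_term(value: float) -> int:
    """Map a light term in [0.0, 2.0] into an ordinary 8-bit texel (0..255)."""
    clamped = max(0.0, min(2.0, value))
    return round(clamped * 0.5 * 255.0)

def decode_light_term(texel: int) -> float:
    """Recover the light term; texels above 127 overbrighten (result > 1.0)."""
    return (texel / 255.0) * 2.0
```

The cost is one bit of precision over the visible range, which is usually acceptable for a lightmap term.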
 
Fran said:
I was at the presentation yesterday. Some interesting ideas. Overall, though, the tradeoffs didn't seem worth it to me: dropping HDR, dropping 4X MSAA for 2X, using only 12 (6) taps on 512x512 shadowmaps for the main directional light, dropping specular color for materials, dropping directional lightmaps, dropping shadows and per-pixel lighting on particles, and using only one lighting model for the entire world, all for the sake of more lights (actually, for the sake of lighting performance that depends only on the fragments lit rather than on the geometry, which would be desirable).
So they can have that many "drops", and still look this impressive? Amazing. :D

By "drop" do you mean they had it in the engine but took it out? For what reason, performance, or because it wasn't necessary?
 
The trailer clearly shows HDR-like effects. So similar to Heavenly Sword, I'm pretty sure he just means they aren't using floating-point pixel formats for performance reasons, not that they don't have an HDR solution at all.
You don't need HDR to get that kind of bloom, as long as you store some additional information about the brightness of your pixels somewhere; even a single bit would be enough.

Marco
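As an illustration of the single-bit idea (a hypothetical sketch, not any shipping engine's code): tag bright pixels with one extra "glow" bit at shading time, then build the bloom source from that mask instead of thresholding an HDR buffer.

```python
def bloom_source(colors, glow_bits):
    """Keep only glow-tagged pixels for the blur pass; everything else goes black.

    colors:    list of (r, g, b) tuples in 0..255 (plain LDR frame)
    glow_bits: list of 0/1 flags, one per pixel, written at shading time
    """
    return [c if g else (0, 0, 0) for c, g in zip(colors, glow_bits)]
```

The blurred result is then added back over the LDR frame; no floating-point render target is involved.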
 
Which leads on to a very important point. HDR has become a checkbox feature to show off a game's prowess, but the real point to remember is that it's what's on screen that matters. If it looks good, who cares whether it's using HDR, FP buffers, INT buffers, or 4 bitplanes? HDR has its uses, but it's not the only way to get a film-camera-like image on screen. Where it's used only to apply bloom, it's actually a waste of resources. If you're not resolving detail in lit and shadowed areas, and moving between them, other techniques can produce the same results. If KZ remains low-key throughout, HDR won't be necessary, as the range of normal buffers will be enough for the degree of light variation that is present.
 
Good post.


Take Painkiller as a good example of what devs have achieved with bloom: it looks way better than a good deal of HDR games. Even Serious Sam 2 without HDR enabled still featured good bloom (it fit the artistic style very nicely).
 
Fran said:
Some interesting ideas.

Fran! It would help tremendously if you could elaborate on the "interesting" part of their presentation. :)

Also, once they have this architecture up, what would be the possible next steps in your opinion ?
(Thanks a million for any pointers you can drop)
 
Which leads on to a very important point. HDR has become a checkbox feature to show off a game's prowess, but the real point to remember is that it's what's on screen that matters. If it looks good, who cares whether it's using HDR, FP buffers, INT buffers, or 4 bitplanes? HDR has its uses, but it's not the only way to get a film-camera-like image on screen. Where it's used only to apply bloom, it's actually a waste of resources. If you're not resolving detail in lit and shadowed areas, and moving between them, other techniques can produce the same results. If KZ remains low-key throughout, HDR won't be necessary, as the range of normal buffers will be enough for the degree of light variation that is present.

I think HDR or not is not the point.
The point is the tools (i.e. flexibility) you give to the artists to do their job, which is creating something that looks stunning. The more flexibility (given hardware and time constraints), the better. We engine programmers can't create something stunning ourselves; we can only make the tools that the creative people use to "create".
Crysis is a fantastic example of this, in my opinion: give talented artists flexible tools and enough time to experiment, and it's pretty much guaranteed they will create magic.

When I say that the tradeoff is not worth it, I really mean that the flexibility you are taking away from the artists (HDR, specular color and so on) for the sake of more lights in the scene is, in my opinion, not worth it. The artists will come to you asking for more material parameters, for example, to do their job, and you will have to say no. Saying no to an artist is something I personally never like. I love artists :D

Fran! It would help tremendously if you could elaborate on the "interesting" part of their presentation. :)

The way they use their light-occlusion information to speed up the lighting phase for the directional light is interesting; I wonder if there's a simple way to carry this idea over to a non-fixed-time-of-day scenario.
All their work on IBL on the SPUs is very cool, especially the way they time and synchronise it, and the pre-pass they do before laying down the G-buffer. Clever stuff.
They have some interesting plans for adding contact shadows, but unfortunately they didn't elaborate much on this.

Also, once they have this architecture up, what would be the possible next steps in your opinion ?
(Thanks a million for any pointers you can drop)

Actually, I don't have a clue :D
 
It makes sense for a dark and gritty game like Killzone not to have a fully implemented HDR solution. A game like Crysis would need it, as would Lair and any game environment that spends a great deal of time under the sun. Redistributing those resources from HDR to other features is the smart move for Killzone 2.
 
All their work on IBL on the SPUs is very cool, especially the way they time and synchronise it, and the pre-pass they do before laying down the G-buffer.
So I assume the earlier-mentioned indirect lighting would be evident in their IBL, since it's a method for GI rendering? I suppose the HDR-like effect in the pic above is their IBL at work?
 
I think HDR or not is not the point.
The point is the tools (i.e. flexibility) you give to the artists to do their job, which is creating something that looks stunning. The more flexibility (given hardware and time constraints), the better. We engine programmers can't create something stunning ourselves; we can only make the tools that the creative people use to "create".
Crysis is a fantastic example of this, in my opinion: give talented artists flexible tools and enough time to experiment, and it's pretty much guaranteed they will create magic.

When I say that the tradeoff is not worth it, I really mean that the flexibility you are taking away from the artists (HDR, specular color and so on) for the sake of more lights in the scene is, in my opinion, not worth it. The artists will come to you asking for more material parameters, for example, to do their job, and you will have to say no. Saying no to an artist is something I personally never like. I love artists :D

Fran, what caused the inflexibility? Is it because of the indirect illumination technology, or because they are using SPUs? Can this restriction be lifted when they upgrade the tools, or is there a theoretical limitation somewhere? :)
 
Fran, did they say what the geometry the SPEs were producing was used for (i.e. destructible environments)?

Thanks in advance for any reply :).
 
Is it not possible that the tradeoffs were made in consensus between the computer nerds and the art team as a way of focusing on the art direction that they have already set and achieving their vision?
 
I would've thought so. It sounds like the art team wanted more lights and less other stuff.
 
Well, the E3 2005 trailer was not without HDR ;)

Every offline render application has used at least 32 bits per channel, and thus HDR color representation, since the early '90s. Actually Lightwave, which was used for the KZ CG, may use even more (I've heard 128 bits per color).
 
Fran, what caused the inflexibility? Is it because of the indirect illumination technology, or because they are using SPUs? Can this restriction be lifted when they upgrade the tools, or is there a theoretical limitation somewhere? :)

I think what he's talking about are the restrictions inherent in using a deferred renderer. With DR, every single material property you need for the lighting pass has to be laid out somewhere in your G-buffer. This means you're limited in the number of material parameters you can use, and you may have to rely on compromises (like the monochrome specular term Fran mentioned, which is common with DR), compression, or some other kind of trick to make your engine work.
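To make the constraint concrete, here's a hypothetical RGBA8 G-buffer layout (my own example, not Killzone 2's actual one). Two render targets give exactly eight channels; a monochrome specular intensity fits in one channel, but full specular color would need three, and there's simply nowhere to put the extra two.

```python
# Hypothetical two-target RGBA8 G-buffer: 8 channels total.
GBUFFER = {
    "rt0": ("albedo_r", "albedo_g", "albedo_b", "spec_intensity"),  # mono specular: 1 channel
    "rt1": ("normal_x", "normal_y", "roughness", "material_id"),
}

def free_channels(layout, channels_per_target=4):
    """Channels still unassigned across all render targets in the layout."""
    used = sum(len(chans) for chans in layout.values())
    return len(layout) * channels_per_target - used
```

With this layout `free_channels(GBUFFER)` is 0, so adding colored specular means either another render target (more bandwidth in the geometry pass) or evicting some other parameter, which is exactly the "saying no to artists" problem.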
 