Technical Comparison - Killzone 2/Killzone 3 vs Crysis 2 (console version)

So which games do you think are doing real HDR on consoles, using what precision on each platform, and how do they pull it off / in which phase?

And fwiw, many people have been here for five years or more, and haven't necessarily gotten more stupid. I for one literally knew nothing when I got here beyond programming basics and messing with TGA files in memory, but I have learnt a lot from people here since, and read a lot of papers. You can complain about the community all you like, but that's literally a negative contribution to the level of quality. As they say, if you want to improve a community, start with improving yourself.
 
Unreal Engine 3 uses linear-space lighting, from what I found here: http://udn.epicgames.com/Three/TexturingGuidelines.html

Cryengine 3 seems to be linear-space HDR
http://www.slideshare.net/TiagoAlexSousa/secrets-of-cryengine-3-graphics-technology

Naughty Dog seems to do linear-space HDR. I seem to remember something about Little Big Planet having a gamma-correct pipeline, so probably them too. A lot of these links point back to some Halo 3 slide set about HDR, so Bungie is probably linear-space HDR. The amazing Trials Evolution seems to be in the same boat.

I'm guessing there are more.

Just about everything I've looked at today has said that linear-space lighting is the only way to do "correct" lighting, and it seems that the DirectX and OpenGL APIs have moved in that direction, with the linear-space approach becoming the standard way to think about lighting. Some of the blog posts and articles on the subject date back five years.
 
So which games do you think are doing real HDR on consoles, using what precision on each platform, and how do they pull it off / in which phase?

And fwiw, many people have been here for five years or more, and haven't necessarily gotten more stupid. I for one literally knew nothing when I got here beyond programming basics and messing with TGA files in memory, but I have learnt a lot from people here since, and read a lot of papers. You can complain about the community all you like, but that's literally a negative contribution to the level of quality. As they say, if you want to improve a community, start with improving yourself.

Oh, so now you want to talk technical? You mentioned KZ3 "definitely" pushes more geometry than C2, but no numbers or sources were given to back it up. How about you start with that? Eyeballing doesn't count ;).

I've already posted links to info pertaining to the lighting in C2, including stuff about their HDR implementation (http://www.crytek.com/cryengine/presentations/), but of course it was ignored.
 
Unreal Engine 3 uses linear-space lighting, from what I found here: http://udn.epicgames.com/Three/TexturingGuidelines.html

Cryengine 3 seems to be linear-space HDR
http://www.slideshare.net/TiagoAlexSousa/secrets-of-cryengine-3-graphics-technology

Naughty Dog seems to do linear-space HDR. I seem to remember something about Little Big Planet having a gamma-correct pipeline, so probably them too. A lot of these links point back to some Halo 3 slide set about HDR, so Bungie is probably linear-space HDR. The amazing Trials Evolution seems to be in the same boat.

I'm guessing there are more.

Just about everything I've looked at today has said that linear-space lighting is the only way to do "correct" lighting, and it seems that the DirectX and OpenGL APIs have moved in that direction, with the linear-space approach becoming the standard way to think about lighting. Some of the blog posts and articles on the subject date back five years.

There is a good presentation by one of the Naughty Dogs, which explains a lot of the stuff, but I unfortunately cannot find it at the moment...

EDIT: here it is http://www.slideshare.net/ozlael/hable-john-uncharted2-hdr-lighting
But all those presentations, as well as the ones you linked, offer no real mathematical explanation or physical background. This makes it very hard to fully understand...
 
You know, for a supposedly technically oriented community, not knowing what HDR is useful for or why it's important, even though it's been a key feature of the renderer of every major game this gen (except KZ's), says a lot about how much this community has decayed, as pointed out by Laa-Yosh.
Can we stop with the juvenile bickering about how rubbish everyone is for just a moment and actually discuss something relevant and technical?

I will repeat, once again, this singular consideration as a reference point for the discussion. What exactly is it about this image that means the clipping is due to a lack of HDR and that use of HDR would solve the clipping issues, given that the illumination range of the image does extend to full brightness?
 
Trying to understand Sebbbi's post and found the following links talking about gamma space vs linear space:

http://filmicgames.com/archives/299
http://molecularmusings.wordpress.com/2011/11/21/gamma-correct-rendering/
http://http.developer.nvidia.com/GPUGems3/gpugems3_ch24.html

Edit:
And another http://renderwonk.com/blog/index.php/archive/adventures-with-gamma-correct-rendering/

So basically, if you do LDR lighting in gamma space, any lighting math, in terms of adding or subtracting intensities, is going to be totally wrong (1 + 1 = 3), because gamma space is non-linear. If you work in HDR, in linear space, adding lights of two intensities will equal a linear sum of those intensities (1 + 1 = 2). The former might "look good", but it will be physically incorrect and difficult for artists to work with, as the results of lighting will be inconsistent and less predictable.
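
Here's the arithmetic as I understand it, as a tiny sketch (assuming a plain 2.2 gamma curve rather than the exact sRGB function):

```cpp
#include <cmath>
#include <cstdio>

// Two lights of 10% physical intensity each. Correct: add in linear
// space. Wrong: add the gamma-encoded values and treat that as light.
int main() {
    const float gamma = 2.2f;
    const float light = 0.1f;                        // linear intensity

    float linearSum = light + light;                 // 0.2, correct
    float encoded   = std::pow(light, 1.0f / gamma); // ~0.35 in gamma space
    float gammaSum  = encoded + encoded;             // ~0.70
    float decoded   = std::pow(gammaSum, gamma);     // ~0.46, not 0.2!

    std::printf("linear: %.2f  gamma-space sum decodes to: %.2f\n",
                linearSum, decoded);
    return 0;
}
```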

Am I roughly right?

This is what I am understanding so far, not sure if correct though (please someone help out!!):

Lighting in gamma space or linear space per se has nothing to do with the LDR or HDR format!? Linear and gamma are just the spaces in which you do/have to do the lighting calculations.

As I understand it, if you do the conversion between linear and gamma space (which you need for correct addition of light sources, for example), then you need high floating-point precision (which I guess is just a sign of bad conditioning of the specific operation**).
Otherwise, you could get the same output value for different input values due to arithmetic imprecision, I guess for input values 'close' to each other: say, the outputs 0.100000001 and 0.100000002 are both calculated as 0.1 due to insufficient arithmetic precision when using LDR instead of HDR.
This could then result in visible 'banding', i.e. a clustering of the output values for different input values 'close' to each other.
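
A toy illustration of that quantisation (not any specific engine's code):

```cpp
#include <cmath>
#include <cstdio>

// Two distinct linear intensities collapse to the same 8-bit code;
// on screen, that clustering is exactly what shows up as banding.
int main() {
    float a = 0.1001f, b = 0.1013f;        // inputs 'close' to each other
    int qa = (int)std::round(a * 255.0f);  // 26
    int qb = (int)std::round(b * 255.0f);  // 26 as well - difference lost
    std::printf("%f -> %d, %f -> %d\n", a, qa, b, qb);
    return 0;
}
```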

So, if I understand it: there are basically two problems involved: 1.) in which space to do the lighting calculations and 2.) with which precision.

At first glance these two choices are separate from each other, but it seems that once you choose to go the 'correct' route, with transformations from one space to another, you need high-precision arithmetic, i.e. HDR, to avoid artefacts in your output?
Or, wildly speculating again, adding a third problem to the two mentioned above: 3.) could you change the transformation/operation from linear to gamma space to make it better conditioned (**), i.e. less sensitive to the arithmetic precision? ...By the way, what is NAO32 or LogLuv?
 
Oh, so now you want to talk technical? You mentioned KZ3 "definitely" pushes more geometry than C2, but no numbers or sources were given to back it up. How about you start with that? Eyeballing doesn't count ;).
He's most probably right about KZ3 pushing more polygons than C2.
We know KZ2 pushes well over 1 million polys in a typical level, then GG mentioned KZ3 is pushing 3x as much and on top of that 3x the draw distance. So in theory KZ3 could be roughly pushing 4-5 million polys compared to a typical 1.something million in C2.
http://community.killzone.com/t5/Ki...-3x-More-Polygons-Than-Killzone-2/td-p/618127
All that has to render at 1280x720 with a stable 30 fps, which puts a lot more stress on the system, doesn't it?
 
This is what I am understanding so far, not sure if correct though (please someone help out!!):

Lighting in gamma space or linear space per se has nothing to do with the LDR or HDR format!? Linear and gamma are just the spaces in which you do/have to do the lighting calculations.

As I understand it, if you do the conversion between linear and gamma space (which you need for correct addition of light sources, for example), then you need high floating-point precision (which I guess is just a sign of bad conditioning of the specific operation**).
Otherwise, you could get the same output value for different input values due to arithmetic imprecision, I guess for input values 'close' to each other: say, the outputs 0.100000001 and 0.100000002 are both calculated as 0.1 due to insufficient arithmetic precision when using LDR instead of HDR.
This could then result in visible 'banding', i.e. a clustering of the output values for different input values 'close' to each other.

So, if I understand it: there are basically two problems involved: 1.) in which space to do the lighting calculations and 2.) with which precision.

At first glance these two choices are separate from each other, but it seems that once you choose to go the 'correct' route, with transformations from one space to another, you need high-precision arithmetic, i.e. HDR, to avoid artefacts in your output?
Or, wildly speculating again, adding a third problem to the two mentioned above: 3.) could you change the transformation/operation from linear to gamma space to make it better conditioned (**), i.e. less sensitive to the arithmetic precision? ...By the way, what is NAO32 or LogLuv?

You convert from gamma space (integer 0-255) into linear space (0.0 - 1.0 LDR, or 0.0 - 1.0+ with HDR) and do all of your lighting calculations before converting back into gamma space for output to your display. The reason is that if you have two lights, each with 10% intensity, then where they intersect should be 20% intensity, as it would be if you calculated in linear space. If you were to take the gamma-space values and sum them directly, you would not get the appropriate result, because the range from 0-255 is non-linear. The mid-point of intensity is not 127, as you'd expect, because of the gamma curve.

LDR lighting, in linear space or gamma space, suffers from clipping to black or white in areas that are too dark or too bright, respectively. HDR allows for a wider range of intensities, preserving the quality of reflected and refracted light as well as preventing the clipping issues by simulating iris adjustment. You use tonemapping to bring colours back into visible range when using HDR.
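
To make the tone-mapping step concrete, here's a sketch using the filmic curve from Hable's Uncharted 2 slides linked earlier; the curve constants come from those slides, while the wrapper code and exposure value are just assumptions for illustration:

```cpp
#include <cmath>
#include <cstdio>

// Filmic curve constants from Hable's Uncharted 2 presentation; the
// surrounding code and the example exposure are my own assumptions.
static float hable(float x) {
    const float A = 0.15f, B = 0.50f, C = 0.10f,
                D = 0.20f, E = 0.02f, F = 0.30f;
    return ((x * (A * x + C * B) + D * E) /
            (x * (A * x + B) + D * F)) - E / F;
}

int main() {
    const float W = 11.2f;      // linear-space white point, per the slides
    float exposure = 2.0f;      // hypothetical eye-adaptation result
    float hdr = 4.0f;           // an intensity well beyond LDR's 1.0

    float mapped  = hable(hdr * exposure) / hable(W); // compress to [0,1]
    float display = std::pow(mapped, 1.0f / 2.2f);    // gamma encode last
    std::printf("hdr %.1f -> display %.3f\n", hdr, display);
    return 0;
}
```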

Also, if you do lighting in gamma space, you can't do HDR. And lighting in gamma space is just plain wrong. The results of any calculation will be physically incorrect.

I'm curious to know exactly how they do lighting in Killzone 2 and 3. That one slide that refers to gamma-space accumulation is the only thing I've seen, and it makes a reference to the LogLuv colour space. I'm not sure what that's referring to, but I've found links to a number of blogs talking about how Marco Salvi (Heavenly Sword) used the LogLuv format for HDR, and that's a bit out of my understanding at the moment.
 
He's most probably right about KZ3 pushing more polygons than C2.
We know KZ2 pushes well over 1 million polys in a typical level, then GG mentioned KZ3 is pushing 3x as much and on top of that 3x the draw distance. So in theory KZ3 could be roughly pushing 4-5 million polys compared to a typical 1.something million in C2.
http://community.killzone.com/t5/Ki...-3x-More-Polygons-Than-Killzone-2/td-p/618127
All that has to render at 1280x720 with a stable 30 fps, which puts a lot more stress on the system, doesn't it?

What does "Generates three times the polygons" mean exactly? Does that mean there can be three times as many visible polygons in a scene, or does that mean levels contain three times as many polygons most of which would be non-visible at one time?
 
I've been digging around a bit, and came across this interesting presentation:


http://www.youtube.com/watch?v=QfvBIHFex9Y&feature=player_embedded

... that shows a lot of actual meshes, and their (high and low) detail levels. The presentation shows that it is definitely true that a lot of the detail you see is 'tricks'. However, there is also an interesting bit that shows the difference in detail they've managed to get out of their engine for Killzone 3 vs Killzone 2, and the advantages they've been able to get out of their deferred renderer, as well as how they did a lot of their lighting, shading, etc. It answers, I think, your question Scott_Arm, showing that it is about visible detail in the scene rather than more efficient LOD/culling.

Very cool presentation actually.
 
I've been digging around a bit, and came across this interesting presentation:


http://www.youtube.com/watch?v=QfvBIHFex9Y&feature=player_embedded

... that shows a lot of actual meshes, and their (high and low) detail levels. The presentation shows that it is definitely true that a lot of the detail you see is 'tricks'. However, there is also an interesting bit that shows the difference in detail they've managed to get out of their engine for Killzone 3 vs Killzone 2, and the advantages they've been able to get out of their deferred renderer, as well as how they did a lot of their lighting, shading, etc. It answers, I think, your question Scott_Arm, showing that it is about visible detail in the scene rather than more efficient LOD/culling.

Very cool presentation actually.

Cool. I'll watch it when I get home. I kind of wonder why GG didn't go with the LogLuv-format HDR that nAo talked about using with Heavenly Sword, which has been used in at least Uncharted 2 and 3, amongst other games. It seems to have become the standard way of doing HDR on the PS3. I'm trying to learn about it now, and it actually seems pretty smart. From what I've found, the PS3 GPU can't multisample FP formats, making standard FP-format HDR methods unfeasible. So nAo went with this LogLuv format, which actually looks really good. Color accuracy is not perfect, but the error is not perceptible to our eyes. Luminance is very accurate and has a very broad range. I guess it involves some extra calculation as a tradeoff, but it's low bandwidth.
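
From what I've pieced together, the rough idea looks something like this; this is my own sketch of a LogLuv-style encoding, not nAo's actual NAO32 code (he uses a proper CIE-style colour transform, and the range constants here are guesses):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// LogLuv-style idea: chromaticity in two 8-bit channels, log2 luminance
// split across the other two, so a plain RGBA8 target (which the PS3
// GPU can multisample) holds a wide luminance range.
struct RGBA8 { unsigned char r, g, b, a; };

static unsigned char q8(float v) {             // quantise [0,1] to 8 bits
    return (unsigned char)std::round(std::clamp(v, 0.0f, 1.0f) * 255.0f);
}

RGBA8 encodeLogLuvish(float R, float G, float B) {
    float Y   = 0.2126f * R + 0.7152f * G + 0.0722f * B; // Rec.709 luma
    float sum = std::max(R + G + B, 1e-6f);
    float cx = R / sum, cy = G / sum;            // crude chromaticity
    // log2(Y) mapped from [-16,16] into [0,1]: ~16 bits over 32 stops.
    float Le = std::clamp((std::log2(std::max(Y, 1e-6f)) + 16.0f) / 32.0f,
                          0.0f, 1.0f);
    float hi = std::floor(Le * 255.0f) / 255.0f; // coarse luminance byte
    float lo = (Le - hi) * 255.0f;               // fine remainder byte
    return { q8(cx), q8(cy), q8(hi), q8(lo) };
}

int main() {
    RGBA8 p = encodeLogLuvish(12.0f, 6.0f, 1.5f); // an HDR colour
    std::printf("%u %u %u %u\n", p.r, p.g, p.b, p.a);
    return 0;
}
```

Chromaticity only gets 8 bits per channel, which matches the "colour accuracy is not perfect, luminance is very accurate" tradeoff described above.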

Anyway, I'm curious to know why GG didn't go the LogLuv HDR route, and also curious to see which approach Crysis 2 takes on PS3, since FP16 is supposed to be a poor performer.
 
I will repeat, once again, this singular consideration as a reference point for the discussion. What exactly is it about this image that means the clipping is due to a lack of HDR and that use of HDR would solve the clipping issues, given that the illumination range of the image does extend to full brightness?
Laa-Yosh already dealt with that, but then you said that it wasn't a shortcoming but rather a developer decision ("it's not a bug, it's a feature") :rolleyes:

HDR is important not only for the precision of the calculations, but because it preserves lighting information that is beyond the current exposure. Guerrilla tried to compensate by having both indoor and outdoor areas lit with the same brightness, but then you can see small lamps shine as brightly as the sun. They're stuck with overblown whites all over the place too.

Crysis 2 takes good advantage of HDR rendering, especially when it comes to reflections (part of the reason the metal shader looks so good). Since it supports eye adaptation, its effects at different exposures are very noticeable. HL2's Lost Coast demo commentary gives a good example of why this is: if you have a material with reflectivity set at, say, 20%, reflections of very bright sources (brighter than the current exposure), such as the sun, will look dull and unrealistic. HDR solves that problem.
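
As a toy version of that Lost Coast example, with made-up numbers:

```cpp
#include <algorithm>
#include <cstdio>

// Made-up numbers: a 20%-reflective material reflecting the sun.
int main() {
    const float reflectivity = 0.2f;
    const float sun = 50.0f;   // linear intensity, far above 1.0

    // LDR: the source was already clamped to 1.0, so the reflection
    // comes out at 0.2 - a dull grey smudge.
    float ldr = reflectivity * std::min(sun, 1.0f);
    // HDR: the full intensity survives into the reflection, which
    // still reads as bright after tone mapping.
    float hdr = reflectivity * sun;

    std::printf("LDR: %.2f   HDR: %.2f\n", ldr, hdr);
    return 0;
}
```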

He's most probably right about KZ3 pushing more polygons than C2.
We know KZ2 pushes well over 1 million polys in a typical level, then GG mentioned KZ3 is pushing 3x as much and on top of that 3x the draw distance. So in theory KZ3 could be roughly pushing 4-5 million polys compared to a typical 1.something million in C2.
http://community.killzone.com/t5/Ki...-3x-More-Polygons-Than-Killzone-2/td-p/618127
All that has to render at 1280x720 with a stable 30 fps, which puts a lot more stress on the system, doesn't it?
Why are you compounding the 3x draw distance with the 3x increase in geometry output? Source, please; a second-hand forum post is not exactly reliable. Please provide sources for Crysis 2's numbers too so we can actually compare.

Also, reducing the comparison to simply polycount + IQ + framerate is very limiting. That leaves out lighting, shading, post-processing, animation, particle effects, etc...
 
Laa-Yosh already dealt with that, but then you said that it wasn't a shortcoming but rather a developer decision ("it's not a bug, it's a feature") :rolleyes:
Explain to me how their failure to implement correct lighting results in greater than 200 intensity pixels in the centre of the image but clipping to 200 intensity around the edge of the image. Someone giving me a technical explanation will shut me up. Otherwise, given my understanding, there is no way at all that the bright highlights can be rendered correctly towards the centre of the image and the end of the gun barrel but clipped towards the outside other than that being an artistic decision.

Rather than just roll your eyes as if completely obvious, actually explain the evidence we have in front of us and how the clipping scenario Laa-Yosh points to is categorically a fault of the lighting engine and not an artist choice. Not one person so far has explained the situation, choosing instead to deride those asking questions.

HDR is important not only for the precision of the calculations, but because it preserves lighting information that is beyond the current exposure. Guerrilla tried to compensate by having both indoor and outdoor areas lit with the same brightness, but then you can see small lamps shine as brightly as the sun. They're stuck with overblown whites all over the place too.
Absolutely, and I'm sure there's actual merit to this discussion. However, those wanting to argue against KZ3's choice of implementation haven't addressed one issue among many in a fair way, instead claiming bias, prejudice, or incompetence on the part of the opposition. AFAICS in this particular example the response from Laa-Yosh or his advocates should be, "yeah, okay, the clipping around the edges isn't a fault of the lighting engine. My bad on that one. But if you look at this other image or that one or this vid, the shortcomings of LDR in KZ3 are obvious, as shown by this artefact and that artefact." That, or an explanation of how the highest intensities in the image are clipped to 200 but lower intensities are rendered brighter. Clipping light sources as you describe is an example of LDR hitting its limits, and worth presenting as a compromise KZ3 made that other games haven't (although photographically correct if tone-mapping to a small selection of brightness like a camera, rather than trying to capture human perception), but selectively clipped highlights isn't.
 
L. Scofield said:
Laa-Yosh already dealt with that, but then you said that it wasn't a shortcoming but rather a developer decision ("it's not a bug, it's a feature")
What? Shortcomings in methods are developer decisions. Code doesn't write itself. Every line was consciously put there, and every method programmed was consciously chosen by someone for a reason. For example, sebbbi just mentioned his team went with a [0,2] color range (which is what KZ3 uses) in order to maintain 60 fps. It wasn't an accident or a mistake. Or do you think John Carmack meant to put FP16 RGB colors in Doom all along, and just accidentally programmed the whole thing to use 8-bit color?
Let's just take a look at fearsomepirate's post: "Crysis 2 only has a couple of good effects, everything else sucks.
Is anything on my list false? Or do you simply object to anyone stating facts that you wish were not true?
KZ2/3 are gen above the competition, except when it comes to HDR but that's not a shortcoming, it's a feature"
...and I never said that. I know some of our members speak English as a second language, but I don't think you're among them. What I have said consistently is that GG did not use a FP RGB space as a compromise in order to achieve other things that C2 doesn't (the converse is true; Crytek didn't do some things in Crysis 2 in order to achieve certain things that Killzone doesn't do). You've never actually offered any evidence or reasons why that isn't true; you've just mocked me as though I don't know what HDR is and have never programmed a physics simulation in my life.
 
What? Shortcomings in methods are developer decisions. Code doesn't write itself. Every line was consciously put there, and every method programmed was consciously chosen by someone for a reason. For example, sebbbi just mentioned his team went with a [0,2] color range (which is what KZ3 uses) in order to maintain 60 fps. It wasn't an accident or a mistake. Or do you think John Carmack meant to put FP16 RGB colors in Doom all along, and just accidentally programmed the whole thing to use 8-bit color?...

Does KZ3 use a [0,2] color range? That's what I'm trying to interpret from the one slide image that was shown in this thread. When they refer to light accumulation in gamma space, is that suggesting they're doing non-linear calculation in gamma space RGB 0-255, or is that referring to something else? It also mentions limited precision and dynamic range.
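
If it really is a [0,2] range in an 8-bit buffer, I'd guess the mechanics are something like this (the general technique, not anything confirmed by the slide):

```cpp
#include <cmath>
#include <cstdio>

// Guess at the general technique, not GG's actual code: a [0,2] range
// in an ordinary 8-bit channel - halve on write, double on read. One
// extra stop of headroom, at the cost of half the precision.
unsigned char store(float v) {           // v in [0,2]
    return (unsigned char)std::round(v * 0.5f * 255.0f);
}
float load(unsigned char c) {
    return (c / 255.0f) * 2.0f;
}

int main() {
    std::printf("1.6 -> %d -> %.3f\n", store(1.6f), load(store(1.6f)));
    return 0;
}
```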
 
Explain to me how their failure to implement correct lighting results in greater than 200 intensity pixels in the centre of the image but clipping to 200 intensity around the edge of the image. Someone giving me a technical explanation will shut me up. Otherwise, given my understanding, there is no way at all that the bright highlights can be rendered correctly towards the centre of the image and the end of the gun barrel but clipped towards the outside other than that being an artistic decision.

Rather than just roll your eyes as if completely obvious, actually explain the evidence we have in front of us and how the clipping scenario Laa-Yosh points to is categorically a fault of the lighting engine and not an artist choice. Not one person so far has explained the situation, choosing instead to deride those asking questions.
The clipping at the sides is obviously due to a vignetting effect, but your assertion that the highlights are correctly rendered at the center of the screen is wrong. They're completely overblown. Just look at the finger to the left of the barrel.

Absolutely, and I'm sure there's actual merit to this discussion. However, those wanting to argue against KZ3's choice of implementation haven't addressed one issue among many in a fair way, instead claiming bias, prejudice, or incompetence on the part of the opposition. AFAICS in this particular example the response from Laa-Yosh or his advocates should be, "yeah, okay, the clipping around the edges isn't a fault of the lighting engine. My bad on that one. But if you look at this other image or that one or this vid, the shortcomings of LDR in KZ3 are obvious, as shown by this artefact and that artefact." That, or an explanation of how the highest intensities in the image are clipped to 200 but lower intensities are rendered brighter. Clipping light sources as you describe is an example of LDR hitting its limits, and worth presenting as a compromise KZ3 made that other games haven't (although photographically correct if tone-mapping to a small selection of brightness like a camera, rather than trying to capture human perception), but selectively clipped highlights isn't.
The actual problem was that Laa-Yosh's observation was simply dismissed as a subjective preference instead of what it really is: an objective technical matter. That was his issue with the argument. There was the immediate "but C2 compromises more on framerate so KZ3 wins" spin as well.

Then of course there was fearsomepirate's misinformation about deferred shading preventing the use of HDR, even though I provided examples of other engines that support both. Of course, that was simply ignored.

Now, on the other side of the discussion, I'm still waiting for polycounts and SOURCES instead of simply "look at this image, KZ3 obviously pushes more polygons". How about you call some of that out instead of fixating on a single image?

==========================================================
EDIT:

What? Shortcomings in methods are developer decisions. Code doesn't write itself. Every line was consciously put there, and every method programmed was consciously chosen by someone for a reason. For example, sebbbi just mentioned his team went with a [0,2] color range (which is what KZ3 uses) in order to maintain 60 fps. It wasn't an accident or a mistake. Or do you think John Carmack meant to put FP16 RGB colors in Doom all along, and just accidentally programmed the whole thing to use 8-bit color?
What's your point? Of course the LDR implementation in the KZ engine is a developer decision. Does it have its shortcomings? YES.

Is anything on my list false? Or do you simply object to anyone stating facts that you wish were not true?
Mhh...

-No AA most of the time.
There's AA all the time, it just isn't very good.

Enemy models were pretty simple, and there was a pretty obvious tradeoff between number of mobs and scene complexity.
Simple in what way? What are these "obvious" tradeoffs you're talking about?

Dust/haze/fire/smoke pretty primitive most of the time.
Primitive? What does that even mean?

-Not much of the way in effects other than lighting.
Define "effects".

-Reflective/metallic surfaces look awful (i.e. the alien ships and alien armor).
Awful? Awesome technical term. And you're wrong BTW.

-Indoor areas didn't look very good, since a realistic light bounce has less dramatic results there, and the lack of everything else we've seen this gen is more apparent.
"Lack of everything else we've seen this gen". Another completely meaningless statement.

...and I never said that. I know some of our members speak English as a second language, but I don't think you're among them. What I have said consistently is that GG did not use a FP RGB space as a compromise in order to achieve other things that C2 doesn't (the converse is true; Crytek didn't do some things in Crysis 2 in order to achieve certain things that Killzone doesn't do). You've never actually offered any evidence or reasons why that isn't true; you've just mocked me as though I don't know what HDR is and have never programmed a physics simulation in my life.
I did offer evidence: the CG intros of both games. Both are HDR-rendered and look far better than the realtime game without compromising the aesthetics. Do you really think that if they had the opportunity to add HDR support to their engine they'd just prefer to use LDR? Get real. Their use of LDR IS a compromise.

More interesting is the fact that you're only interested in listing the shortcomings of C2. Not even once mentioning the slightest negative thing about KZ2/3. Only denying/minimizing them when presented to you. Pretty telling.
 
You can't make a blanket statement that game X isn't HDR - or is HDR. In some ways the term is meaningless without context. And that context is individual parts of the rendering pipeline of the game.

Problem is, I don't see 'HDR' as being a good term to use here. Because different stages and data in the pipeline have different data requirements - in terms of both range that data covers and the precision of the data. The two have a drastic impact on one another, but must be treated differently.

...

In general, most games share an essentially similar abstract rendering pipeline. How each element is generated, stored and used may differ (deferred, forward, pre-pass, etc) - but at the end of the day very similar data is required to generate the output image.

That data pipeline looks something like this:

[Image: pipe.png - diagram of the data pipeline stages described below]


So. Lots of little parts that add up to the whole. Let me run through them; note that most are optional, some can be combined, and some are intentionally left out or simplified for sanity's sake.

Geometry:
Things like normals, depth, etc.
For hopefully obvious reasons this data needs to be as precise as possible; ranges may be constrained, but position/depth may not be

Diffuse + Specular colours:
This is the diffuse and specular reflective colours of the surface. This data generally doesn't need to be very precise - and it physically makes no sense for it to go outside the [0,1] range. Represented in linear space, it is often simply stored as 8-bit gamma space - as this is one of the best tradeoffs for size and perceptual quality.

Emissive:
This is light emitted from a surface. This can be from a glowing surface, a lightmap or other baked lighting. Ideally precision should be as high as possible and the data range is arbitrary (hence RGBM 8-bit encoding is a very popular choice in many g-buffers - see the sketch after this list)

Light Accumulation:
Dynamic lighting is added together. Typically this ignores diffuse, etc. - the only important things are position/depth and normals. Output again is ideally as high a precision and as large a range as possible - hence this is often done in FP16 in a deferred renderer.

Composite:
Here, the results of light accumulation are combined with diffuse, specular and emissive, etc. Once again, FP16 is ideal.

Early Post FX:
This can sometimes include things like bloom, dof, warps, etc - things that benefit from high precision and range.

Tone Map, Gamma Correction, etc:
The important one. This is where the linear-space output of the render (composite, post, etc) is converted into a [0,1] range output preparing for display. This will often include gamma correction as a last step - and should include some form of tone mapping. Tone mapping is used to compress the image into a [0,1] displayable range - this can involve applying an exposure scale (eg eye adaptation) and oftentimes will compress the whites and blacks to boost displayable range without too much loss of detail (see the Uncharted presentations or this for examples)

Late Post + Display:
Some post FX can more easily be done with a fixed range (eg, the displayable range). This includes most complex forms of colour grading. Some games perform bloom or effects like dof and motion blur here.

Green represents data that is ideally stored in high range - blue is [0,1]. Note that precision of this data may vary wildly.
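
Since RGBM keeps coming up, here's what the encode/decode generally looks like; this is a generic sketch - the range constant and rounding details vary per engine:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Generic RGBM: colour in RGB, a shared intensity multiplier in alpha,
// so an 8-bit target carries values well above 1.0.
const float kMaxRange = 6.0f;   // assumed maximum encodable intensity

void rgbmEncode(const float rgb[3], float out[4]) {
    float m = std::max({rgb[0], rgb[1], rgb[2]}) / kMaxRange;
    m = std::clamp(m, 1e-6f, 1.0f);
    m = std::ceil(m * 255.0f) / 255.0f;     // round the multiplier up
    for (int i = 0; i < 3; ++i)
        out[i] = rgb[i] / (m * kMaxRange);  // colour now fits in [0,1]
    out[3] = m;                             // shared multiplier in alpha
}

void rgbmDecode(const float in[4], float rgb[3]) {
    for (int i = 0; i < 3; ++i)
        rgb[i] = in[i] * in[3] * kMaxRange;
}

int main() {
    float hdr[3] = {3.0f, 1.5f, 0.25f}, enc[4], dec[3];
    rgbmEncode(hdr, enc);
    rgbmDecode(enc, dec);
    std::printf("%.2f %.2f %.2f\n", dec[0], dec[1], dec[2]); // ~3 1.5 0.25
    return 0;
}
```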


Where a game places each effect in the pipeline will vary game by game - it's all based on tradeoffs made for memory, performance and meeting project goals. For example, roughly:

[Image: pipe2.png - rough examples of where different games place each stage]


So why say all this?
Because I'm trying to show that you cannot easily label a game as HDR.
For example, BF3 and KZ3 share similar g-buffer formats. They store emissive light in RGBM 8bit format. Technically this is pretty high dynamic range with moderate precision. They accumulate dynamic light and composite (to the best of my knowledge) in FP16.

What people are seeing in the KZ3 screen shots is (to my eye) a limitation of the colour grading and some of the extra post FX (I believe KZ3 applies the vignette effect post-tonemap in 8bit). This isn't an indication that when the sky was rendered it was limited to 8bit, it indicates when colourization and certain post FX occurred, it was limited to 8bit (to the best of my knowledge).

Technically, the best solution is a forward renderer that does everything in one go, in 32bpp on the shader hardware. :mrgreen:


...

As for the whole linear space / gamma space thing - I think it's important to realise that we don't perceive light in a linear way. Gamma space represents how we perceive light - and is roughly a power of 2.4. What this means is that if we perceive that the brightness of something has doubled, it is actually emitting approximately 5.3x more light. Hence the very obvious realization that when mathematically calculating how light is accumulated, it must be done in linear space - then later converted into gamma space (for human perception). This also explains why it makes more sense to store most 8-bit images in gamma space - as you get more perceptual precision.
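
For reference, the exact sRGB transfer functions look like this (standard formulas; the small linear toe segment is why I say "roughly" a power of 2.4):

```cpp
#include <cmath>
#include <cstdio>

// Standard sRGB transfer functions: a small linear toe plus a
// 2.4-exponent curve (a plain 2.2 power is a common approximation).
float srgbToLinear(float s) {
    return s <= 0.04045f ? s / 12.92f
                         : std::pow((s + 0.055f) / 1.055f, 2.4f);
}
float linearToSrgb(float v) {
    return v <= 0.0031308f ? 12.92f * v
                           : 1.055f * std::pow(v, 1.0f / 2.4f) - 0.055f;
}

int main() {
    // Doubling perceived brightness from 0.25 to 0.5 in gamma space
    // takes roughly 4-5x the emitted light, depending on which curve
    // you use (the pure power-2.4 approximation gives the ~5.3x figure).
    std::printf("%.4f -> %.4f linear\n",
                srgbToLinear(0.25f), srgbToLinear(0.5f));
    return 0;
}
```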
 
Thanks for the post, Graham.

Edit:
I looked back at that slide image, which I thought was for Killzone 3 but is actually for Killzone 2. So it looks like light accumulation in Killzone 2 was done LDR in gamma space, but it seems that Killzone 3 switched to Sony's "internal lighting solution", whatever that is, likely some form of HDR for light accumulation. I guess FP16, or that LogLuv format.
 